Sample records for comparative model analysis

  1. An Analysis Technique/Automated Tool for Comparing and Tracking Analysis Modes of Different Finite Element Models

    NASA Technical Reports Server (NTRS)

    Towner, Robert L.; Band, Jonathan L.

    2012-01-01

    An analysis technique was developed to compare and track mode shapes for different Finite Element Models. The technique may be applied to a variety of structural dynamics analyses, including model reduction validation (comparing unreduced and reduced models), mode tracking for various parametric analyses (e.g., launch vehicle model dispersion analysis to identify sensitivities to modal gain for Guidance, Navigation, and Control), comparing models of different mesh fidelity (e.g., a coarse model for a preliminary analysis compared to a higher-fidelity model for a detailed analysis), and mode tracking for a structure with properties that change over time (e.g., a launch vehicle from liftoff through end-of-burn, with propellant being expended during the flight). Mode shapes for different models are compared and tracked using several numerical indicators, including traditional Cross-Orthogonality and Modal Assurance Criteria approaches, as well as numerical indicators obtained by comparing modal strain energy and kinetic energy distributions. This analysis technique has been used to reliably identify correlated mode shapes for complex Finite Element Models that would otherwise be difficult to compare using traditional techniques. This improved approach also utilizes an adaptive mode tracking algorithm that allows for automated tracking when working with complex models and/or comparing a large group of models.
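
    The Modal Assurance Criterion mentioned above has a standard closed form, so the mode-pairing step can be sketched compactly. A minimal illustration, assuming each model's mode shapes are available as matrix columns on a common degree-of-freedom set (the actual tool also uses cross-orthogonality and strain/kinetic energy indicators, which are not reproduced here):

    ```python
    import numpy as np

    def mac_matrix(phi_a, phi_b):
        """Modal Assurance Criterion between two mode-shape sets.

        MAC(i, j) = |phi_a_i . phi_b_j|^2 /
                    ((phi_a_i . phi_a_i) * (phi_b_j . phi_b_j))
        """
        num = np.abs(phi_a.T @ phi_b) ** 2
        den = np.outer((phi_a**2).sum(axis=0), (phi_b**2).sum(axis=0))
        return num / den

    # Toy example: model B is a perturbed copy of model A.
    rng = np.random.default_rng(0)
    phi_a = rng.standard_normal((100, 5))                 # 100 DOFs, 5 modes
    phi_b = phi_a + 0.05 * rng.standard_normal((100, 5))
    mac = mac_matrix(phi_a, phi_b)
    pairs = mac.argmax(axis=1)                            # tracked mode pairs
    print(pairs, mac.max(axis=1).round(3))
    ```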

  2. Quantitative structure-activity relationship of organosulphur compounds as soybean 15-lipoxygenase inhibitors using CoMFA and CoMSIA.

    PubMed

    Caballero, Julio; Fernández, Michael; Coll, Deysma

    2010-12-01

    Three-dimensional quantitative structure-activity relationship studies were carried out on a series of 28 organosulphur compounds as 15-lipoxygenase inhibitors using comparative molecular field analysis and comparative molecular similarity indices analysis. Quantitative information on structure-activity relationships is provided for further rational development and direction of selective synthesis. All models were built on a training set of 22 compounds. The best comparative molecular field analysis model included only the steric field and had a good Q² = 0.789. Comparative molecular similarity indices analysis outperformed comparative molecular field analysis: the best comparative molecular similarity indices analysis model also included only the steric field and had a Q² = 0.894. In addition, this model adequately predicted the compounds contained in the test set. Furthermore, plots of the steric comparative molecular similarity indices analysis field allowed conclusions to be drawn for the choice of suitable inhibitors. In this sense, our model should prove useful in future 15-lipoxygenase inhibitor design studies. © 2010 John Wiley & Sons A/S.
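
    The Q² statistics quoted here are cross-validated correlation coefficients, conventionally Q² = 1 − PRESS/SS via leave-one-out. A minimal sketch with a generic PLS regressor standing in for the CoMFA/CoMSIA field models (the descriptors below are random placeholders, not molecular fields):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut

    def loo_q2(X, y, n_components=2):
        """Leave-one-out cross-validated Q^2 = 1 - PRESS / SS_total."""
        press = 0.0
        for train, test in LeaveOneOut().split(X):
            model = PLSRegression(n_components=n_components)
            model.fit(X[train], y[train])
            press += ((y[test] - model.predict(X[test]).ravel()) ** 2).sum()
        return 1.0 - press / ((y - y.mean()) ** 2).sum()

    rng = np.random.default_rng(1)
    X = rng.standard_normal((22, 50))   # 22 training compounds, 50 descriptors
    y = X[:, :3] @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.standard_normal(22)
    print(round(loo_q2(X, y), 3))
    ```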

  3. Simulation skill of APCC set of global climate models for Asian summer monsoon rainfall variability

    NASA Astrophysics Data System (ADS)

    Singh, U. K.; Singh, G. P.; Singh, Vikas

    2015-04-01

    The performance of 11 Asia-Pacific Economic Cooperation Climate Center (APCC) global climate models (both coupled and uncoupled) in simulating seasonal summer (June-August) monsoon rainfall variability over Asia (especially over India and East Asia) has been evaluated in detail using hind-cast data (3 months in advance) generated by APCC, which provides regional climate information products and services based on multi-model ensemble dynamical seasonal prediction systems. The skill of each global climate model over Asia was tested separately in detail for a period of 21 years (1983-2003), and the simulated Asian summer monsoon rainfall (ASMR) was verified using various statistical measures for the Indian and East Asian land masses separately. The analysis found a large variation in spatial ASMR simulated by the uncoupled models compared to the coupled models (such as the Predictive Ocean Atmosphere Model for Australia and those of the National Centers for Environmental Prediction and the Japan Meteorological Agency). The simulated ASMR in the coupled models was closer to the Climate Prediction Center Merged Analysis of Precipitation (CMAP) than that of the uncoupled models, although the amount of ASMR was underestimated in both. The analysis also found a high spread in simulated ASMR among the ensemble members, suggesting that model performance is highly dependent on initial conditions. The correlation analysis between sea surface temperature (SST) and ASMR shows that the coupled models are more strongly associated with ASMR than the uncoupled models, suggesting that air-sea interaction is well captured in the coupled models. The analysis of rainfall using various statistical measures suggests that the multi-model ensemble (MME) performed better than the individual models, and a separate analysis indicates that treating the Indian and East Asian land masses individually is more useful than treating Asian monsoon rainfall as a whole. The results of various statistical measures (skill of the multi-model ensemble, large spread among the ensemble members of individual models, strong teleconnection with SST in the correlation analysis, coefficient of variation, inter-annual variability, Taylor diagram analysis, etc.) suggest that there is a greater need to improve the coupled models than the uncoupled models for the development of a better dynamical seasonal forecast system.
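
    The multi-model-ensemble comparison described here reduces, in outline, to averaging the member hindcasts and scoring each candidate against observations. A sketch with synthetic rainfall anomalies (illustrative data only, not the APCC hindcasts or CMAP):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    years = 21                                # 1983-2003
    obs = rng.standard_normal(years)          # stand-in for observed anomalies
    models = obs + 0.8 * rng.standard_normal((11, years))   # 11 noisy members

    def corr(pred, ref):
        return np.corrcoef(pred, ref)[0, 1]

    mme = models.mean(axis=0)                 # multi-model ensemble mean
    best_single = max(corr(m, obs) for m in models)
    print("best single-model corr:", round(best_single, 2))
    print("MME corr:", round(corr(mme, obs), 2))   # usually the higher one
    ```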

  4. Comparing the High School English Curriculum in Turkey through Multi-Analysis

    ERIC Educational Resources Information Center

    Batdi, Veli

    2017-01-01

    This study aimed to compare the High School English Curriculum (HSEC) in accordance with Stufflebeam's context, input, process and product (CIPP) model through multi-analysis. The research includes both quantitative and qualitative aspects. A descriptive analysis was conducted using the Rasch Measurement Model; the SPSS program for the quantitative…

  5. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model.

    PubMed

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT) derived images relative to plaster models for the assessment of mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were obtained from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Statistically significant results were obtained on comparing data between the CBCT-derived images and the plaster models; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster model was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis.
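
    The Tanaka-Johnston analysis applied to both the CBCT images and the plaster models is a simple linear prediction from the erupted mandibular incisors; the 10.5/11.0 mm constants below are the standard published values, and the input can come from either measurement source:

    ```python
    def tanaka_johnston(sum_mand_incisors_mm):
        """Predict the unerupted canine + premolar segment width (one side).

        mandibular segment = (sum of 4 mandibular incisors) / 2 + 10.5 mm
        maxillary segment  = (sum of 4 mandibular incisors) / 2 + 11.0 mm
        """
        half = sum_mand_incisors_mm / 2.0
        return {"mandibular": half + 10.5, "maxillary": half + 11.0}

    # e.g., incisor widths summed from a plaster model (or a CBCT image)
    print(tanaka_johnston(21.4))   # {'mandibular': 21.2, 'maxillary': 21.7}
    ```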

  6. Beyond the scope of Free-Wilson analysis: building interpretable QSAR models with machine learning algorithms.

    PubMed

    Chen, Hongming; Carlsson, Lars; Eriksson, Mats; Varkonyi, Peter; Norinder, Ulf; Nilsson, Ingemar

    2013-06-24

    A novel methodology was developed to build Free-Wilson-like local QSAR models by combining R-group signatures and the SVM algorithm. Unlike Free-Wilson analysis, this method is able to make predictions for compounds with R-groups not present in the training set. Eleven public data sets were chosen as test cases for comparing the performance of the new method with several other traditional modeling strategies, including Free-Wilson analysis. Our results show that the R-group signature SVM models achieve better prediction accuracy than Free-Wilson analysis in general. Moreover, the predictions of the R-group signature models are also comparable to those of models using ECFP6 fingerprints and whole-compound signatures. Most importantly, R-group contributions to the SVM model can be obtained by calculating the gradient with respect to the R-group signatures. For most of the studied data sets, these contributions show a significant correlation with those of a corresponding Free-Wilson analysis. These results suggest that the R-group contribution can be used to interpret bioactivity data and highlight that the R-group signature-based SVM modeling method is as interpretable as Free-Wilson analysis. Hence, the signature SVM model can be a useful modeling tool for any drug discovery project.
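
    For contrast with the signature-SVM approach, classical Free-Wilson analysis is just a linear regression on R-group indicator variables, which is exactly why it cannot extrapolate to unseen R-groups. A minimal sketch with hypothetical substituents and activities:

    ```python
    import numpy as np

    # Hypothetical two-position series with measured pIC50 values.
    r1 = ["H", "Cl", "Me", "Cl", "H", "Me"]
    r2 = ["OMe", "OMe", "H", "H", "NH2", "NH2"]
    activity = np.array([5.1, 6.0, 5.4, 5.9, 4.8, 5.2])

    def one_hot(groups):
        labels = sorted(set(groups))
        X = np.array([[float(g == lab) for lab in labels] for g in groups])
        return X, labels

    X1, lab1 = one_hot(r1)
    X2, lab2 = one_hot(r2)
    X = np.hstack([np.ones((len(activity), 1)), X1, X2])  # mean + indicators
    # Rank-deficient design; lstsq returns the minimum-norm solution.
    coef, *_ = np.linalg.lstsq(X, activity, rcond=None)

    # Per-substituent additive contributions; combinations of *seen* R-groups
    # can be predicted, unseen R-groups cannot -- the gap the R-group
    # signature method above closes.
    print(dict(zip(["mu"] + lab1 + lab2, coef.round(2))))
    ```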

  7. Comparative analysis of zonal systems for macro-level crash modeling.

    PubMed

    Cai, Qing; Abdel-Aty, Mohamed; Lee, Jaeyoung; Eluru, Naveen

    2017-06-01

    Macro-level traffic safety analysis has been undertaken at different spatial configurations. However, clear guidelines for selecting an appropriate zonal system for safety analysis are unavailable. In this study, a comparative analysis was conducted to determine the optimal zonal system for macroscopic crash modeling, considering census tracts (CTs), state-wide traffic analysis zones (STAZs), and a newly developed traffic-related zone system labeled traffic analysis districts (TADs). Poisson lognormal models for three crash types (i.e., total, severe, and non-motorized mode crashes) are developed based on the three zonal systems, without and with consideration of spatial autocorrelation. The study proposes a method to compare the modeling performance of the three types of geographic units at different spatial configurations through a grid-based framework. Specifically, the study region is partitioned into grids of various sizes, and the model prediction accuracy of the various macro models is considered within these grids. The model comparison results for all crash types indicate that the models based on TADs consistently offer better performance than the others. In addition, the models considering spatial autocorrelation outperform the ones that do not consider it. Based on the modeling results and the motivation for developing the different zonal systems, it is recommended that CTs be used for socio-demographic data collection, TAZs for transportation demand forecasting, and TADs for transportation safety planning. The findings from this study can help practitioners select appropriate zonal systems for traffic crash modeling, which supports the development of more effective policies to enhance transportation safety. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.
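
    The Poisson lognormal specification used for these crash models has no closed-form marginal likelihood, but it is easy to evaluate numerically; a sketch via Gauss-Hermite quadrature (the covariate and coefficient values are purely illustrative):

    ```python
    import numpy as np
    from scipy.stats import poisson

    def pois_lognormal_pmf(y, mu, sigma, n_quad=40):
        """Marginal pmf of y | eps ~ Poisson(exp(mu + eps)), eps ~ N(0, sigma^2),
        with eps integrated out by Gauss-Hermite quadrature."""
        t, w = np.polynomial.hermite.hermgauss(n_quad)
        lam = np.exp(mu + sigma * np.sqrt(2.0) * t)
        return np.sum(w * poisson.pmf(y, lam)) / np.sqrt(np.pi)

    # Zone-level expected log crash rate from an exposure covariate
    beta0, beta1 = -1.0, 0.8
    log_vmt = 2.3                       # e.g., log vehicle-miles traveled
    mu = beta0 + beta1 * log_vmt
    print([round(pois_lognormal_pmf(k, mu, 0.5), 4) for k in range(6)])
    ```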

  8. How to Construct More Accurate Student Models: Comparing and Optimizing Knowledge Tracing and Performance Factor Analysis

    ERIC Educational Resources Information Center

    Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.

    2011-01-01

    Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…
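
    Although the abstract is truncated, the Knowledge Tracing side of the comparison has standard Bayesian update equations; a minimal sketch with illustrative slip/guess/learn parameters (fitting these well is precisely what the paper investigates):

    ```python
    def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
        """One Bayesian Knowledge Tracing step for a single skill:
        posterior P(known | response), then mix in the learning transition."""
        if correct:
            evidence = p_know * (1 - slip) + (1 - p_know) * guess
            posterior = p_know * (1 - slip) / evidence
        else:
            evidence = p_know * slip + (1 - p_know) * (1 - guess)
            posterior = p_know * slip / evidence
        return posterior + (1 - posterior) * learn

    p = 0.3                        # prior P(L0)
    for response in [1, 1, 0, 1]:
        p = bkt_update(p, response == 1)
        print(round(p, 3))
    ```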

  9. 2D- and 3D-quantitative structure-activity relationship studies for a series of phenazine N,N'-dioxide as antitumour agents.

    PubMed

    Cunha, Jonathan Da; Lavaggi, María Laura; Abasolo, María Inés; Cerecetto, Hugo; González, Mercedes

    2011-12-01

    Hypoxic regions of tumours are associated with increased resistance to radiation and chemotherapy. Nevertheless, hypoxia has been used as a tool for the specific activation of some antitumour prodrugs, named bioreductive agents. Phenazine dioxides are an example of such bioreductive prodrugs. Our 2D-quantitative structure-activity relationship studies established that the electronic and lipophilic descriptors of the phenazine dioxides are related to the survival fraction in oxia or in hypoxia. Additionally, statistically significant models, derived by partial least squares, were obtained between survival fraction in oxia and the comparative molecular field analysis standard model (r² = 0.755, q² = 0.505 and F = 26.70) or comparative molecular similarity indices analysis with combined steric and electrostatic fields (r² = 0.757, q² = 0.527 and F = 14.93), and between survival fraction in hypoxia and the comparative molecular field analysis standard model (r² = 0.736, q² = 0.521 and F = 18.63) or comparative molecular similarity indices analysis with the hydrogen bond acceptor field (r² = 0.858, q² = 0.737 and F = 27.19). Categorical classification was used for the biological parameter of selective cytotoxicity, also yielding good models, derived by soft independent modelling of class analogy, with both the comparative molecular field analysis standard model (96% overall classification accuracy) and the comparative molecular similarity indices analysis steric field (92% overall classification accuracy). The 2D- and 3D-quantitative structure-activity relationship models provided important insights into the chemical and structural basis involved in the molecular recognition process of these phenazines as bioreductive agents and should be useful for the design of new structurally related analogues with improved potency. © 2011 John Wiley & Sons A/S.

  10. Comparative and Predictive Multimedia Assessments Using Monte Carlo Uncertainty Analyses

    NASA Astrophysics Data System (ADS)

    Whelan, G.

    2002-05-01

    Multiple-pathway frameworks (sometimes referred to as multimedia models) provide a platform for combining medium-specific environmental models and databases, such that they can be utilized in a more holistic assessment of contaminant fate and transport in the environment. These frameworks provide a relatively seamless transfer of information from one model to the next and from databases to models. Within these frameworks, multiple models are linked, so that models consume information from upstream models and produce information to be consumed by downstream models. The Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) is an example, which allows users to link their models to other models and databases. FRAMES is an icon-driven, site-layout platform with an open-architecture, object-oriented design that interacts with environmental databases; helps the user construct a real-world-based Conceptual Site Model; allows the user to choose the most appropriate models to solve simulation requirements; solves the standard risk paradigm of release, transport and fate, and exposure/risk assessment for people and ecology; and provides graphical packages for analyzing results. FRAMES is specifically designed to allow users to link their own models into a system that contains models developed by others. This paper presents the use of FRAMES to evaluate potential human health exposures, using real site data and realistic assumptions, from sources through the vadose and saturated zones to exposure and risk assessment at three real-world sites, using the Multimedia Environmental Pollutant Assessment System (MEPAS), a multimedia model contained within FRAMES. These real-world examples use predictive and comparative approaches coupled with Monte Carlo analysis. A predictive analysis is one in which models are calibrated to monitored site data prior to the assessment; a comparative analysis is one in which models are not calibrated but are based solely on literature or judgment, and is usually used to compare alternatives. In many cases, a combination is employed, where the model is calibrated to a portion of the data (e.g., to determine hydrodynamics) and then used to compare alternatives. Three subsurface-based multimedia examples are presented, increasing in complexity. The first presents a predictive, deterministic assessment; the second, a predictive and comparative Monte Carlo analysis; and the third, a comparative, multi-dimensional Monte Carlo analysis. Endpoints are typically presented in terms of concentration, hazard, risk, and dose; because the vadose zone model typically represents a connection between a source and the aquifer, it does not generally represent the final medium in a multimedia risk assessment.
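
    The Monte Carlo layer described here can be illustrated independently of FRAMES/MEPAS: sample uncertain inputs, push each sample through the deterministic model chain, and summarize the endpoint distribution. A toy sketch in which a one-line dilution/decay expression stands in for the real multimedia chain (all distributions below are assumptions for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 10_000

    # Uncertain inputs (illustrative distributions only)
    source_conc = rng.lognormal(np.log(100.0), 0.5, n)        # mg/L at source
    dilution    = rng.uniform(10.0, 50.0, n)                  # net dilution factor
    decay       = np.clip(rng.normal(0.3, 0.05, n), 0, None)  # 1/yr
    travel_time = rng.triangular(1.0, 3.0, 8.0, n)            # yr in vadose zone

    # Stand-in "source -> vadose zone -> aquifer -> well" chain
    well_conc = source_conc / dilution * np.exp(-decay * travel_time)

    # Endpoint summary, as a comparative assessment would report it
    print("median:", round(np.median(well_conc), 3),
          " 95th pct:", round(np.percentile(well_conc, 95), 3))
    ```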

  11. MultiMetEval: Comparative and Multi-Objective Analysis of Genome-Scale Metabolic Models

    PubMed Central

    Gevorgyan, Albert; Kierzek, Andrzej M.; Breitling, Rainer; Takano, Eriko

    2012-01-01

    Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the context of multiple cellular objectives. Here, we present the user-friendly software framework Multi-Metabolic Evaluator (MultiMetEval), built upon SurreyFBA, which allows the user to compose collections of metabolic models that together can be subjected to flux balance analysis. Additionally, MultiMetEval implements functionalities for multi-objective analysis by calculating the Pareto front between two cellular objectives. Using a previously generated dataset of 38 actinobacterial genome-scale metabolic models, we show how these approaches can lead to exciting novel insights. Firstly, after incorporating several pathways for the biosynthesis of natural products into each of these models, comparative flux balance analysis predicted that species like Streptomyces that harbour the highest diversity of secondary metabolite biosynthetic gene clusters in their genomes do not necessarily have the metabolic network topology most suitable for compound overproduction. Secondly, multi-objective analysis of biomass production and natural product biosynthesis in these actinobacteria shows that the well-studied occurrence of discrete metabolic switches during the change of cellular objectives is inherent to their metabolic network architecture. Comparative and multi-objective modelling can lead to insights that could not be obtained by normal flux balance analyses. MultiMetEval provides a powerful platform that makes these analyses straightforward for biologists. Sources and binaries of MultiMetEval are freely available from https://github.com/PiotrZakrzewski/MetEval/downloads. PMID:23272111
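
    The Pareto-front calculation MultiMetEval performs between two cellular objectives can be sketched with an epsilon-constraint sweep over a deliberately tiny flux balance model (one internal metabolite; not an actinobacterial network, and SurreyFBA is not used here):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Columns: uptake (-> A), biomass (A ->), product (A ->)
    S = np.array([[1.0, -1.0, -1.0]])           # steady state: S @ v = 0
    bounds = [(0, 10), (0, None), (0, None)]    # uptake capped at 10
    BIO, PROD = 1, 2

    def max_flux(obj, A_ub=None, b_ub=None):
        c = np.zeros(3)
        c[obj] = -1.0                            # linprog minimizes
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=S, b_eq=[0.0],
                      bounds=bounds, method="highs")
        return -res.fun

    # Epsilon-constraint: require a minimum biomass flux, maximize product.
    bio_max = max_flux(BIO)
    for frac in np.linspace(0.0, 1.0, 6):
        A = np.array([[0.0, -1.0, 0.0]])         # -v_bio <= -frac * bio_max
        b = [-frac * bio_max]
        print(f"biomass >= {frac * bio_max:4.1f} -> product {max_flux(PROD, A, b):4.1f}")
    ```

    In this toy network the front is simply the line v_bio + v_prod = 10; genome-scale models typically yield the piecewise-linear fronts with the discrete metabolic switches described above.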

  12. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model

    PubMed Central

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    Aims and Objective: The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT) derived images relative to plaster models for the assessment of mixed dentition analysis. Materials and Methods: Thirty CBCT-derived images and thirty plaster models were obtained from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Results: Statistically significant results were obtained on comparing data between the CBCT-derived images and the plaster models; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster model was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. Conclusion: CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis. PMID:28852639

  13. The role of empathy and emotional intelligence in nurses' communication attitudes using regression models and fuzzy-set qualitative comparative analysis models.

    PubMed

    Giménez-Espert, María Del Carmen; Prado-Gascó, Vicente Javier

    2018-03-01

    To analyse the link between empathy and emotional intelligence as predictors of nurses' attitudes towards communication, while comparing the contribution of emotional aspects and attitudinal elements to potential behaviour. Nurses' attitudes towards communication, empathy and emotional intelligence are key skills for nurses involved in patient care. There are currently no studies analysing this link, and its investigation is needed because attitudes may influence communication behaviours. Correlational study. To attain this goal, self-reported instruments (nurses' attitudes towards communication, trait emotional intelligence (Trait Emotional Meta-Mood Scale), and the Jefferson Scale of Nursing Empathy) were collected from 460 nurses between September 2015 and February 2016. Two different analytical methodologies were used: traditional regression models and fuzzy-set qualitative comparative analysis models. The results of the regression model suggest that the cognitive dimension of attitude is a significant and positive predictor of the behavioural dimension. The perspective-taking dimension of empathy and the emotional-clarity dimension of emotional intelligence were significant positive predictors of the dimensions of attitudes towards communication, except for the affective dimension (for which the association was negative). The results of the fuzzy-set qualitative comparative analysis models confirm that the combination of high levels of the cognitive dimension of attitudes, perspective-taking and emotional clarity explains high levels of the behavioural dimension of attitude. Empathy and emotional intelligence are predictors of nurses' attitudes towards communication, and the cognitive dimension of attitude is a good predictor of the behavioural dimension of attitudes towards communication in both the regression models and the fuzzy-set qualitative comparative analysis. In general, the fuzzy-set qualitative comparative analysis models appear to be better predictors than the regression models. These findings can be used to evaluate current practices, establish intervention strategies and evaluate their effectiveness. The evaluation of these variables and their relationships is important in creating a satisfied and sustainable workforce and improving quality of care and patient health. © 2018 John Wiley & Sons Ltd.
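
    The fuzzy-set QCA step rests on two standard set-theoretic measures, consistency and coverage; a minimal sketch with synthetic membership scores (illustrative only, not the study's data):

    ```python
    import numpy as np

    def consistency(x, y):
        """Consistency of 'X is sufficient for Y': sum(min(x,y)) / sum(x)."""
        return np.minimum(x, y).sum() / x.sum()

    def coverage(x, y):
        """Coverage of Y by X: sum(min(x,y)) / sum(y)."""
        return np.minimum(x, y).sum() / y.sum()

    rng = np.random.default_rng(4)
    n = 460
    cognitive   = rng.beta(2, 2, n)        # fuzzy memberships in [0, 1]
    perspective = rng.beta(2, 2, n)
    clarity     = rng.beta(2, 2, n)
    behaviour   = np.clip(0.6 * np.minimum.reduce([cognitive, perspective, clarity])
                          + 0.4 * rng.beta(2, 2, n), 0.0, 1.0)

    # Conjunction (logical AND) of fuzzy conditions = element-wise minimum
    combo = np.minimum.reduce([cognitive, perspective, clarity])
    print("consistency:", round(consistency(combo, behaviour), 2),
          "coverage:", round(coverage(combo, behaviour), 2))
    ```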

  14. Comparison of 3D quantitative structure-activity relationship methods: Analysis of the in vitro antimalarial activity of 154 artemisinin analogues by hypothetical active-site lattice and comparative molecular field analysis

    NASA Astrophysics Data System (ADS)

    Woolfrey, John R.; Avery, Mitchell A.; Doweyko, Arthur M.

    1998-03-01

    Two three-dimensional quantitative structure-activity relationship (3D-QSAR) methods, comparative molecular field analysis (CoMFA) and hypothetical active site lattice (HASL), were compared with respect to the analysis of a training set of 154 artemisinin analogues. Five models were created, including a complete HASL and two trimmed versions, as well as two CoMFA models (leave-one-out standard CoMFA and the guided-region selection protocol). Similar r² and q² values were obtained by each method, although some striking differences existed between CoMFA contour maps and the HASL output. Each of the four predictive models exhibited a similar ability to predict the activity of a test set of 23 artemisinin analogues, although some differences were noted as to which compounds were described well by either model.

  15. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    PubMed

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
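
    The "model-free" branch of this comparison rests on numerical deconvolution of the tissue curve by the local arterial input function; a generic truncated-SVD sketch (not the QUASAR pipeline itself, which adds flow-suppression combinations and more careful regularization):

    ```python
    import numpy as np

    def svd_deconvolve(aif, tissue, dt, rcond=0.1):
        """Estimate perfusion from tissue = CBF * (aif (*) R) by inverting the
        lower-triangular convolution matrix of the AIF with a truncated SVD."""
        n = len(aif)
        A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                           for i in range(n)])
        flow_residue = np.linalg.pinv(A, rcond=rcond) @ tissue  # = CBF * R(t)
        return flow_residue.max()   # CBF, since R(0) = 1 by convention

    dt = 0.25
    t = np.arange(0, 10, dt)
    aif = t * np.exp(-t)                     # gamma-variate-like input
    R = np.exp(-t / 1.5)                     # true residue function
    tissue = 0.8 * dt * np.convolve(aif, R)[:len(t)]   # true CBF = 0.8
    # Noiseless demo, so almost no truncation is needed; with noisy data
    # an rcond around 0.1-0.2 regularizes the inversion.
    print(round(svd_deconvolve(aif, tissue, dt, rcond=1e-6), 2))
    ```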

  16. 3D-quantitative structure-activity relationship studies on benzothiadiazepine hydroxamates as inhibitors of tumor necrosis factor-alpha converting enzyme.

    PubMed

    Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram

    2008-04-01

    A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity was used to compare the quality and predictive power of 3D-quantitative structure-activity relationship, comparative molecular field analysis, and comparative molecular similarity indices models for atom-based, centroid/atom-based, database, and docked conformer-based alignments. Removal of two outliers from the initial training set of molecules improved the predictivity of the models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set, with cross-validated r² (q²) = 0.510, non-cross-validated r² = 0.972, standard error of estimate (s) = 0.098, and F = 215.44, and the optimal comparative molecular similarity indices model, with cross-validated r² (q²) = 0.556, non-cross-validated r² = 0.946, standard error of estimate (s) = 0.163, and F = 99.785. These models also showed the best test set prediction for six compounds, with predictive r² values of 0.460 and 0.535, respectively. The contour maps obtained from the 3D-quantitative structure-activity relationship studies were appraised for activity trends for the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity compared with that of the comparative molecular field analysis models. The data generated from the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.

  17. Realism of Indian Summer Monsoon Simulation in a Quarter Degree Global Climate Model

    NASA Astrophysics Data System (ADS)

    Salunke, P.; Mishra, S. K.; Sahany, S.; Gupta, K.

    2017-12-01

    This study assesses the fidelity of Indian Summer Monsoon (ISM) simulations using a global model at an ultra-high horizontal resolution (UHR) of 0.25°. The model used was the atmospheric component of the Community Earth System Model version 1.2.0 (CESM 1.2.0) developed at the National Center for Atmospheric Research (NCAR). Precipitation and temperature over the Indian region were analyzed for a wide range of space and time scales to evaluate the fidelity of the model under UHR, with special emphasis on the ISM simulations during the period of June-through-September (JJAS). Comparing the UHR simulations with observed data from the India Meteorological Department (IMD) over the Indian land, it was found that 0.25° resolution significantly improved spatial rainfall patterns over many regions, including the Western Ghats and the South-Eastern peninsula as compared to the standard model resolution. Convective and large-scale rainfall components were analyzed using the European Centre for Medium Range Weather Forecast (ECMWF) Re-Analysis (ERA)-Interim (ERA-I) data and it was found that at 0.25° resolution, there was an overall increase in the large-scale component and an associated decrease in the convective component of rainfall as compared to the standard model resolution. Analysis of the diurnal cycle of rainfall suggests a significant improvement in the phase characteristics simulated by the UHR model as compared to the standard model resolution. Analysis of the annual cycle of rainfall, however, failed to show any significant improvement in the UHR model as compared to the standard version. Surface temperature analysis showed small improvements in the UHR model simulations as compared to the standard version. Thus, one may conclude that there are some significant improvements in the ISM simulations using a 0.25° global model, although there is still plenty of scope for further improvement in certain aspects of the annual cycle of rainfall.

  18. Entrance and exit region friction factor models for annular seal analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Elrod, David Alan

    1988-01-01

    The Mach number definition and boundary conditions in Nelson's nominally-centered, annular gas seal analysis are revised. A method is described for determining the wall shear stress characteristics of an annular gas seal experimentally. Two friction factor models are developed for annular seal analysis; one model is based on flat-plate flow theory; the other uses empirical entrance and exit region friction factors. The friction factor predictions of the models are compared to experimental results. Each friction model is used in an annular gas seal analysis. The seal characteristics predicted by the two seal analyses are compared to experimental results and to the predictions of Nelson's analysis. The comparisons are for smooth-rotor seals with smooth and honeycomb stators. The comparisons show that the analysis which uses empirical entrance and exit region shear stress models predicts the static and stability characteristics of annular gas seals better than the other analyses. The analyses predict direct stiffness poorly.

  19. Systems thinking, the Swiss Cheese Model and accident analysis: a comparative systemic analysis of the Grayrigg train derailment using the ATSB, AcciMap and STAMP models.

    PubMed

    Underwood, Peter; Waterson, Patrick

    2014-07-01

    The Swiss Cheese Model (SCM) is the most popular accident causation model and is widely used throughout various industries. A debate exists in the research literature over whether the SCM remains a viable tool for accident analysis. Critics of the model suggest that it provides a sequential, oversimplified view of accidents. Conversely, proponents suggest that it embodies the concepts of systems theory, as per the contemporary systemic analysis techniques. The aim of this paper was to consider whether the SCM can provide a systems thinking approach and remain a viable option for accident analysis. To achieve this, the train derailment at Grayrigg was analysed with an SCM-based model (the ATSB accident investigation model) and two systemic accident analysis methods (AcciMap and STAMP). The analysis outputs and usage of the techniques were compared. The findings of the study showed that each model applied the systems thinking approach. However, the ATSB model and AcciMap graphically presented their findings in a more succinct manner, whereas STAMP more clearly embodied the concepts of systems theory. The study suggests that, whilst the selection of an analysis method is subject to trade-offs that practitioners and researchers must make, the SCM remains a viable model for accident analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships (MCBIOS)

    EPA Science Inventory

    Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships. Jie Liu (1,2), Richard Judson (1), Matthew T. Martin (1), Huixiao Hong (3), Imran Shah (1). (1) National Center for Computational Toxicology (NCCT), US EPA, RTP, NC...

  1. Case Problems for Problem-Based Pedagogical Approaches: A Comparative Analysis

    ERIC Educational Resources Information Center

    Dabbagh, Nada; Dass, Susan

    2013-01-01

    A comparative analysis of 51 case problems used in five problem-based pedagogical models was conducted to examine whether there are differences in their characteristics and the implications of such differences on the selection and generation of ill-structured case problems. The five pedagogical models were: situated learning, goal-based scenario,…

  2. Multilevel Structural Equation Models for the Analysis of Comparative Data on Educational Performance

    ERIC Educational Resources Information Center

    Goldstein, Harvey; Bonnet, Gerard; Rocher, Thierry

    2007-01-01

    The Programme for International Student Assessment comparative study of reading performance among 15-year-olds is reanalyzed using statistical procedures that allow the full complexity of the data structures to be explored. The article extends existing multilevel factor analysis and structural equation models and shows how this can extract richer…

  3. A kinematic model to assess spinal motion during walking.

    PubMed

    Konz, Regina J; Fatone, Stefania; Stine, Rebecca L; Ganju, Aruna; Gard, Steven A; Ondra, Stephen L

    2006-11-15

    A 3-dimensional multi-segment kinematic spine model was developed for noninvasive analysis of spinal motion during walking. Preliminary data from able-bodied ambulators were collected and analyzed using the model. Neither the spine's role during walking nor the effect of surgical spinal stabilization on gait is fully understood. Typically, gait analysis models disregard the spine entirely or regard it as a single rigid structure. Data on regional spinal movements associated with walking, in conjunction with lower limb data, are scarce. KinTrak software (Motion Analysis Corp., Santa Rosa, CA) was used to create a biomechanical model for analysis of 3-dimensional regional spinal movements. The kinematic model was validated by measuring known angles on a mechanical model and comparing them to the calculated angles. Spine motion data were collected from 10 able-bodied adults walking at 5 self-selected speeds. These results were compared to data reported in the literature. The uniaxial angles measured on the mechanical model were within 5 degrees of the calculated kinematic model angles, and the coupled angles were within 2 degrees. Regional spine kinematics from able-bodied subjects calculated with this model compared well to data reported by other authors. A multi-segment kinematic spine model has been developed and validated for analysis of spinal motion during walking. An understanding of the spine's role during ambulation and of the cause-and-effect relationship between spine motion and lower limb motion may help augment preoperative planning to restore normal alignment and balance with minimal negative effects on walking.

  4. An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol

    2016-01-01

    The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…

  5. Analysis of model output and science data in the Virtual Model Repository (VMR).

    NASA Astrophysics Data System (ADS)

    De Zeeuw, D.; Ridley, A. J.

    2014-12-01

    Big scientific data include not only large repositories of data from scientific platforms such as satellites and ground observation, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through their metadata, but larger collections of runs can also now be studied and statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to look at overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. The methodology for this analysis as well as case studies will be presented.

  6. FINDING A METHOD FOR THE MADNESS: A COMPARATIVE ANALYSIS OF STRATEGIC DESIGN METHODOLOGIES

    DTIC Science & Technology

    2017-06-01

    This work develops a comparative model for strategic design methodologies, focusing on the primary elements of vision, time, process, communication and collaboration, and risk assessment. My analysis dissects and compares three potential design methodologies, including net assessment, scenarios and…

  7. Comparison of composite rotor blade models: A coupled-beam analysis and an MSC/NASTRAN finite-element model

    NASA Technical Reports Server (NTRS)

    Hodges, Robert V.; Nixon, Mark W.; Rehfield, Lawrence W.

    1987-01-01

    A methodology was developed for the structural analysis of composite rotor blades. This coupled-beam analysis is relatively simple to use compared with alternative analysis techniques. The beam analysis was developed for thin-wall single-cell rotor structures and includes the effects of elastic coupling. This paper demonstrates the effectiveness of the new composite-beam analysis method through comparison of its results with those of an established baseline analysis technique. The baseline analysis is an MSC/NASTRAN finite-element model built up from anisotropic shell elements. Deformations are compared for three linear static load cases of centrifugal force at design rotor speed, applied torque, and lift for an ideal rotor in hover. A D-spar designed to twist under axial loading is the subject of the analysis. Results indicate the coupled-beam analysis is well within engineering accuracy.

  8. Comparative Analysis of Models of the Earth's Gravity: 3. Accuracy of Predicting EAS Motion

    NASA Astrophysics Data System (ADS)

    Kuznetsov, E. D.; Berland, V. E.; Wiebe, Yu. S.; Glamazda, D. V.; Kajzer, G. T.; Kolesnikov, V. I.; Khremli, G. P.

    2002-05-01

    This paper continues a comparative analysis of modern satellite models of the Earth's gravity which we started in [6, 7]. In the cited works, the uniform norms of spherical functions were compared with their gradients for individual harmonics of the geopotential expansion [6] and the potential differences were compared with the gravitational accelerations obtained in various models of the Earth's gravity [7]. In practice, it is important to know how consistently the EAS motion is represented by various geopotential models. Unless otherwise stated, a model version in which the equations of motion are written using the classical Encke scheme and integrated together with the variation equations by the implicit one-step Everhart's algorithm [1] was used. When calculating coordinates and velocities on the integration step (at given instants of time), the approximate Everhart formula was employed.

  9. Comparative analysis of stress in a new proposal of dental implants.

    PubMed

    Valente, Mariana Lima da Costa; de Castro, Denise Tornavoi; Macedo, Ana Paula; Shimano, Antonio Carlos; Dos Reis, Andréa Cândido

    2017-08-01

    The purpose of this study was to compare, through photoelastic analysis, the stress distribution around conventional and modified external hexagon (EH) and morse taper (MT) dental implant connections. Four photoelastic models were prepared (n=1): Model 1 - conventional EH cylindrical implant (Ø 4.0mm×11mm - Neodent®), Model 2 - modified EH cylindrical implant, Model 3 - conventional MT conical implant (Ø 4.3mm×10mm - Neodent®) and Model 4 - modified MT conical implant. Axial and oblique (30° tilt) loads of 100 and 150N were applied to the devices coupled to the implants. A plane transmission polariscope was used in the analysis of fringes, and each position of interest was recorded by a digital camera. The Tardy method was used to quantify the fringe order (n), from which the maximum shear stress (τ) value at each selected point is calculated. The results showed a lower stress concentration in the modified cylindrical implant (EH) compared to the conventional model, with application of 150N axial and 100N oblique loads. Lower stress was observed for the modified conical (MT) implant with the application of 100 and 150N oblique loads, which was not observed for the conventional implant model. The comparative analysis of the models showed that the new design proposal generates good stress distribution, especially in the cervical third, suggesting the preservation of bone tissue in the bone crest region. Copyright © 2017 Elsevier B.V. All rights reserved.
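
    The Tardy compensation step yields a (generally fractional) fringe order N at each point of interest, and the maximum shear stress then follows from the stress-optic law. A sketch of that final computation (the material fringe value and thickness below are illustrative, not the study's values):

    ```python
    def max_shear_stress(fringe_order, f_sigma, thickness):
        """Stress-optic law: sigma1 - sigma2 = N * f_sigma / t, so
        tau_max = (sigma1 - sigma2) / 2 = N * f_sigma / (2 * t)."""
        return fringe_order * f_sigma / (2.0 * thickness)

    # N from Tardy compensation, f_sigma in N/mm per fringe, thickness in mm
    tau = max_shear_stress(fringe_order=1.6, f_sigma=10.5, thickness=10.0)
    print(round(tau, 3), "MPa")
    ```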

  10. GVIPS Models and Software

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Gendy, Atef; Saleeb, Atef F.; Mark, John; Wilt, Thomas E.

    2007-01-01

    Two reports discuss, respectively, (1) the generalized viscoplasticity with potential structure (GVIPS) class of mathematical models and (2) the Constitutive Material Parameter Estimator (COMPARE) computer program. GVIPS models are constructed within a thermodynamics- and potential-based theoretical framework, wherein one uses internal state variables and derives constitutive equations for both the reversible (elastic) and the irreversible (viscoplastic) behaviors of materials. Because of the underlying potential structure, GVIPS models not only capture a variety of material behaviors but also are very computationally efficient. COMPARE comprises (1) an analysis core and (2) a C++-language subprogram that implements a Windows-based graphical user interface (GUI) for controlling the core. The GUI relieves the user of the sometimes tedious task of preparing data for the analysis core, freeing the user to concentrate on the task of fitting experimental data and ultimately obtaining a set of material parameters. The analysis core consists of three modules: one for GVIPS material models, an analysis module containing a specialized finite-element solution algorithm, and an optimization module. COMPARE solves the problem of finding GVIPS material parameters in the manner of a design-optimization problem in which the parameters are the design variables.

  11. Analysis of random structure-acoustic interaction problems using coupled boundary element and finite element methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Pates, Carl S., III

    1994-01-01

    A coupled boundary element (BEM)-finite element (FEM) approach is presented to accurately model structure-acoustic interaction systems. The boundary element method is first applied to interior, two- and three-dimensional acoustic domains with complex geometry configurations. Boundary element results are very accurate when compared with the limited exact solutions available. Structure-acoustic interaction problems are then analyzed with the coupled FEM-BEM method, where the finite element method models the structure and the boundary element method models the interior acoustic domain. The coupled analysis is compared with exact and experimental results for a simple model. Composite panels are analyzed and compared with isotropic results. The coupled method is then extended for random excitation. Random excitation results are compared with uncoupled results for isotropic and composite panels.

  12. Statistical analysis of life history calendar data.

    PubMed

    Eerola, Mervi; Helske, Satu

    2016-04-01

    The life history calendar is a data-collection tool for obtaining reliable retrospective data about life events. To illustrate the analysis of such data, we compare model-based probabilistic event history analysis and the model-free data mining method, sequence analysis. In event history analysis, we estimate the cumulative prediction probabilities of life events in the entire trajectory rather than transition hazards. In sequence analysis, we compare several dissimilarity metrics and contrast data-driven and user-defined substitution costs. As an example, we study young adults' transition to adulthood as a sequence of events in three life domains. The events define the multistate event history model and the parallel life domains in multidimensional sequence analysis. The relationship between life trajectories and excess depressive symptoms in middle age is further studied by their joint prediction in the multistate model and by regressing the symptom scores on individual-specific cluster indices. The two approaches complement each other in life course analysis; sequence analysis can effectively find typical and atypical life patterns, while event history analysis is needed for causal inquiries. © The Author(s) 2012.

  13. Comparing Internet Probing Methodologies Through an Analysis of Large Dynamic Graphs

    DTIC Science & Technology

    2014-06-01

    comparable Internet topologies in less time. We compare these by modeling the union of traceroute outputs as graphs, and study the graphs using standard graph-theoretical measurements such as vertex and edge count and average vertex degree…

  14. Using the weighted area under the net benefit curve for decision curve analysis.

    PubMed

    Talluri, Rajesh; Shete, Sanjay

    2016-07-18

    Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and the net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given threshold probability or range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model, as there is no readily available summary measure for evaluating the predictive performance. The key deterrent to using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need for additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches: the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.
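
    The quantities in this abstract have simple closed forms: the net benefit at threshold pt is NB(pt) = TP/n − FP/n · pt/(1 − pt), and the proposed summary is a weighted integral of NB over thresholds. A sketch in which the threshold-probability weights are an assumed shape rather than the paper's estimated distribution:

    ```python
    import numpy as np

    def net_benefit(p_hat, y, pt):
        """Net benefit of 'treat if predicted risk >= pt' at threshold pt."""
        n = len(y)
        treat = p_hat >= pt
        tp = np.sum(treat & (y == 1))
        fp = np.sum(treat & (y == 0))
        return tp / n - fp / n * pt / (1.0 - pt)

    def weighted_area(p_hat, y, thresholds, weights):
        """Weighted area under the net-benefit curve; the weights stand in
        for the estimated distribution of threshold probabilities."""
        nb = np.array([net_benefit(p_hat, y, t) for t in thresholds])
        w = np.asarray(weights, float)
        return float(np.sum(w / w.sum() * nb))

    rng = np.random.default_rng(5)
    y = rng.binomial(1, 0.3, 500)
    p_hat = np.clip(0.3 + 0.25 * (y - 0.3) + 0.15 * rng.standard_normal(500),
                    0.01, 0.99)
    ts = np.linspace(0.05, 0.5, 10)
    print(round(weighted_area(p_hat, y, ts, np.exp(-5 * ts)), 4))
    ```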

  15. New segregation analysis of panic disorder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vieland, V.J.; Fyer, A.J.; Chapman, T.

    1996-04-09

    We performed simple segregation analyses of panic disorder using 126 families of probands with DSM-III-R panic disorder who were ascertained for a family study of anxiety disorders at an anxiety disorders research clinic. We present parameter estimates for dominant, recessive, and arbitrary single major locus models without sex effects, as well as for a nongenetic transmission model, and compare these models to each other and to models obtained by other investigators. We rejected the nongenetic transmission model when comparing it to the recessive model. Consistent with some previous reports, we find comparable support for dominant and recessive models, and in both cases estimate nonzero phenocopy rates. The effect of restricting the analysis to families of probands without any lifetime history of comorbid major depression (MDD) was also examined. No notable differences in parameter estimates were found in that subsample, although the power of that analysis was low. Consistency between the findings in our sample and in another independently collected sample suggests the possibility of pooling such samples in the future in order to achieve the necessary power for more complex analyses. 32 refs., 4 tabs.

  16. Analysis of a Chevron Beam Thermal Actuator

    NASA Astrophysics Data System (ADS)

    Joshi, Amey Sanjay; Mohammed, Hussain; Kulkarni, S. M., Dr.

    2018-02-01

    Thermal MEMS (Micro-Electro-Mechanical Systems) actuators and sensors have a wide range of applications. Chevron-type thermal actuators show comparatively superior performance over other existing electrostatic and thermal actuators. This paper describes the design and analysis of a chevron-type thermal actuator. A standard chevron-type thermal actuator design is considered, comprising a proof mass at the center and an array of six beams with a uniform cross section of 3 × 3 microns and an initial angle of 5°. The thermal actuator was designed and analyzed using analytical and finite element methods, and the results were compared. The model was also analyzed for initial angles of 2.5° and 7.5°, and the results were compared with the FEA model. The cross section of the beams was varied, and the finite element analyses of all three models were compared to suggest the most suitable thermal actuator structure.
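
    The kinematics behind a chevron (V-beam) actuator can be sketched from geometry alone: each beam elongates thermally by dL = alpha·dT·L, and with the anchors fixed the apex displacement is approximately dL/sin(theta), which is why the initial angle matters so much. A sketch under the ideal pin-jointed assumption (illustrative values, except for the 2.5°/5°/7.5° angles taken from the study):

    ```python
    import numpy as np

    def chevron_tip_displacement(L, theta_deg, alpha, dT):
        """Apex displacement of an ideal pin-jointed chevron beam pair.

        Anchors fix the half-span x = L*cos(theta); thermal expansion
        lengthens the beam to L + dL, so the apex height changes to
        sqrt((L + dL)^2 - x^2). Small-angle limit: u ~ dL / sin(theta)."""
        th = np.radians(theta_deg)
        dL = alpha * dT * L
        x = L * np.cos(th)
        return np.sqrt((L + dL) ** 2 - x ** 2) - L * np.sin(th)

    L, alpha, dT = 200e-6, 2.6e-6, 100.0   # 200 um beam, silicon, 100 K rise
    for ang in (2.5, 5.0, 7.5):
        u = chevron_tip_displacement(L, ang, alpha, dT)
        print(f"{ang:>4} deg -> {u * 1e6:.2f} um")
    ```

    The larger displacement at shallower angles typically comes at the cost of lower output force and stiffness, one of the trade-offs a finite element comparison can explore.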

  17. Marker-based or model-based RSA for evaluation of hip resurfacing arthroplasty? A clinical validation and 5-year follow-up.

    PubMed

    Lorenzen, Nina Dyrberg; Stilling, Maiken; Jakobsen, Stig Storgaard; Gustafson, Klas; Søballe, Kjeld; Baad-Hansen, Thomas

    2013-11-01

    The stability of implants is vital to ensure long-term survival. RSA determines micro-motions of implants as a predictor of early implant failure. RSA can be performed as a marker-based or model-based analysis. So far, CAD and RE model-based RSA have not been validated for use in hip resurfacing arthroplasty (HRA). A phantom study determined the precision of marker-based and CAD and RE model-based RSA on an HRA implant. In a clinical study, 19 patients were followed with stereoradiographs until 5 years after surgery. Analysis of double-examination migration results determined the clinical precision of marker-based and CAD model-based RSA, and at the 5-year follow-up, results for the total translation (TT) and the total rotation (TR) from marker- and CAD model-based RSA were compared. The phantom study showed that, comparing the precision (SDdiff), marker-based RSA analysis was more precise than model-based RSA analysis in TT (p CAD < 0.001; p RE = 0.04) and TR (p CAD = 0.01; p RE < 0.001). The clinical precision (double examination in 8 patients), comparing the precision SDdiff, was better for TT using the marker-based RSA analysis (p = 0.002), but showed no difference between the marker- and CAD model-based RSA analyses regarding TR (p = 0.91). Comparing the mean signed values of TT and TR at the 5-year follow-up in 13 patients, TT was lower (p = 0.03) and TR higher (p = 0.04) in marker-based RSA compared to CAD model-based RSA. The precision of marker-based RSA was significantly better than that of model-based RSA. However, problems with occluded markers led to the exclusion of many patients, which was not a problem with model-based RSA. HRA implants were stable at the 5-year follow-up. The detection limit was 0.2 mm TT and 1° TR for marker-based RSA and 0.5 mm TT and 1° TR for CAD model-based RSA for HRA.
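
    Marker-based RSA reduces, at its core, to estimating the rigid-body motion of a marker cluster between stereoradiographic examinations; a minimal sketch of that step with the Kabsch algorithm (synthetic marker coordinates; real RSA adds calibration cages, ray reconstruction, and condition-number checks):

    ```python
    import numpy as np

    def rigid_motion(P, Q):
        """Best-fit rotation R and translation t with Q ~ (R @ P.T).T + t."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation, det = +1
        t = cq - R @ cp
        return R, t

    rng = np.random.default_rng(8)
    P = rng.uniform(-10, 10, (6, 3))               # marker cloud at exam 1 (mm)
    a = np.radians(0.8)                            # small rotation about z
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
    Q = P @ Rz.T + np.array([0.15, -0.05, 0.10])   # migrated cloud at exam 2

    R, t = rigid_motion(P, Q)
    tt = np.linalg.norm(t)                              # total translation, mm
    tr = np.degrees(np.arccos((np.trace(R) - 1) / 2))   # total rotation, deg
    print(round(tt, 3), "mm,", round(tr, 2), "deg")
    ```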

  18. Comparative study of two approaches to model the offshore fish cages

    NASA Astrophysics Data System (ADS)

    Zhao, Yun-peng; Wang, Xin-xin; Decew, Jud; Tsukrov, Igor; Bai, Xiao-dong; Bi, Chun-wei

    2015-06-01

    The goal of this paper is to provide a comparative analysis of two commonly used approaches to discretize offshore fish cages: the lumped-mass approach and the finite element technique. Two case studies are chosen to compare the predictions of the LMA (lumped-mass approach) and FEA (finite element analysis) based numerical modeling techniques. In both case studies, we consider several loading conditions consisting of different uniform currents and monochromatic waves. We investigate the motion of the cage, its deformation, and the resultant tension in the mooring lines. Both models' predictions are sufficiently close to the experimental data, but for the first experiment, the DUT-FlexSim predictions are slightly more accurate than the ones provided by Aqua-FE™. According to the comparisons, both models can be successfully applied to the design and analysis of offshore fish cages, provided that an appropriate safety factor is chosen.

  19. Two-Year versus One-Year Head Start Program Impact: Addressing Selection Bias by Comparing Regression Modeling with Propensity Score Analysis

    ERIC Educational Resources Information Center

    Leow, Christine; Wen, Xiaoli; Korfmacher, Jon

    2015-01-01

    This article compares regression modeling and propensity score analysis as different types of statistical techniques used in addressing selection bias when estimating the impact of two-year versus one-year Head Start on children's school readiness. The analyses were based on the national Head Start secondary dataset. After controlling for…

  20. Wellness Model of Supervision: A Comparative Analysis

    ERIC Educational Resources Information Center

    Lenz, A. Stephen; Sangganjanavanich, Varunee Faii; Balkin, Richard S.; Oliver, Marvarene; Smith, Robert L.

    2012-01-01

    This quasi-experimental study compared the effectiveness of the Wellness Model of Supervision (WELMS; Lenz & Smith, 2010) with alternative supervision models for developing wellness constructs, total personal wellness, and helping skills among counselors-in-training. Participants were 32 master's-level counseling students completing their…

  1. Test Cases for Modeling and Validation of Structures with Piezoelectric Actuators

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.

    2001-01-01

    A set of benchmark test articles was developed to validate techniques for modeling structures containing piezoelectric actuators using commercially available finite element analysis packages. The paper presents the development, modeling, and testing of two structures: an aluminum plate with surface-mounted patch actuators and a composite box beam with surface-mounted actuators. Three approaches for modeling structures containing piezoelectric actuators using the commercially available packages MSC/NASTRAN and ANSYS are presented. The approaches, applications, and limitations are discussed. Data for both test articles are compared in terms of frequency response functions from deflection and strain data to input voltage to the actuator. Frequency response function results using the three different analysis approaches provided comparable test/analysis results. It is shown that global versus local behavior of the analytical model and test article must be considered when comparing different approaches. Also, improper bonding of actuators greatly reduces the electrical-to-mechanical effectiveness of the actuators, producing anti-resonance errors.

  2. Comparative Analysis of VaR Estimation of Double Long-Memory GARCH Models: Empirical Analysis of China's Stock Market

    NASA Astrophysics Data System (ADS)

    Cao, Guangxi; Guo, Jianping; Xu, Lin

    GARCH models are widely used to model the volatility of financial assets and to measure VaR. Based on the long memory, leptokurtosis, and fat tails characteristic of stock market return series, we compared the ability of double long-memory GARCH models with a skewed Student-t distribution to compute VaR, through an empirical analysis of the Shanghai Composite Index (SHCI) and the Shenzhen Component Index (SZCI). The results show that the ARFIMA-HYGARCH model performs better than the others and that, at VaR levels of 2.5 percent or below, double long-memory GARCH models evaluate in-sample VaR better for long positions than for short positions, while the opposite holds for out-of-sample VaR forecasts.
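
    To make the VaR mechanics concrete, here is a self-contained sketch using a plain Gaussian GARCH(1,1) fitted by maximum likelihood; the paper's ARFIMA-HYGARCH with a skewed Student-t is considerably more elaborate. Returns are simulated stand-ins, not index data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
r = rng.standard_t(df=5, size=2000) * 0.01       # stand-in for index returns

def neg_loglik(theta, r):
    """Gaussian GARCH(1,1) negative log-likelihood via the variance recursion."""
    omega, alpha, beta = theta
    s2 = np.empty_like(r)
    s2[0] = r.var()
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * s2) + r ** 2 / s2)

res = minimize(neg_loglik, x0=[1e-6, 0.05, 0.90], args=(r,),
               bounds=[(1e-12, None), (0, 1), (0, 1)])
omega, alpha, beta = res.x

# One-step-ahead variance forecast and the 2.5% VaR for a long position.
s2 = r.var()
for t in range(1, len(r)):
    s2 = omega + alpha * r[t - 1] ** 2 + beta * s2
s2_next = omega + alpha * r[-1] ** 2 + beta * s2
print(f"one-day 2.5% VaR (long): {norm.ppf(0.025) * np.sqrt(s2_next):.4f}")
```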

  3. Multi-objective calibration and uncertainty analysis of hydrologic models; A comparative study between formal and informal methods

    NASA Astrophysics Data System (ADS)

    Shafii, M.; Tolson, B.; Matott, L. S.

    2012-04-01

    Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in higher levels of complexity being built into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementing multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on formal likelihood functions based on statistical assumptions; moreover, Bayesian inference built on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study explores a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality, which quantifies parameter uncertainty using the Pareto solutions; (ii) DDS-AU, which uses the weighted sum of objective functions to derive the prediction limits; and (iii) GLUE, which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden and predictive capacity, evaluated through multiple comparative measures. The measures for comparison are calculated for both calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model called HYMOD.
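
    The GLUE procedure mentioned above is easy to sketch: sample parameters, keep the "behavioral" runs whose efficiency exceeds a threshold, and derive prediction bounds from the retained simulations. The toy one-parameter linear-reservoir model below stands in for HYMOD, and the threshold is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
rain = rng.exponential(2.0, size=100)

def reservoir(k, rain):
    """Toy linear reservoir: storage drains at rate k per time step."""
    s, q = 0.0, np.empty_like(rain)
    for t, p in enumerate(rain):
        s += p
        q[t] = k * s
        s -= q[t]
    return q

q_obs = reservoir(0.3, rain) + rng.normal(0, 0.1, size=rain.size)

# GLUE: sample the parameter, keep behavioral runs (NSE above a threshold),
# and form prediction bounds from the retained runs.
ks = rng.uniform(0.01, 0.99, size=5000)
sims = np.array([reservoir(k, rain) for k in ks])
nse = 1 - ((sims - q_obs) ** 2).sum(axis=1) / ((q_obs - q_obs.mean()) ** 2).sum()
behavioral = nse > 0.7                          # behavioral threshold (assumed)

lower = np.percentile(sims[behavioral], 2.5, axis=0)
upper = np.percentile(sims[behavioral], 97.5, axis=0)
print(f"{behavioral.sum()} behavioral runs; mean bound width "
      f"{np.mean(upper - lower):.3f}")
```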

  4. QSAR and 3D QSAR of inhibitors of the epidermal growth factor receptor

    NASA Astrophysics Data System (ADS)

    Pinto-Bazurco, Mariano; Tsakovska, Ivanka; Pajeva, Ilza

    This article reports quantitative structure-activity relationships (QSAR) and 3D QSAR models of 134 structurally diverse inhibitors of the epidermal growth factor receptor (EGFR) tyrosine kinase. Free-Wilson analysis was used to derive the QSAR model. It identified the substituents in aniline, the polycyclic system, and the substituents at the 6- and 7-positions of the polycyclic system as the most important structural features. Comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were used in the 3D QSAR modeling. The steric and electrostatic interactions proved the most important for the inhibitory effect. Both QSAR and 3D QSAR models led to consistent results. On the basis of the statistically significant models, new structures were proposed and their inhibitory activities were predicted.
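
    The Free-Wilson step lends itself to a brief sketch: activity is regressed on 0/1 indicator variables, one per (position, substituent) pair, so each fitted coefficient reads as an additive substituent contribution. The substituent table and activities below are invented, not the paper's 134-compound set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented compounds: substituents at two positions, with invented pIC50s.
compounds = [
    {"R1": "Cl", "R2": "OMe"}, {"R1": "Cl", "R2": "H"},
    {"R1": "H",  "R2": "OMe"}, {"R1": "Me", "R2": "H"},
    {"R1": "Me", "R2": "OMe"}, {"R1": "H",  "R2": "H"},
]
pIC50 = np.array([7.2, 6.8, 6.9, 6.5, 7.0, 6.3])

# Indicator matrix: one column per (position, substituent) pair.
keys = sorted({(pos, sub) for c in compounds for pos, sub in c.items()})
X = np.array([[1.0 if c.get(pos) == sub else 0.0 for pos, sub in keys]
              for c in compounds])

fit = LinearRegression().fit(X, pIC50)
for (pos, sub), coef in zip(keys, fit.coef_):
    print(f"contribution of {sub:>3} at {pos}: {coef:+.2f}")
```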

  5. Using multi-criteria analysis of simulation models to understand complex biological systems

    Treesearch

    Maureen C. Kennedy; E. David Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
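
    The Pareto-optimality filter at the heart of such multi-criteria comparison is compact enough to sketch: a parameter set is retained only if no other set is at least as good on every criterion and strictly better on one. Criteria here are errors to be minimized, with invented values.

```python
import numpy as np

def pareto_front(errors):
    """Return a boolean mask of non-dominated rows of an (n, k) error array."""
    n = errors.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Row i is dominated if some row is <= everywhere and < somewhere.
        dominated = np.all(errors <= errors[i], axis=1) & \
                    np.any(errors < errors[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

errs = np.array([[0.2, 0.9], [0.4, 0.4], [0.9, 0.1], [0.5, 0.5], [0.3, 0.8]])
print("non-dominated parameter sets:", np.where(pareto_front(errs))[0])
```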

  6. An efficient current-based logic cell model for crosstalk delay analysis

    NASA Astrophysics Data System (ADS)

    Nazarian, Shahin; Das, Debasish

    2013-04-01

    Logic cell modelling is an important component in the analysis and design of CMOS integrated circuits, mostly due to the nonlinear behaviour of CMOS cells with respect to the voltage signals at their input and output pins. A current-based model for CMOS logic cells is presented, which can be used for effective crosstalk noise and delta delay analysis in CMOS VLSI circuits. Existing current source models are expensive and need a new set of Spice-based characterisations, which is not compatible with typical EDA tools. In this article we present Imodel, a simple nonlinear logic cell model that can be derived from typical cell libraries such as NLDM, with accuracy much higher than that of NLDM-based cell delay models. In fact, our experiments show an average error of 3% compared to Spice. This level of accuracy comes with a maximum runtime penalty of 19% compared to NLDM-based cell delay models on medium-sized industrial designs.

  7. Direct power comparisons between simple LOD scores and NPL scores for linkage analysis in complex diseases.

    PubMed

    Abreu, P C; Greenberg, D A; Hodge, S E

    1999-09-01

    Several methods have been proposed for linkage analysis of complex traits with unknown mode of inheritance. These methods include the LOD score maximized over disease models (MMLS) and the "nonparametric" linkage (NPL) statistic. In previous work, we evaluated the increase of type I error when maximizing over two or more genetic models, and we compared the power of MMLS to detect linkage, under a number of complex modes of inheritance, with analysis assuming the true model. In the present study, we compare MMLS and NPL directly. We simulated 100 data sets with 20 families each, using 26 generating models: (1) 4 intermediate models (penetrance of the heterozygote between those of the two homozygotes); (2) 6 two-locus additive models; and (3) 16 two-locus heterogeneity models (admixture alpha = 1.0, 0.7, 0.5, and 0.3; alpha = 1.0 replicates simple Mendelian models). For LOD scores, we assumed dominant and recessive inheritance with 50% penetrance. We took the higher of the two maximum LOD scores and subtracted 0.3 to correct for multiple tests (MMLS-C). We compared expected maximum LOD scores and power using MMLS-C and NPL as well as the true model. Since NPL uses only the affected family members, we also performed an affecteds-only analysis using MMLS-C. MMLS-C was uniformly more powerful than NPL in most cases we examined, except when linkage information was low, and was close to the results for the true model under locus heterogeneity. We still found better power for MMLS-C compared with NPL in the affecteds-only analysis. The results show that use of two simple modes of inheritance at a fixed penetrance can have more power than NPL when the trait mode of inheritance is complex and when there is heterogeneity in the data set.

  8. Comparative analysis of bleeding risk by the location and shape of arachnoid cysts: a finite element model analysis.

    PubMed

    Lee, Chang-Hyun; Han, In Seok; Lee, Ji Yeoun; Phi, Ji Hoon; Kim, Seung-Ki; Kim, Young-Eun; Wang, Kyu-Chang

    2017-01-01

    Although arachnoid cysts (ACs) are observed in various locations, mainly sylvian ACs are regarded as being associated with bleeding. The reason for this selective association of sylvian ACs with bleeding is not well understood. This study investigates the effect of the location and shape of ACs on the risk of bleeding. A previously developed finite element model of the head/brain was modified to create models of sylvian, suprasellar, and posterior fossa ACs. A spherical AC was placed at each location to compare the effect of AC location. Bowl-shaped and oval-shaped AC models were developed to compare the effect of shape. The shear force on the spot-weld elements (SFSW) was measured between the dura and the outer wall of the ACs, or the comparable arachnoid membrane in the normal model. All AC models revealed higher SFSW than the comparable normal models. By location, the sylvian AC displayed the highest SFSW for frontal and lateral impacts. By shape, small-outer-wall AC models showed higher SFSW than large-wall models in the sylvian area, and lower SFSW than large ones in the posterior fossa. In regression analysis, the presence of an AC was the only independent risk factor for bleeding. The bleeding mechanism of ACs is very complex, and the risk quantification failed to show a significant role of AC location and shape. The presence of an AC increases shear force under impact conditions and may be a risk factor for bleeding, while the sylvian location of an AC may not confer additional bleeding risk.

  9. Finding Groups Using Model-Based Cluster Analysis: Heterogeneous Emotional Self-Regulatory Processes and Heavy Alcohol Use Risk

    ERIC Educational Resources Information Center

    Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.

    2008-01-01

    Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixture multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models using the Bayesian information criterion to compare multiple models and identify the…
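
    A brief sketch of the procedure described here: fit finite mixtures of multivariate normals with varying numbers of components and select among them by BIC. The data are simulated two-cluster points, standing in for the study's self-regulation measures.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 0.5, (60, 2))])

# Fit mixtures with 1-5 components and compare non-nested models via BIC.
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
        for k in range(1, 6)}
bics = {k: m.bic(X) for k, m in fits.items()}
best = min(bics, key=bics.get)
print("BIC by number of clusters:", {k: round(v, 1) for k, v in bics.items()})
print("selected model:", best, "clusters")
```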

  10. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems.

    ERIC Educational Resources Information Center

    Nee, John G.; Kare, Audhut P.

    1987-01-01

    Explores several concepts in computer assisted design/computer assisted manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer based-systems with the CAD/CAM packages evaluated. (CW)

  11. Macro-level pedestrian and bicycle crash analysis: Incorporating spatial spillover effects in dual state count models.

    PubMed

    Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed

    2016-08-01

    This study explores the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zone (TAZ)-based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models, such as the zero-inflated negative binomial and hurdle negative binomial models (with and without spatial effects), are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than the models that did not. Across the models with spatial spillover effects, the dual-state models, especially the zero-inflated negative binomial model, offered better performance compared to single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ as well as neighboring TAZs on pedestrian and bicycle crash frequency. Copyright © 2016 Elsevier Ltd. All rights reserved.
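
    The single-state versus dual-state comparison can be sketched with statsmodels, assuming its count-model API: fit a negative binomial and a zero-inflated negative binomial to the same simulated zone-level counts and compare AIC. Covariates and data are invented, and the spatial spillover terms of the study are omitted.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(4)
n = 400
X = sm.add_constant(rng.normal(size=(n, 2)))
lam = np.exp(0.3 + 0.5 * X[:, 1])
y = rng.poisson(lam) * rng.binomial(1, 0.7, size=n)   # counts with excess zeros

nb = sm.NegativeBinomial(y, X).fit(disp=0)            # single-state model
zinb = ZeroInflatedNegativeBinomialP(                 # dual-state model
    y, X, exog_infl=np.ones((n, 1))).fit(disp=0)
print(f"NB AIC:   {nb.aic:.1f}")
print(f"ZINB AIC: {zinb.aic:.1f}   (lower is preferred)")
```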

  12. Factor Analysis of Drawings: Application to college student models of the greenhouse effect

    NASA Astrophysics Data System (ADS)

    Libarkin, Julie C.; Thomas, Stephen R.; Ording, Gabriel

    2015-09-01

    Exploratory factor analysis was used to identify models underlying drawings of the greenhouse effect made by over 200 entering university freshmen. Initial content analysis allowed deconstruction of drawings into salient features, with grouping of these features via factor analysis. A resulting 4-factor solution explains 62% of the data variance, suggesting that 4 archetype models of the greenhouse effect dominate thinking within this population. Factor scores, indicating the extent to which each student's drawing aligned with representative models, were compared to performance on conceptual understanding and attitudes measures, demographics, and non-cognitive features of drawings. Student drawings were also compared to drawings made by scientists to ascertain the extent to which models reflect more sophisticated and accurate models. Results indicate that student and scientist drawings share some similarities, most notably the presence of some features of the most sophisticated non-scientific model held among the study population. Prior knowledge, prior attitudes, gender, and non-cognitive components are also predictive of an individual student's model. This work presents a new technique for analyzing drawings, with general implications for the use of drawings in investigating student conceptions.

  13. Comparing GWAS Results of Complex Traits Using Full Genetic Model and Additive Models for Revealing Genetic Architecture

    PubMed Central

    Monir, Md. Mamun; Zhu, Jun

    2017-01-01

    Most genome-wide association studies (GWASs) of human complex diseases have ignored dominance, epistasis, and ethnic interactions. We conducted comparative GWASs for total cholesterol using a full model and additive models, which illustrate the impact of ignoring these genetic variants on analysis results and demonstrate how the genetic effects of multiple loci can differ across ethnic groups. There were 15 quantitative trait loci, comprising 13 individual loci and 3 pairs of epistatic loci, identified by the full model, whereas only 14 loci (9 common loci and 5 different loci) were identified by the multi-locus additive model. Moreover, 4 loci detected by the full model were not detected using the multi-locus additive model. PLINK analysis identified two loci, and GCTA analysis detected only one locus, with genome-wide significance. The full model identified three previously reported genes as well as several new genes. Bioinformatics analysis showed that some of the new genes are related to cholesterol-related chemicals and/or diseases. Analyses of the cholesterol data and simulation studies revealed that the full model performed better than the additive model in terms of detection power and unbiased estimation of the genetic variants of complex traits. PMID:28079101

  14. Model Ambiguities in Configurational Comparative Research

    ERIC Educational Resources Information Center

    Baumgartner, Michael; Thiem, Alrik

    2017-01-01

    For many years, sociologists, political scientists, and management scholars have readily relied on Qualitative Comparative Analysis (QCA) for the purpose of configurational causal modeling. However, this article reveals that a severe problem in the application of QCA has gone unnoticed so far: model ambiguities. These arise when multiple causal…

  15. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

    In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and kriging models (using a constant underlying global model and a Gaussian correlation function) yield comparable results.
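
    The two approximation strategies can be sketched side by side on a 1-D deterministic "analysis code" (a stand-in function; the paper used CFD and FEA models): a second-order polynomial response surface versus kriging with a constant trend and Gaussian correlation, here via scikit-learn's Gaussian process with an RBF kernel.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

f = lambda x: np.sin(3 * x) + 0.5 * x           # deterministic analysis (assumed)
x_train = np.linspace(0, 2, 8).reshape(-1, 1)   # sampled design points
y_train = f(x_train).ravel()

rsm = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(x_train, y_train)
# Tiny alpha: kriging interpolates the deterministic training data.
krig = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-10,
                                normalize_y=True).fit(x_train, y_train)

x_test = np.linspace(0, 2, 50).reshape(-1, 1)
err_rsm = np.abs(rsm.predict(x_test) - f(x_test).ravel()).max()
err_krig = np.abs(krig.predict(x_test) - f(x_test).ravel()).max()
print(f"max |error|  response surface: {err_rsm:.3f}   kriging: {err_krig:.3f}")
```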

  16. Modeling and Analysis of Space Based Transceivers

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Liebetreu, John; Moore, Michael S.; Price, Jeremy C.; Abbott, Ben

    2005-01-01

    This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.

  17. Modeling and Analysis of Space Based Transceivers

    NASA Technical Reports Server (NTRS)

    Moore, Michael S.; Price, Jeremy C.; Abbott, Ben; Liebetreu, John; Reinhart, Richard C.; Kacpura, Thomas J.

    2007-01-01

    This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.

  18. Least-Squares Regression and Spectral Residual Augmented Classical Least-Squares Chemometric Models for Stability-Indicating Analysis of Agomelatine and Its Degradation Products: A Comparative Study.

    PubMed

    Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A

    2016-01-01

    Two accurate, sensitive, and selective stability-indicating methods were developed and validated for simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure form or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are the two chemometric models subjected to a comparative study through handling of UV spectral data in the range 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results indicate the ability of the mentioned multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to the reference HPLC method, with no significant differences observed regarding accuracy and precision. The SRACLS model gives results comparable to the PLSR model; however, it retains the qualitative spectral information of the classical least-squares algorithm for the analyzed components.
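
    The PLSR step admits a compact sketch: predict component concentrations from spectra with the same 16-mixture training / 8-mixture test split described above. The spectra below are simulated Gaussian-shaped bands standing in for AGM, Deg I, and Deg II, not measured data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
wl = np.linspace(215, 350, 136)                   # wavelength grid, nm
bands = np.stack([np.exp(-((wl - c) / 12.0) ** 2) for c in (240, 275, 310)])

C_train = rng.uniform(0.1, 1.0, size=(16, 3))     # 16 training mixtures
A_train = C_train @ bands + rng.normal(0, 0.002, (16, wl.size))
C_test = rng.uniform(0.1, 1.0, size=(8, 3))       # 8 validation mixtures
A_test = C_test @ bands + rng.normal(0, 0.002, (8, wl.size))

pls = PLSRegression(n_components=3).fit(A_train, C_train)
rmsep = np.sqrt(((pls.predict(A_test) - C_test) ** 2).mean(axis=0))
print("RMSEP per component:", np.round(rmsep, 4))
```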

  19. Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Conyers, Howard J.; Mavris, Dimitri N.

    2014-01-01

    This paper introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio and number of control surfaces. A doublet lattice approach is taken to compute generalized forces. A rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information, so there is little required interaction with the model developer, although all parameters can be easily modified if desired. The focus of this paper is on tool presentation, verification and validation. This process is carried out in stages throughout the paper. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool. Therefore the flutter speed and frequency for a clamped plate are computed using V-g and V-f analysis. The computational results are compared to a previously published computational analysis and wind tunnel results for the same structure. Finally a case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to V-g and V-f analysis. This also includes the analysis of the model in response to a 1-cos gust.

  20. Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Conyers, Howard J.; Mavris, Dimitri N.

    2015-01-01

    This paper introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method. Using Roger's approximation, a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer. All parameters can be easily modified if desired. The focus of this paper is on tool presentation, verification, and validation. These processes are carried out in stages throughout the paper. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool, therefore, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model in response to a 1-cos gust.

  1. Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Conyers, Howard Jason; Mavris, Dimitri N.

    2015-01-01

    This report introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method. Using Roger's approximation, a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer. All parameters can be easily modified if desired. The focus of this report is on tool presentation, verification, and validation. These processes are carried out in stages throughout the report. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool, therefore, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model in response to a 1-cos gust.

  2. Comparative Analysis of Academic Grades in Compulsory Secondary Education in Spain Using Statistical Techniques

    ERIC Educational Resources Information Center

    Veas, Alejandro; Gilar, Raquel; Miñano, Pablo; Castejón, Juan Luis

    2017-01-01

    The present study, based on the construct comparability approach, performs a comparative analysis of general points average for seven courses, using exploratory factor analysis (EFA) and the Partial Credit model (PCM) with a sample of 1398 student subjects (M = 12.5, SD = 0.67) from 8 schools in the province of Alicante (Spain). EFA confirmed a…

  3. A Model Comparison for Characterizing Protein Motions from Structure

    NASA Astrophysics Data System (ADS)

    David, Charles; Jacobs, Donald

    2011-10-01

    A comparative study is made using three computational models that characterize native-state dynamics starting from known protein structures taken from four distinct SCOP classifications. A geometrical simulation is performed, and the results are compared to the elastic network model and molecular dynamics. The essential dynamics is quantified by a direct analysis of a mode subspace constructed from ANM and a principal component analysis on both the FRODA and MD trajectories, using the root mean square inner product and principal angles. Relative subspace sizes and overlaps are visualized using the projection of displacement vectors onto the model modes. Additionally, a mode subspace is constructed from PCA on an exemplar set of X-ray crystal structures in order to determine similarity with respect to the generated ensembles. Quantitative analysis reveals significant overlap across the three model subspaces and the model-independent subspace. These results indicate that structure is the key determinant of native-state dynamics.
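
    The two overlap measures named here are short to sketch: the root mean square inner product (RMSIP) between two sets of orthonormal mode vectors, and the principal angles between the subspaces they span. Random orthonormal bases stand in for the ANM/PCA modes.

```python
import numpy as np
from scipy.linalg import subspace_angles

def rmsip(A, B):
    """A, B: (n_dof, k) matrices with orthonormal mode vectors as columns."""
    k = A.shape[1]
    return np.sqrt(np.sum((A.T @ B) ** 2) / k)

rng = np.random.default_rng(6)
Q, _ = np.linalg.qr(rng.normal(size=(300, 20)))
A, B = Q[:, :10], Q[:, 5:15]        # two subspaces sharing 5 basis vectors

print(f"RMSIP: {rmsip(A, B):.3f}")  # sqrt(5/10) ~ 0.707 for this construction
print("principal angles (deg):",
      np.round(np.degrees(subspace_angles(A, B)), 1))
```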

  4. Estimating short-period dynamics using an extended Kalman filter

    NASA Technical Reports Server (NTRS)

    Bauer, Jeffrey E.; Andrisani, Dominick

    1990-01-01

    An extended Kalman filter (EKF) is used to estimate the parameters of a low-order model from aircraft transient response data. The low-order model is a state space model derived from the short-period approximation of the longitudinal aircraft dynamics. The model corresponds to the pitch rate to stick force transfer function currently used in flying qualities analysis. Because of the model chosen, handling qualities information is also obtained. The parameters are estimated from flight data as well as from a six-degree-of-freedom, nonlinear simulation of the aircraft. These two estimates are then compared and the discrepancies noted. The low-order model is able to satisfactorily match both flight data and simulation data from a high-order computer simulation. The parameters obtained from the EKF analysis of flight data are compared to those obtained using frequency response analysis of the flight data. Time delays and damping ratios are compared and are in agreement. This technique demonstrates the potential to determine, in near real time, the extent of differences between computer models and the actual aircraft. Precise knowledge of these differences can help to determine the flying qualities of a test aircraft and lead to more efficient envelope expansion.
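
    A minimal EKF sketch in the spirit of this abstract: estimate one unknown parameter of a scalar linear system by augmenting the state with it, so the filter is nonlinear even though the dynamics are linear. The system and all values are invented, not the aircraft short-period model.

```python
import numpy as np

rng = np.random.default_rng(7)
a_true, T = 0.85, 300                          # unknown parameter, record length
u = rng.normal(size=T)
x = np.zeros(T + 1)
for t in range(T):
    x[t + 1] = a_true * x[t] + u[t] + 0.05 * rng.normal()
z = x[1:] + 0.1 * rng.normal(size=T)           # noisy measurements

# Augmented state s = [x, a]; dynamics f(s) = [a*x + u, a]; output h(s) = x.
s = np.array([0.0, 0.5])                       # initial guesses
P = np.diag([1.0, 1.0])
Q = np.diag([0.05 ** 2, 1e-6])                 # small random walk on 'a'
R = 0.1 ** 2
for t in range(T):
    F = np.array([[s[1], s[0]], [0.0, 1.0]])   # Jacobian of f at current s
    s = np.array([s[1] * s[0] + u[t], s[1]])   # predict
    P = F @ P @ F.T + Q
    H = np.array([[1.0, 0.0]])                 # update with measurement z[t]
    K = P @ H.T / (H @ P @ H.T + R)
    s = s + (K * (z[t] - s[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated a: {s[1]:.3f}   (true value {a_true})")
```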

  5. COMPARING THE UTILITY OF MULTIMEDIA MODELS FOR HUMAN AND ECOLOGICAL EXPOSURE ANALYSIS: TWO CASES

    EPA Science Inventory

    A number of models are available for exposure assessment; however, few are used as tools for both human and ecosystem risks. This discussion will consider two modeling frameworks that have recently been used to support human and ecological decision making. The study will compare ...

  6. Accuracy of Bolton analysis measured in laser scanned digital models compared with plaster models (gold standard) and cone-beam computer tomography images.

    PubMed

    Kim, Jooseong; Lagravére, Manuel O

    2016-01-01

    The aim of this study was to compare the accuracy of Bolton analysis obtained from digital models scanned with the Ortho Insight three-dimensional (3D) laser scanner system to those obtained from cone-beam computed tomography (CBCT) images and traditional plaster models. CBCT scans and plaster models were obtained from 50 patients. Plaster models were scanned using the Ortho Insight 3D laser scanner; Bolton ratios were calculated with its software. CBCT scans were imported and analyzed using AVIZO software. Plaster models were measured with a digital caliper. Data were analyzed with descriptive statistics and the intraclass correlation coefficient (ICC). Anterior and overall Bolton ratios obtained by the three different modalities exhibited excellent agreement (> 0.970). The mean differences between the scanned digital models and physical models and between the CBCT images and scanned digital models for overall Bolton ratios were 0.41 ± 0.305% and 0.45 ± 0.456%, respectively; for anterior Bolton ratios, 0.59 ± 0.520% and 1.01 ± 0.780%, respectively. ICC results showed that intraexaminer error reliability was generally excellent (> 0.858 for all three diagnostic modalities), with < 1.45% discrepancy in the Bolton analysis. Laser scanned digital models are highly accurate compared to physical models and CBCT scans for assessing the spatial relationships of dental arches for orthodontic diagnosis.

  7. A Comparison of Alternative Approaches to the Analysis of Interrupted Time-Series.

    ERIC Educational Resources Information Center

    Harrop, John W.; Velicer, Wayne F.

    1985-01-01

    Computer-generated data representative of 16 Autoregressive Integrated Moving Average (ARIMA) models were used to compare the results of interrupted time-series analysis using: (1) the known model identification, (2) an assumed (1,0,0) model, and (3) an assumed (3,0,0) model as an approximation to the General Transformation approach. (Author/BW)
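
    An interrupted time-series fit with an assumed (1,0,0) model is a one-liner in statsmodels: a step regressor captures the level change at the intervention point. The series and intervention effect below are simulated for illustration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
T, t0 = 120, 60                                # series length, intervention time
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.normal()       # AR(1) background process
y[t0:] += 2.0                                  # true intervention effect
step = (np.arange(T) >= t0).astype(float)

fit = ARIMA(y, exog=step, order=(1, 0, 0)).fit()
print(fit.params)                              # includes the step coefficient
```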

  8. A Rational Analysis of Rule-Based Concept Learning

    ERIC Educational Resources Information Center

    Goodman, Noah D.; Tenenbaum, Joshua B.; Feldman, Jacob; Griffiths, Thomas L.

    2008-01-01

    This article proposes a new model of human concept learning that provides a rational analysis of learning feature-based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space--a concept language of logical rules. This article compares the model predictions to human generalization judgments in several…

  9. A selection model for accounting for publication bias in a full network meta-analysis.

    PubMed

    Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia

    2014-12-30

    Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency. Copyright © 2014 John Wiley & Sons, Ltd.

  10. A Preliminary Bayesian Analysis of Incomplete Longitudinal Data from a Small Sample: Methodological Advances in an International Comparative Study of Educational Inequality

    ERIC Educational Resources Information Center

    Hsieh, Chueh-An; Maier, Kimberly S.

    2009-01-01

    The capacity of Bayesian methods in estimating complex statistical models is undeniable. Bayesian data analysis is seen as having a range of advantages, such as an intuitive probabilistic interpretation of the parameters of interest, the efficient incorporation of prior information to empirical data analysis, model averaging and model selection.…

  11. 2012 National Park visitor spending effects: economic contributions to local communities, states, and the nation

    USGS Publications Warehouse

    Cullinane Thomas, Catherine; Huber, Christopher C.; Koontz, Lynne

    2014-01-01

    This 2012 analysis marks a major revision to the NPS visitor spending effects analyses, with the development of a new visitor spending effects model (VSE model) that replaces the former Money Generation Model (MGM2). Many of the hallmarks and processes of the MGM2 model are preserved in the new VSE model, but the new model makes significant strides in improving the accuracy and transparency of the analysis. Because of this change from the MGM2 model to the VSE model, estimates from this year’s analysis are not directly comparable to previous analyses.

  12. Cost-effectiveness of rivaroxaban for stroke prevention in atrial fibrillation in the Portuguese setting.

    PubMed

    Morais, João; Aguiar, Carlos; McLeod, Euan; Chatzitheofilou, Ismini; Fonseca Santos, Isabel; Pereira, Sónia

    2014-09-01

    To project the long-term cost-effectiveness of treating non-valvular atrial fibrillation (AF) patients for stroke prevention with rivaroxaban compared to warfarin in Portugal. A Markov model was used that included health and treatment states describing the management and consequences of AF and its treatment. The model's time horizon was set at a patient's lifetime and each cycle at three months. The analysis was conducted from a societal perspective and a 5% discount rate was applied to both costs and outcomes. Treatment effect data were obtained from the pivotal phase III ROCKET AF trial. The model was also populated with utility values obtained from the literature and with cost data derived from official Portuguese sources. The outcomes of the model included life-years, quality-adjusted life-years (QALYs), incremental costs, and associated incremental cost-effectiveness ratios (ICERs). Extensive sensitivity analyses were undertaken to further assess the findings of the model. As there is evidence indicating underuse and underprescription of warfarin in Portugal, an additional analysis was performed using a mixed comparator composed of no treatment, aspirin, and warfarin, which better reflects real-world prescribing in Portugal. This cost-effectiveness analysis produced an ICER of €3895/QALY for the base-case analysis (vs. warfarin) and of €6697/QALY for the real-world prescribing analysis (vs. mixed comparator). The findings were robust when tested in sensitivity analyses. The results showed that rivaroxaban may be a cost-effective alternative compared with warfarin or real-world prescribing in Portugal. Copyright © 2014 Sociedade Portuguesa de Cardiologia. Published by Elsevier España. All rights reserved.
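
    The Markov cohort mechanics behind such an ICER are easy to sketch. States, quarterly transition probabilities, costs, and utilities below are all invented placeholders; the published model used ROCKET AF effect data and Portuguese cost sources.

```python
import numpy as np

states = ["well", "post-stroke", "dead"]
P = {  # quarterly transition matrices (invented values)
    "warfarin":    np.array([[0.985, 0.010, 0.005],
                             [0.000, 0.980, 0.020],
                             [0.000, 0.000, 1.000]]),
    "rivaroxaban": np.array([[0.988, 0.007, 0.005],
                             [0.000, 0.980, 0.020],
                             [0.000, 0.000, 1.000]]),
}
cost = {"warfarin": np.array([80.0, 1500.0, 0.0]),       # EUR per cycle (invented)
        "rivaroxaban": np.array([250.0, 1500.0, 0.0])}
utility = np.array([0.80, 0.55, 0.0]) * 0.25             # QALYs per quarter
disc = 1 - 0.05 / 4                                      # ~5%/yr, per 3-month cycle

def run(tx, cycles=160):                                 # ~40-year horizon
    dist, c, q = np.array([1.0, 0.0, 0.0]), 0.0, 0.0
    for t in range(cycles):
        c += disc ** t * dist @ cost[tx]                 # discounted cycle cost
        q += disc ** t * dist @ utility                  # discounted cycle QALYs
        dist = dist @ P[tx]                              # advance the cohort
    return c, q

(c_w, q_w), (c_r, q_r) = run("warfarin"), run("rivaroxaban")
print(f"ICER: {(c_r - c_w) / (q_r - q_w):.0f} EUR per QALY gained")
```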

  13. Comparing the Fit of Item Response Theory and Factor Analysis Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo

    2011-01-01

    Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…

  14. Accounting for standard errors of vision-specific latent trait in regression models.

    PubMed

    Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L

    2014-07-11

    To demonstrate the effectiveness of a Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analyses performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and a multiple linear regression model for the assessment of association effects related to vision-specific latent traits. This novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously; its effectiveness was compared in our simulation study with the frequently used two-stage "separate-analysis" approach (Rasch analysis followed by traditional statistical analyses without adjustment for the SE of the latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took into consideration the SEs of the estimated Rasch-scaled scores. On real data, the two approaches differed in effect size estimates and in the identification of "independent risk factors." Simulation results showed that our proposed HB one-stage "joint-analysis" approach produces greater accuracy (on average, a 5-fold decrease in bias) with comparable power and precision in the estimation of associations when compared with the frequently used two-stage "separate-analysis" procedure, despite accounting for greater uncertainty due to the latent trait. Patient-reported data analyzed using Rasch techniques do not usually take the SE of the latent trait into account in association analyses. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimations and information about the independent association of exposure variables with vision-specific latent traits. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  15. ASTROP2 users manual: A program for aeroelastic stability analysis of propfans

    NASA Technical Reports Server (NTRS)

    Narayanan, G. V.; Kaza, K. R. V.

    1991-01-01

    A user's manual is presented for the aeroelastic stability and response of propulsion systems computer program called ASTROP2. The ASTROP2 code performs aeroelastic stability analysis of rotating propfan blades. This analysis uses a two-dimensional, unsteady cascade aerodynamics model and a three-dimensional, normal-mode structural model. Analytical stability results from this code are compared with published experimental results for a rotating composite advanced turboprop model and for a nonrotating metallic wing model.

  16. Display analysis with the optimal control model of the human operator. [pilot-vehicle display interface and information processing

    NASA Technical Reports Server (NTRS)

    Baron, S.; Levison, W. H.

    1977-01-01

    Application of the optimal control model of the human operator to problems in display analysis is discussed. Those aspects of the model pertaining to the operator-display interface and to operator information processing are reviewed and discussed. The techniques are then applied to the analysis of advanced display/control systems for a Terminal Configured Vehicle. Model results are compared with those obtained in a large, fixed-base simulation.

  17. Comparative Reannotation of 21 Aspergillus Genomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salamov, Asaf; Riley, Robert; Kuo, Alan

    2013-03-08

    We used comparative gene modeling to reannotate 21 Aspergillus genomes. Initial automatic annotation of individual genomes may contain errors of different kinds, e.g., missing genes, incorrect exon-intron structures, and 'chimeras' that fuse two or more real genes or split real genes into two or more models. The main premise behind the comparative modeling approach is that, for closely related genomes, most orthologous families have the same conserved gene structure. The algorithm maps all gene models predicted in each individual Aspergillus genome to the other genomes and, for each locus, selects from the potentially many competing models the one which most closely resembles the orthologous genes from the other genomes. This procedure is iterated until no further change in gene models is observed. For the Aspergillus genomes we predicted in total 4503 new gene models (~2% per genome), supported by comparative analysis, and additionally corrected ~18% of the old gene models. This resulted in a total of 4065 more genes with annotated PFAM domains (an increase of ~3% per genome). Analysis of a few genomes with EST/transcriptomics data shows that the new annotation sets also have a higher number of EST-supported splice sites at exon-intron boundaries.

  18. Psychological Implications of Motherhood and Fatherhood in Midlife: Evidence from Sibling Models

    ERIC Educational Resources Information Center

    Pudrovska, Tetyana

    2008-01-01

    Using data from 4,744 full, twin, half-, adopted, and stepsiblings in the Wisconsin Longitudinal Study, I examine psychological consequences of motherhood and fatherhood in midlife. My analysis includes between-family models that compare individuals across families and within-family models comparing siblings from the same family to account for…

  19. Measurement of myocardial blood flow by cardiovascular magnetic resonance perfusion: comparison of distributed parameter and Fermi models with single and dual bolus.

    PubMed

    Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik

    2015-02-17

    Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of left ventricle signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels versus gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also performed analysis using both models of single bolus data obtained from five patients with coronary artery disease and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference was observed in these volunteers when applying distributed parameter-myocardial blood flow between single and dual bolus analysis. In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
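
    The Fermi-model half of this comparison can be sketched compactly: the tissue curve is modeled as the arterial input function (AIF) convolved with a Fermi impulse response, and the response amplitude is read as the flow parameter. Everything below is simulated and deliberately simplified; the distributed parameter model the authors favor is more involved.

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 1.0                                        # seconds per frame (assumed)
t = np.arange(0, 60, dt)
aif = (t / 8.0) ** 2 * np.exp(-t / 4.0)         # gamma-variate-like AIF (invented)

def fermi_response(t, F, k, t0):
    """Fermi impulse response: amplitude F, decay rate k, shoulder at t0."""
    return F / (1.0 + np.exp(k * (t - t0)))

def tissue_model(t, F, k, t0):
    # Discrete convolution of the AIF with the Fermi response.
    return np.convolve(aif, fermi_response(t, F, k, t0))[: t.size] * dt

true = (0.02, 0.4, 10.0)
c_tis = tissue_model(t, *true) + np.random.default_rng(9).normal(0, 1e-4, t.size)

popt, _ = curve_fit(tissue_model, t, c_tis, p0=(0.01, 0.2, 5.0))
print(f"recovered flow parameter F: {popt[0]:.4f} (true {true[0]})")
```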

  20. Genome-scale metabolic modeling of Mucor circinelloides and comparative analysis with other oleaginous species.

    PubMed

    Vongsangnak, Wanwipa; Klanchui, Amornpan; Tawornsamretkit, Iyarest; Tatiyaborwornchai, Witthawin; Laoteng, Kobkul; Meechai, Asawin

    2016-06-01

    We present a novel genome-scale metabolic model iWV1213 of Mucor circinelloides, which is an oleaginous fungus for industrial applications. The model contains 1213 genes, 1413 metabolites and 1326 metabolic reactions across different compartments. We demonstrate that iWV1213 is able to accurately predict the growth rates of M. circinelloides on various nutrient sources and culture conditions using Flux Balance Analysis and Phenotypic Phase Plane analysis. Comparative analysis of three oleaginous genome-scale models, including M. circinelloides (iWV1213), Mortierella alpina (iCY1106) and Yarrowia lipolytica (iYL619_PCP) revealed that iWV1213 possesses a higher number of genes involved in carbohydrate, amino acid, and lipid metabolisms that might contribute to its versatility in nutrient utilization. Moreover, the identification of unique and common active reactions among the Zygomycetes oleaginous models using Flux Variability Analysis unveiled a set of gene/enzyme candidates as metabolic engineering targets for cellular improvement. Thus, iWV1213 offers a powerful metabolic engineering tool for multi-level omics analysis, enabling strain optimization as a cell factory platform of lipid-based production. Copyright © 2016 Elsevier B.V. All rights reserved.
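
    Flux Balance Analysis itself is a linear program: maximize a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. The three-reaction toy network below is invented purely to show the structure; a genome-scale model like iWV1213 would be handled with a dedicated tool such as COBRApy.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network:  uptake: -> A (v0);  convert: A -> B (v1);  biomass: B -> (v2).
S = np.array([[ 1, -1,  0],     # metabolite A balance
              [ 0,  1, -1]])    # metabolite B balance
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 (assumed)
c = np.array([0, 0, -1.0])                 # linprog minimizes, so negate biomass

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal biomass flux:", res.x[2])   # limited by the uptake bound
```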

  1. Modeling spatiotemporal covariance for magnetoencephalography or electroencephalography source analysis.

    PubMed

    Plis, Sergey M; George, J S; Jun, S C; Paré-Blagoev, J; Ranken, D M; Wood, C C; Schmidt, D M

    2007-01-01

    We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, which better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite its increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated based on the assumption that the spatial components of background noise have uncorrelated time courses. Another version, which gives a closer approximation, is based on the assumption that the time courses are statistically independent. The accuracy of the structural approximation is compared to that of an existing model based on a single Kronecker product, using both the Frobenius norm of the difference between the spatiotemporal sample covariance and the model, and scatter plots. The performance of our model and of previous models is compared in source analysis of a large number of single-dipole problems with simulated time courses and with background noise from authentic magnetoencephalography data.
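
    A small numerical sketch of the structure described here: a sum of Kronecker products pairing each spatial component covariance with its own temporal covariance, plus the identity that makes a single Kronecker product cheap to invert. The small random matrices stand in for estimated covariances.

```python
import numpy as np

rng = np.random.default_rng(10)
n_space, n_time, n_comp = 6, 5, 3

def random_cov(n):
    """Random symmetric positive-definite matrix as a stand-in covariance."""
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

# One spatial and one temporal covariance per component; sum their Kroneckers.
spatial = [random_cov(n_space) for _ in range(n_comp)]
temporal = [random_cov(n_time) for _ in range(n_comp)]
C = sum(np.kron(Gs, Gt) for Gs, Gt in zip(spatial, temporal))

# A single Kronecker product inverts cheaply, since
# inv(A kron B) = inv(A) kron inv(B); a general sum does not factor this way,
# which is why the paper's model is constructed for a manageable inverse.
C1 = np.kron(spatial[0], temporal[0])
C1_inv = np.kron(np.linalg.inv(spatial[0]), np.linalg.inv(temporal[0]))
print("single-Kronecker inverse check:",
      np.allclose(C1 @ C1_inv, np.eye(n_space * n_time)))
```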

  2. Modified optimal control pilot model for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1992-01-01

    This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm designed for each implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably to the measured experimental results than the other previously proposed simplified models evaluated.

  3. Scientific Ballooning Technologies Workshop STO-2 Thermal Design and Analysis

    NASA Technical Reports Server (NTRS)

    Ferguson, Doug

    2016-01-01

    The heritage thermal model for the full STO-2 (Stratospheric Terahertz Observatory II) vehicle has been updated to model the CSBF (Columbia Scientific Balloon Facility) SIP-14 (Scientific Instrument Package) in detail. Analysis of this model has been performed for the Antarctica FY2017 launch season. Model temperature predictions are compared to previous results from STO-2 review documents.

  4. Diagnostic Classification Models: Are They Necessary? Commentary on Rupp and Templin (2008)

    ERIC Educational Resources Information Center

    Gorin, Joanna S.

    2009-01-01

    In their paper "Unique Characteristics of Diagnostic Classification Models: A Comprehensive Review of the Current State-of-the-Art," Andre Rupp and Jonathan Templin (2008) provide a comparative analysis of selected psychometric models useful for the analysis of multidimensional data for purposes of diagnostic score reporting. Recent assessment…

  5. Posttest analysis of LOFT LOCE L2-3 using the ESA RELAP4 blowdown model. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perryman, J.L.; Samuels, T.K.; Cooper, C.H.

    A posttest analysis of the blowdown portion of Loss-of-Coolant Experiment (LOCE) L2-3, which was conducted in the Loss-of-Fluid Test (LOFT) facility, was performed using the experiment safety analysis (ESA) RELAP4/MOD5 computer model. Measured experimental parameters were compared with the calculations in order to assess the conservatisms in the ESA RELAP4/MOD5 model.

  6. Posttest RELAP4 analysis of LOFT experiment L1-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grush, W.H.; Holmstrom, H.L.O.

    Results of posttest analysis of LOFT loss-of-coolant experiment L1-4 with the RELAP4 code are presented. The results are compared with the pretest prediction and the test data. Differences between the RELAP4 model used for this analysis and that used for the pretest prediction are in the areas of initial conditions, nodalization, emergency core cooling system, broken loop hot leg, and steam generator secondary. In general, these changes made only minor improvement in the comparison of the analytical results to the data. Also presented are the results of a limited study of LOFT downcomer modeling which compared the performance of the conventional single downcomer model with that of the new split downcomer model. A RELAP4 sensitivity calculation with artificially elevated emergency core coolant temperature was performed to highlight the need for an ECC mixing model in RELAP4.

  7. Efficient finite element modelling for the investigation of the dynamic behaviour of a structure with bolted joints

    NASA Astrophysics Data System (ADS)

    Omar, R.; Rani, M. N. Abdul; Yunus, M. A.; Mirza, W. I. I. Wan Iskandar; Zin, M. S. Mohd

    2018-04-01

    A simple structure with bolted joints consists of the structural components, bolts and nuts. There are several methods for modelling structures with bolted joints; however, there is no reliable, efficient and economical modelling method that can accurately predict their dynamic behaviour. Explained in this paper is an investigation that was conducted to obtain an appropriate modelling method for bolted joints. This was carried out by evaluating four different finite element (FE) models of the assembled plates and bolts, namely the solid plates-bolts model, the plates without bolt model, the hybrid plates-bolts model and the simplified plates-bolts model. FE modal analysis was conducted for all four initial FE models of the bolted joints. Results of the FE modal analysis were compared with the experimental modal analysis (EMA) results. EMA was performed to extract the natural frequencies and mode shapes of the physical test structure with bolted joints. Evaluation was made by comparing the number of nodes, number of elements, elapsed computer processing unit (CPU) time, and the total percentage of error of each initial FE model when compared with the EMA result. The evaluation showed that the simplified plates-bolts model could most accurately predict the dynamic behaviour of the structure with bolted joints. This study proved that reliable, efficient and economical modelling of bolted joints, mainly the representation of the bolting, plays a crucial role in ensuring the accuracy of the dynamic behaviour prediction.

  8. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
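
    A minimal sketch of the second building method, forward stepwise regression, on synthetic calibration-style data (the data, term count, and stopping rule are illustrative assumptions, not the paper's actual algorithm):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic "calibration" data: the response depends on a few of the
    # candidate math-model terms -- purely illustrative, not balance data.
    n, terms = 40, 6
    X = rng.normal(size=(n, terms))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.1, n)

    def rss(cols):
        """Residual sum of squares of an ordinary least-squares fit."""
        A = X[:, cols]
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return r @ r

    # Forward stepwise selection: greedily add the term that reduces the
    # RSS most, stopping when the relative improvement becomes negligible.
    selected, remaining = [], list(range(terms))
    current = y @ y
    while remaining:
        best = min(remaining, key=lambda j: rss(selected + [j]))
        new = rss(selected + [best])
        if (current - new) / current < 0.01:
            break
        selected.append(best)
        remaining.remove(best)
        current = new

    print("selected term indices:", selected)  # expect [0, 3]
    ```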

  9. Comparative Logic Modeling for Policy Analysis: The Case of HIV Testing Policy Change at the Department of Veterans Affairs

    PubMed Central

    Langer, Erika M; Gifford, Allen L; Chan, Kee

    2011-01-01

    Objective Logic models have been used to evaluate policy programs, plan projects, and allocate resources. Logic Modeling for policy analysis has been used rarely in health services research but can be helpful in evaluating the content and rationale of health policies. Comparative Logic Modeling is used here on human immunodeficiency virus (HIV) policy statements from the Department of Veterans Affairs (VA) and Centers for Disease Control and Prevention (CDC). We created visual representations of proposed HIV screening policy components in order to evaluate their structural logic and research-based justifications. Data Sources and Study Design We performed content analysis of VA and CDC HIV testing policy documents in a retrospective case study. Data Collection Using comparative Logic Modeling, we examined the content and primary sources of policy statements by the VA and CDC. We then quantified evidence-based causal inferences within each statement. Principal Findings VA HIV testing policy structure largely replicated that of the CDC guidelines. Despite similar design choices, chosen research citations did not overlap. The agencies used evidence to emphasize different components of the policies. Conclusion Comparative Logic Modeling can be used by health services researchers and policy analysts more generally to evaluate structural differences in health policies and to analyze research-based rationales used by policy makers. PMID:21689094

  10. A comparative analysis of modeled and monitored ambient hazardous air pollutants in Texas: a novel approach using concordance correlation.

    PubMed

    Lupo, Philip J; Symanski, Elaine

    2009-11-01

    Often, in studies evaluating the health effects of hazardous air pollutants (HAPs), researchers rely on ambient air levels to estimate exposure. Two potential data sources are modeled estimates from the U.S. Environmental Protection Agency (EPA) Assessment System for Population Exposure Nationwide (ASPEN) and ambient air pollutant measurements from monitoring networks. The goal was to conduct comparisons of modeled and monitored estimates of HAP levels in the state of Texas using traditional approaches and a previously unexploited method, concordance correlation analysis, to better inform decisions regarding agreement. Census tract-level ASPEN estimates and monitoring data for all HAPs throughout Texas, available from the EPA Air Quality System, were obtained for 1990, 1996, and 1999. Monitoring sites were mapped to census tracts using U.S. Census data. Exclusions were applied to restrict the monitored data to measurements collected using a common sampling strategy with minimal missing values over time. Comparisons were made for 28 HAPs in 38 census tracts located primarily in urban areas throughout Texas. For each pollutant and by year of assessment, modeled and monitored air pollutant annual levels were compared using standard methods (i.e., ratios of model-to-monitor annual levels). Concordance correlation analysis was also used, which assesses linearity and agreement while providing a formal method of statistical inference. Forty-eight percent of the median model-to-monitor values fell between 0.5 and 2, whereas only 17% of concordance correlation coefficients were significant and greater than 0.5. On the basis of concordance correlation analysis, the findings indicate there is poorer agreement when compared with the previously applied ad hoc methods to assess comparability between modeled and monitored levels of ambient HAPs.
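
    For readers unfamiliar with the concordance correlation approach, the sketch below computes Lin's concordance correlation coefficient alongside simple model-to-monitor ratios; the pollutant values are hypothetical, not the Texas data:

    ```python
    import numpy as np

    def concordance_correlation(x, y):
        """Lin's concordance correlation coefficient for paired data."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        sxy = np.mean((x - mx) * (y - my))   # covariance (population form)
        return 2.0 * sxy / (x.var() + y.var() + (mx - my) ** 2)

    # Hypothetical annual HAP levels (ug/m3) for a handful of census tracts:
    modeled   = [1.2, 0.8, 2.5, 3.1, 1.9, 0.6]
    monitored = [1.0, 1.1, 2.2, 3.5, 1.4, 0.9]

    print("model-to-monitor ratios:", np.round(np.divide(modeled, monitored), 2))
    print("concordance correlation:", round(concordance_correlation(modeled, monitored), 3))
    ```

    Unlike the Pearson coefficient, the (mx - my)^2 term in the denominator penalizes systematic offsets between modeled and monitored levels, which is why concordance gives the stricter view of agreement reported above.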

  11. A Comparative of business process modelling techniques

    NASA Astrophysics Data System (ADS)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    Today there are many business process modelling techniques. This article reports research on the differences among business process modelling techniques. For each technique, the definition and the structure are explained. The paper presents a comparative analysis of several popular business process modelling techniques based on two criteria: the notation and how each technique works when implemented at Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and can serve as the basis for evaluating further modelling techniques.

  12. TRAC-PD2 posttest analysis of the CCTF Evaluation-Model Test C1-19 (Run 38). [PWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motley, F.

    The results of a Transient Reactor Analysis Code posttest analysis of the Cylindrical Core Test Facility Evaluation-Model Test agree very well with the results of the experiment. The good agreement obtained verifies the multidimensional analysis capability of the TRAC code. Because of the steep radial power profile, the importance of using fine noding in the core region was demonstrated (as compared with poorer results obtained from an earlier pretest prediction that used a coarsely noded model).

  13. How Stationary Are the Internal Tides in a High-Resolution Global Ocean Circulation Model?

    DTIC Science & Technology

    2014-05-12

    Egbert et al., 1994] and that the model global internal tide amplitudes compare well with an altimetric-based tidal analysis [Ray and Byrne, 2010]. The... analysis [Foreman, 1977] applied to the HYCOM total SSH. We will follow Shriver et al. [2012], analyzing the tides along satellite altimeter tracks...spots," the comparison between the model and altimetric analysis is not as good due, in part, to two problems, errors in the model barotropic tides and

  14. Predicting Air Permeability of Handloom Fabrics: A Comparative Analysis of Regression and Artificial Neural Network Models

    NASA Astrophysics Data System (ADS)

    Mitra, Ashis; Majumdar, Prabal Kumar; Bannerjee, Debamalya

    2013-03-01

    This paper presents a comparative analysis of two modeling methodologies for the prediction of air permeability of plain woven handloom cotton fabrics. Four basic fabric constructional parameters, namely ends per inch, picks per inch, warp count and weft count, have been used as inputs for artificial neural network (ANN) and regression models. Of the four regression models tried, the interaction model showed very good prediction performance with a meager mean absolute error of 2.017 %. However, the ANN models demonstrated superiority over the regression models in terms of both correlation coefficient and mean absolute error. The ANN model with 10 nodes in the single hidden layer showed very good correlation coefficients of 0.982 and 0.929 and mean absolute errors of only 0.923 % and 2.043 % for the training and testing data, respectively.

  15. Accuracy of Bolton analysis measured in laser scanned digital models compared with plaster models (gold standard) and cone-beam computer tomography images

    PubMed Central

    Kim, Jooseong

    2016-01-01

    Objective The aim of this study was to compare the accuracy of Bolton analysis obtained from digital models scanned with the Ortho Insight three-dimensional (3D) laser scanner system to those obtained from cone-beam computed tomography (CBCT) images and traditional plaster models. Methods CBCT scans and plaster models were obtained from 50 patients. Plaster models were scanned using the Ortho Insight 3D laser scanner; Bolton ratios were calculated with its software. CBCT scans were imported and analyzed using AVIZO software. Plaster models were measured with a digital caliper. Data were analyzed with descriptive statistics and the intraclass correlation coefficient (ICC). Results Anterior and overall Bolton ratios obtained by the three different modalities exhibited excellent agreement (> 0.970). The mean differences between the scanned digital models and physical models and between the CBCT images and scanned digital models for overall Bolton ratios were 0.41 ± 0.305% and 0.45 ± 0.456%, respectively; for anterior Bolton ratios, 0.59 ± 0.520% and 1.01 ± 0.780%, respectively. ICC results showed that intraexaminer error reliability was generally excellent (> 0.858 for all three diagnostic modalities), with < 1.45% discrepancy in the Bolton analysis. Conclusions Laser scanned digital models are highly accurate compared to physical models and CBCT scans for assessing the spatial relationships of dental arches for orthodontic diagnosis. PMID:26877978
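
    As background, a Bolton ratio is simply the summed mesiodistal tooth widths of one arch over the other, times 100; a minimal sketch with hypothetical widths (not the study's measurements):

    ```python
    def bolton_ratio(mandibular_widths, maxillary_widths):
        """Bolton ratio: summed mandibular widths over maxillary widths, x100."""
        return 100.0 * sum(mandibular_widths) / sum(maxillary_widths)

    # Hypothetical mesiodistal widths (mm) of the six anterior teeth per arch:
    mandibular_anterior = [5.4, 5.9, 6.8, 6.9, 5.8, 5.5]
    maxillary_anterior  = [8.6, 6.7, 7.9, 8.0, 6.6, 8.5]

    ratio = bolton_ratio(mandibular_anterior, maxillary_anterior)
    print(f"anterior Bolton ratio: {ratio:.1f}%  (Bolton's ideal is about 77.2%)")
    ```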

  16. Using structural equation modeling for network meta-analysis.

    PubMed

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

    Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or within frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. Because random effects are explicitly modeled as latent variables in SEM, analysts have great flexibility in specifying complex random-effect structures and imposing linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed- and random-effect network meta-analysis models can be easily implemented in SEM. The dataset contains results of 26 studies that directly compared three treatment groups, A, B and C, for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the unrestricted weighted least squares (UWLS) method can be undertaken using SEM. For both the fixed- and random-effect network meta-analysis, SEM yielded coefficients and confidence intervals similar to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed-effect model, but the confidence intervals were wider. This is consistent with results from the traditional pairwise meta-analyses. Compared with the UWLS model with a common variance adjustment factor, the UWLS model with unique variance adjustment factors yields wider confidence intervals when heterogeneity is larger within a pairwise comparison; it thereby reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis is still to be explored.
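
    To make the fixed-effect model concrete, here is a minimal inverse-variance weighted least squares sketch for a three-treatment network; the effect sizes and variances are invented for illustration and are not the 26-study cirrhosis dataset:

    ```python
    import numpy as np

    # Each row: a study's observed contrast (e.g. log odds ratio), its
    # variance, and a design row in the basic parameters (d_AB, d_AC);
    # the consistency assumption gives d_BC = d_AC - d_AB.
    y = np.array([-0.30, -0.45, -0.12, -0.50, -0.20])   # invented effects
    v = np.array([0.040, 0.060, 0.055, 0.080, 0.050])   # their variances
    X = np.array([[1, 0],     # A vs B
                  [0, 1],     # A vs C
                  [-1, 1],    # B vs C = d_AC - d_AB
                  [0, 1],     # A vs C
                  [1, 0]])    # A vs B

    W = np.diag(1.0 / v)                                # inverse-variance weights
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)    # fixed-effect estimates
    se = np.sqrt(np.diag(np.linalg.inv(X.T @ W @ X)))

    for name, b, s in zip(["d_AB", "d_AC"], beta, se):
        print(f"{name} = {b:.3f} (95% CI {b - 1.96*s:.3f} to {b + 1.96*s:.3f})")
    ```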

  17. A Comparative Test of Work-Family Conflict Models and Critical Examination of Work-Family Linkages

    ERIC Educational Resources Information Center

    Michel, Jesse S.; Mitchelson, Jacqueline K.; Kotrba, Lindsey M.; LeBreton, James M.; Baltes, Boris B.

    2009-01-01

    This paper is a comprehensive meta-analysis of over 20 years of work-family conflict research. A series of path analyses were conducted to compare and contrast existing work-family conflict models, as well as a new model we developed which integrates and synthesizes current work-family theory and research. This new model accounted for 40% of the…

  18. The experimentation of LC7E learning model on the linear program material in terms of interpersonal intelligence on Wonogiri vocational school students

    NASA Astrophysics Data System (ADS)

    Antinah; Kusmayadi, T. A.; Husodo, B.

    2018-05-01

    This study aims to determine the effect of learning model on student achievement in terms of interpersonal intelligence. The compared learning models are the LC7E and direct learning models. The research is quasi-experimental with a 2x3 factorial design. The population is Grade XI students of Wonogiri Vocational Schools; the sample was selected by stratified cluster random sampling. Data were collected using questionnaires, documentation and tests, and analysed using two-way analysis of variance with unequal cells, preceded by prerequisite analyses: a balance test, a normality test and a homogeneity test. The conclusions of this research are: 1) the mathematics learning achievement of students taught with the LC7E learning model is better than that of students taught with direct learning; 2) the mathematics learning achievement of students with a high level of interpersonal intelligence is better than that of students with medium or low interpersonal intelligence, and students with medium interpersonal intelligence achieve better than those with low interpersonal intelligence on linear programming; 3) the LC7E learning model produced better mathematics learning achievement than the direct learning model for each category of students' interpersonal intelligence on the linear program material.

  20. Developing Multidimensional Likert Scales Using Item Factor Analysis: The Case of Four-Point Items

    ERIC Educational Resources Information Center

    Asún, Rodrigo A.; Rdz-Navarro, Karina; Alvarado, Jesús M.

    2016-01-01

    This study compares the performance of two approaches in analysing four-point Likert rating scales with a factorial model: the classical factor analysis (FA) and the item factor analysis (IFA). For FA, maximum likelihood and weighted least squares estimations using Pearson correlation matrices among items are compared. For IFA, diagonally weighted…

  1. Royal London space analysis: plaster versus digital model assessment.

    PubMed

    Grewal, Balpreet; Lee, Robert T; Zou, Lifong; Johal, Ama

    2017-06-01

    With the advent of digital study models, the ability to evaluate space requirements becomes valuable to treatment planning and to the justification of any required extraction pattern. This study was undertaken to compare the validity and reliability of the Royal London space analysis (RLSA) performed on plaster versus digital models. A pilot study (n = 5) was undertaken on plaster and digital models to evaluate the feasibility of digital space planning. This also informed the sample size calculation; as a result, 30 sets of study models meeting specified inclusion criteria were selected. All five components of the RLSA, namely crowding; depth of occlusal curve; arch expansion/contraction; incisor antero-posterior advancement; and incisor inclination (assessed from the pre-treatment lateral cephalogram), were accounted for in relation to both model types. The plaster models served as the gold standard. Intra-operator measurement error (reliability) was evaluated along with a direct comparison of the measured digital values (validity) with the plaster models. The measurement error, or coefficient of repeatability, was comparable for plaster and digital space analyses and ranged from 0.66 to 0.95 mm. No difference was found between the space analysis performed in either the upper or the lower dental arch; hence, the null hypothesis was accepted. The digital model measurements were consistently larger, albeit by a relatively small amount, than the plaster measurements (0.35 mm upper arch and 0.32 mm lower arch). No difference was detected in the RLSA when performed using either plaster or digital models. Thus, digital space analysis provides a valid and reproducible alternative method in the new era of digital records. © The Author 2016. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com

  2. Reliability of four models for clinical gait analysis.

    PubMed

    Kainz, Hans; Graham, David; Edwards, Julie; Walsh, Henry P J; Maine, Sheanna; Boyd, Roslyn N; Lloyd, David G; Modenese, Luca; Carty, Christopher P

    2017-05-01

    Three-dimensional gait analysis (3DGA) has become a common clinical tool for treatment planning in children with cerebral palsy (CP). Many clinical gait laboratories use the conventional gait analysis model (e.g. the Plug-in-Gait model), which uses Direct Kinematics (DK) for joint kinematic calculations, whereas musculoskeletal models, mainly used for research, use Inverse Kinematics (IK). Musculoskeletal IK models have the advantage of enabling additional analyses which might improve clinical decision-making in children with CP. Before any new model can be used in a clinical setting, its reliability has to be evaluated and compared to a commonly used clinical gait model (e.g. the Plug-in-Gait model), which was the purpose of this study. Two testers performed 3DGA in eleven CP and seven typically developing participants on two occasions. Intra- and inter-tester standard deviations (SD) and standard error of measurement (SEM) were used to compare the reliability of two DK models (Plug-in-Gait and a six degrees-of-freedom model solved using Vicon software) and two IK models (two modifications of 'gait2392' solved using OpenSim). All models showed good reliability (mean SEM of 3.0° over all analysed models and joint angles). Variations in joint kinetics were smaller in typically developing than in CP participants. The modified 'gait2392' model, which included all the joint rotations commonly reported in clinical 3DGA, showed reasonably reliable joint kinematic and kinetic estimates, and allows additional musculoskeletal analysis of surgically adjustable parameters, e.g. muscle-tendon lengths; it is therefore a suitable model for clinical gait analysis. Copyright © 2017. Published by Elsevier B.V.
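
    As a pointer to how such reliability numbers arise, the sketch below computes a within-tester standard error of measurement (SEM) from two synthetic test-retest sessions; SEM = SD of the differences divided by sqrt(2) is one standard formulation, not necessarily the exact computation used in this study:

    ```python
    import numpy as np

    # Repeated measurements of one joint angle (degrees) for six participants
    # on two occasions -- synthetic numbers for illustration only.
    session1 = np.array([12.3, 8.7, 15.2, 10.1, 9.8, 13.4])
    session2 = np.array([13.0, 9.5, 14.6, 11.2, 9.1, 12.8])

    sd_diff = (session1 - session2).std(ddof=1)  # SD of test-retest differences
    sem = sd_diff / np.sqrt(2)                   # standard error of measurement
    print(f"SD of differences: {sd_diff:.2f} deg, SEM: {sem:.2f} deg")
    ```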

  3. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate for such data, because durations are nonnegative and typically skew-distributed. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models, implemented in the R language. This semiparametric model is flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models as compared to Cox models. The linear models are not validated on our data, whereas the Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
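
    A minimal illustration of fitting a Cox proportional hazards model to duration data in Python with the lifelines package (the data are invented, and the subject-level random effects used in the paper are approximated here only by cluster-robust standard errors):

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # Synthetic repeated-duration data: one row per observed duration, with
    # a condition covariate and a subject identifier.
    df = pd.DataFrame({
        "duration": [0.42, 0.88, 0.35, 1.10, 0.60, 0.95, 0.50, 0.77],
        "event":    [1, 1, 1, 1, 1, 0, 1, 1],     # 0 = censored observation
        "group":    [0, 0, 0, 0, 1, 1, 1, 1],     # e.g. speech vs gesture
        "subject":  [1, 1, 2, 2, 3, 3, 4, 4],
    })

    cph = CoxPHFitter()
    # cluster_col requests robust (sandwich) standard errors that account
    # for the repeated measurements per subject.
    cph.fit(df, duration_col="duration", event_col="event",
            cluster_col="subject", formula="group")
    cph.print_summary()
    ```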

  4. Sensitivity of wildlife habitat models to uncertainties in GIS data

    NASA Technical Reports Server (NTRS)

    Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.

    1992-01-01

    Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. The uncertainties and methods described in the paper have general relevance for many GIS applications.
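
    The sketch below illustrates the general idea of such a sensitivity analysis: perturb the input layers of a toy habitat-suitability model over a plausible error range and measure how much the output moves (the model form, weights, and error magnitudes are all assumptions, not the condor study's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy habitat model: weighted sum of three GIS layers (e.g. vegetation,
    # slope, distance to roads), all rescaled to the 0-1 interval.
    weights = np.array([0.5, 0.3, 0.2])
    layers = rng.random((3, 1000))               # 1000 hypothetical grid cells

    baseline = weights @ layers

    # Monte Carlo sensitivity: add noise representing attribute or
    # classification error and record the resulting shift in the output.
    shifts = []
    for _ in range(200):
        noisy = np.clip(layers + rng.normal(0, 0.1, layers.shape), 0, 1)
        shifts.append(np.abs(weights @ noisy - baseline).mean())

    print(f"mean absolute change in suitability: {np.mean(shifts):.3f}")
    ```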

  5. Multi-criteria comparative evaluation of spallation reaction models

    NASA Astrophysics Data System (ADS)

    Andrianov, Andrey; Andrianova, Olga; Konobeev, Alexandr; Korovin, Yury; Kuptsov, Ilya

    2017-09-01

    This paper presents an approach to a comparative evaluation of the predictive ability of spallation reaction models based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE), together with the results of such a comparison for 17 spallation reaction models describing the interaction of high-energy protons with natPb.
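
    Of the methods named, TOPSIS is the most compact to show; a minimal sketch on an invented decision matrix (three candidate models scored on three benefit-type criteria with assumed weights):

    ```python
    import numpy as np

    scores = np.array([[0.80, 0.70, 0.90],     # rows: candidate models
                       [0.60, 0.95, 0.70],     # cols: criteria (benefit type)
                       [0.75, 0.80, 0.85]])
    weights = np.array([0.5, 0.3, 0.2])        # assumed criterion weights

    v = scores / np.linalg.norm(scores, axis=0) * weights  # normalize, weight
    ideal, anti = v.max(axis=0), v.min(axis=0)             # ideal / anti-ideal
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    closeness = d_minus / (d_plus + d_minus)               # TOPSIS closeness

    print("closeness coefficients:", np.round(closeness, 3))
    print("ranking (best first):", np.argsort(-closeness))
    ```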

  6. Analysis of a virtual memory model for maintaining database views

    NASA Technical Reports Server (NTRS)

    Kinsley, Kathryn C.; Hughes, Charles E.

    1992-01-01

    This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.

  7. An Operational Model for the Prediction of Jet Blast

    DOT National Transportation Integrated Search

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules: a jet exhaust model, a jet centerline decay model and an aircraft motion model. The final analysis was compared with d...

  8. An Ontology of Power: Perception and Reality in Conflict

    DTIC Science & Technology

    2016-12-01

    synthetic model was developed as the constant comparative analysis was resumed through the application of selected theory toward the original source...The synthetic model represents a series of maxims for the analysis of a complex social system, developed through a study of contemporary national...and categories. A model of strategic agency is proposed as an alternative framework for developing security strategy. The strategic agency model draws

  9. Multiscale hidden Markov models for photon-limited imaging

    NASA Astrophysics Data System (ADS)

    Nowak, Robert D.

    1999-06-01

    Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.

  10. Methods for the Joint Meta-Analysis of Multiple Tests

    ERIC Educational Resources Information Center

    Trikalinos, Thomas A.; Hoaglin, David C.; Small, Kevin M.; Terrin, Norma; Schmid, Christopher H.

    2014-01-01

    Existing methods for meta-analysis of diagnostic test accuracy focus primarily on a single index test. We propose models for the joint meta-analysis of studies comparing multiple index tests on the same participants in paired designs. These models respect the grouping of data by studies, account for the within-study correlation between the tests'…

  11. An Economic Analysis of Investment in the United States Shipbuilding Industry

    DTIC Science & Technology

    2010-06-01

    using U.S. Bureau of Economic Analysis (BEA) input/output data and the "Leontief inversion process" modeled at Carnegie Mellon University. This sector was compared with five alternative investments. Second, the benefits of the shipyard-related...

  12. Analysis of Whole-Sky Imager Data to Determine the Validity of PCFLOS models

    DTIC Science & Technology

    1992-12-01

    included in the data sample. ...ARIMA models estimated for each...satellites. This model uses the multidimensional Boehm Sawtooth Wave Model to establish climatic probabilities through repetitive simulations of...analysis techniques to develop an ARIMA model for each direction at the Columbia and Kirtland sites. Then, the models can be compared and analyzed to

  13. Optimization and uncertainty assessment of strongly nonlinear groundwater models with high parameter dimensionality

    NASA Astrophysics Data System (ADS)

    Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun

    2010-10-01

    Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
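
    The core trick of null-space Monte Carlo can be shown compactly: parameter directions with near-zero singular values of the Jacobian leave the simulated observations (to first order) unchanged, so sampling along them explores parameter uncertainty without refitting from scratch. A toy sketch with a random stand-in Jacobian, not the groundwater model's:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in Jacobian of simulated observations w.r.t. parameters at the
    # calibrated optimum; four parameter directions are made uninformed.
    n_obs, n_par = 20, 10
    J = rng.normal(size=(n_obs, n_par))
    J[:, 6:] *= 1e-6

    U, s, Vt = np.linalg.svd(J)
    null_dims = s < 1e-3 * s.max()        # near-zero singular values
    V_null = Vt[null_dims]                # basis of the (numerical) null space

    theta_cal = np.ones(n_par)            # "calibrated" parameter values
    for _ in range(3):
        # Perturb only along null-space directions: the fit barely changes.
        theta = theta_cal + V_null.T @ rng.normal(size=V_null.shape[0])
        print("max first-order change in obs:",
              f"{np.abs(J @ (theta - theta_cal)).max():.2e}")
    ```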

  14. Dynamic analysis of I cross beam section dissimilar plate joined by TIG welding

    NASA Astrophysics Data System (ADS)

    Sani, M. S. M.; Nazri, N. A.; Rani, M. N. Abdul; Yunus, M. A.

    2018-04-01

    In this paper, a finite element (FE) joint modelling technique for prediction of the dynamic properties of sheet metal joined by tungsten inert gas (TIG) welding is presented. I cross-section dissimilar flat plates of two series of aluminium alloy, AA7075 and AA6061, joined by TIG welding are used. In order to find the most optimum representation of the TIG-welded dissimilar plates, three types of FE joint modelling were employed in this study: bar element (CBAR), beam element and spot weld element connector (CWELD). Experimental modal analysis (EMA) was carried out by impact hammer excitation on the dissimilar plates welded by the TIG method. Modal properties of the FE models with joints were compared and validated against the modal testing. The CWELD element was chosen to represent the TIG joints because of its accurate prediction of mode shapes and because it contains an updating parameter for weld modelling, in contrast to the other weld models. Model updating was performed to improve the correlation between EMA and FEA; before updating, a sensitivity analysis was done to select the most sensitive updating parameters. After model updating, the average percentage error of the natural frequencies for the CWELD model improved significantly.

  15. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Günther, Michael; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
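
    As a concrete example of the kind of sensitivity being computed, the sketch below differentiates the solution of a first-order (Zajac-style) activation ODE with respect to its time constant by central finite differences; the equation form and parameter values are simplified assumptions:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Linear activation dynamics: da/dt = (u - a) / tau, with step input u = 1.
    def activation(t, a, u, tau):
        return (u - a) / tau

    def solve(tau, u=1.0, a0=0.0, t_end=0.5):
        sol = solve_ivp(activation, (0, t_end), [a0], args=(u, tau),
                        dense_output=True, rtol=1e-8)
        return sol.sol(np.linspace(0, t_end, 50))[0]

    tau, h = 0.04, 1e-6
    sens = (solve(tau + h) - solve(tau - h)) / (2 * h)   # da/dtau along time
    print(f"peak |da/dtau|: {np.abs(sens).max():.1f}")
    ```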

  16. Aerodynamic design and analysis of small horizontal axis wind turbine blades

    NASA Astrophysics Data System (ADS)

    Tang, Xinzi

    This work investigates the aerodynamic design and analysis of small horizontal axis wind turbine blades via the blade element momentum (BEM) based approach and the computational fluid dynamics (CFD) based approach. From this research, it is possible to draw a series of detailed guidelines on small wind turbine blade design and analysis. The research also provides a platform for further comprehensive study using these two approaches. The wake induction corrections and stall corrections of the BEM method were examined through a case study of the NREL/NASA Phase VI wind turbine. A hybrid stall correction model was proposed to analyse wind turbine power performance. The proposed model shows improvement in power prediction for the validation case, compared with the existing stall correction models. The effects of the key rotor parameters of a small wind turbine as well as the blade chord and twist angle distributions on power performance were investigated through two typical wind turbines, i.e. a fixed-pitch variable-speed (FPVS) wind turbine and a fixed-pitch fixed-speed (FPFS) wind turbine. An engineering blade design and analysis code was developed in MATLAB to accommodate aerodynamic design and analysis of the blades. The linearisation of radial profiles of blade chord and twist angle for the FPFS wind turbine blade design was discussed. Results show that the proposed linearisation approach leads to reduced manufacturing cost and higher annual energy production (AEP), with minimal effects on the low wind speed performance. Comparative studies of mesh and turbulence models in 2D and 3D CFD modelling were conducted. The CFD-predicted lift and drag coefficients of the airfoil S809 were compared with wind tunnel test data, and the 3D CFD modelling method of the NREL/NASA Phase VI wind turbine was validated against measurements. Airfoil aerodynamic characterisation and wind turbine power performance as well as 3D flow details were studied. The detailed flow characteristics from the CFD modelling are quantitatively comparable to the measurements, such as blade surface pressure distribution and integrated forces and moments. It is confirmed that the CFD approach is able to provide a more detailed qualitative and quantitative analysis for wind turbine airfoils and rotors.

  17. Safety assessment of plant varieties using transcriptomics profiling and a one-class classifier.

    PubMed

    van Dijk, Jeroen P; de Mello, Carla Souza; Voorhuijzen, Marleen M; Hutten, Ronald C B; Arisi, Ana Carolina Maisonnave; Jansen, Jeroen J; Buydens, Lutgarde M C; van der Voet, Hilko; Kok, Esther J

    2014-10-01

    An important part of the current hazard identification of novel plant varieties is comparative targeted analysis of the novel and reference varieties. Comparative analysis will become much more informative with unbiased analytical approaches, e.g. omics profiling. Data analysis estimating the similarity of new varieties to a reference baseline class of known safe varieties would subsequently greatly facilitate hazard identification. Further biological and eventually toxicological analysis would then only be necessary for varieties that fall outside this reference class. For this purpose, a one-class classifier tool was explored to assess and classify transcriptome profiles of potato (Solanum tuberosum) varieties in a model study. Profiles of six different varieties, from two locations of growth and two years of harvest, with biological and technical replication, were used to build the model. Two scenarios were applied, representing evaluation of a 'different' variety and a 'similar' variety. Within the model, higher class distances resulted for the 'different' test set compared with the 'similar' test set. The present study may contribute to a more global hazard identification of novel plant varieties. Copyright © 2014 Elsevier Inc. All rights reserved.
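
    The one-class classification step can be sketched with scikit-learn; synthetic "profiles" stand in for the potato transcriptome data, and the paper's actual classifier may differ:

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(2)

    # Reference class: profiles of known safe varieties (synthetic features).
    reference = rng.normal(0.0, 1.0, size=(60, 20))

    clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(reference)

    similar   = rng.normal(0.0, 1.0, size=(5, 20))   # same generating process
    different = rng.normal(2.5, 1.0, size=(5, 20))   # shifted expression

    print("similar variety   ->", clf.predict(similar))    # mostly +1 (in class)
    print("different variety ->", clf.predict(different))  # mostly -1 (outside)
    ```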

  18. A Comparative Analysis of Financial Reporting Models for Private and Public Sector Organizations.

    DTIC Science & Technology

    1995-12-01

    The objective of this thesis was to describe and compare different existing and evolving financial reporting models used in both the public and...private sector. To accomplish the objective, this thesis identified the existing financial reporting models for private sector business organizations...private sector nonprofit organizations, and state and local governments, as well as the evolving financial reporting model for the federal government

  19. Molecular docking and 3D-QSAR studies on inhibitors of DNA damage signaling enzyme human PARP-1.

    PubMed

    Fatima, Sabiha; Bathini, Raju; Sivan, Sree Kanth; Manga, Vijjulatha

    2012-08-01

    Poly (ADP-ribose) polymerase-1 (PARP-1) operates in a DNA damage signaling network. Molecular docking and three-dimensional quantitative structure-activity relationship (3D-QSAR) studies were performed on human PARP-1 inhibitors. The docked conformation obtained for each molecule was used directly for the 3D-QSAR analysis. Molecules were divided into a training set and a test set randomly in four different ways; partial least squares analysis was performed to obtain QSAR models using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA). The derived models showed good statistical reliability, as is evident from their r², q²(loo) and r²(pred) values. To obtain a consensus of predictive ability across all the models, the average regression coefficient r²(avg) was calculated; the CoMFA and CoMSIA models showed values of 0.930 and 0.936, respectively. Information obtained from the best 3D-QSAR model was applied to optimization of the lead molecule and design of novel potential inhibitors.
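
    The statistics quoted (q²(loo), r²) come from cross-validated partial least squares; a minimal sketch of computing a leave-one-out Q² on a synthetic descriptor matrix (the field descriptors and activities here are invented stand-ins, not CoMFA fields):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(3)

    # Stand-in descriptor matrix (e.g. field values at grid points) and
    # activities for a small training set.
    X = rng.normal(size=(22, 50))
    y = X[:, :5].sum(axis=1) + rng.normal(0, 0.3, size=22)

    pls = PLSRegression(n_components=3)
    y_loo = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

    press = np.sum((y - y_loo) ** 2)      # predictive residual sum of squares
    q2 = 1 - press / np.sum((y - y.mean()) ** 2)
    print(f"LOO Q^2 = {q2:.3f}")
    ```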

  20. Dependability and performability analysis

    NASA Technical Reports Server (NTRS)

    Trivedi, Kishor S.; Ciardo, Gianfranco; Malhotra, Manish; Sahner, Robin A.

    1993-01-01

    Several practical issues regarding specifications and solution of dependability and performability models are discussed. Model types with and without rewards are compared. Continuous-time Markov chains (CTMC's) are compared with (continuous-time) Markov reward models (MRM's) and generalized stochastic Petri nets (GSPN's) are compared with stochastic reward nets (SRN's). It is shown that reward-based models could lead to more concise model specifications and solution of a variety of new measures. With respect to the solution of dependability and performability models, three practical issues were identified: largeness, stiffness, and non-exponentiality, and a variety of approaches are discussed to deal with them, including some of the latest research efforts.

  1. A novel integrated framework and improved methodology of computer-aided drug design.

    PubMed

    Chen, Calvin Yu-Chian

    2013-01-01

    Computer-aided drug design (CADD) is a critical initiating step of drug development, but a single model capable of covering all design aspects remains to be elucidated. Hence, we developed a drug design modeling framework that integrates multiple approaches, including machine learning based quantitative structure-activity relationship (QSAR) analysis, 3D-QSAR, Bayesian networks, pharmacophore modeling, and a structure-based docking algorithm. Restrictions for each model were defined for improved individual and overall accuracy. An integration method was applied to join the results from each model to minimize bias and errors. In addition, the integrated model adopts both static and dynamic analysis to validate the intermolecular stability of the receptor-ligand conformation. The proposed protocol was applied to identifying HER2 inhibitors from traditional Chinese medicine (TCM) as an example for validating the new protocol. Eight potent leads were identified from six TCM sources. A joint validation system comprised of comparative molecular field analysis, comparative molecular similarity indices analysis, and molecular dynamics simulation further characterized the candidates into three potential binding conformations and validated the binding stability of each protein-ligand complex. Ligand pathway analysis was also performed to predict ligand entry into and exit from the binding site. In summary, we propose a novel systematic CADD methodology for the identification, analysis, and characterization of drug-like candidates.

  2. Lease vs. Purchase Analysis of Alternative Fuel Vehicles in the United States Marine Corps

    DTIC Science & Technology

    2009-12-01

    data (2004 to 2009) for the largest populations of AFVs in the light-duty category and then apply a model that will compare the two alternatives based on their relative net present values.

  4. Multiple-Use Site Demand Analysis: An Application to the Boundary Waters Canoe Area Wilderness.

    ERIC Educational Resources Information Center

    Peterson, George L.; And Others

    1982-01-01

    A single-site, multiple-use model for analyzing trip demand is derived from a multiple site regional model based on utility maximizing choice theory. The model is used to analyze and compare trips to the Boundary Waters Canoe Area Wilderness for several types of use. Travel cost elasticities of demand are compared and discussed. (Authors/JN)

  5. CoopEUS Case Study: Tsunami Modelling and Early Warning Systems for Near Source Areas (Mediterranean, Juan de Fuca).

    NASA Astrophysics Data System (ADS)

    Beranzoli, Laura; Best, Mairi; Chierici, Francesco; Embriaco, Davide; Galbraith, Nan; Heeseman, Martin; Kelley, Deborah; Pirenne, Benoit; Scofield, Oscar; Weller, Robert

    2015-04-01

    There is a need for tsunami modeling and early warning systems for near-source areas. This is a common public safety threat, for example, in the Mediterranean and the Juan de Fuca/NE Pacific coast of North America, regions covered by the EMSO, OOI, and ONC ocean observatories. Through the CoopEUS international cooperation project, a number of environmental research infrastructures have come together to coordinate efforts on environmental challenges; this tsunami case study tackles one such challenge. There is a mutual need for tsunami event field data and modeling to deepen our experience in testing methodology and developing real-time data processing. Tsunami field data are already available for past events; part of this use case compares these for compatibility, gap analysis, and model groundtruthing. It also reviews the sensors needed and harmonizes instrument settings. Sensor metadata and registries are compared, harmonized, and aligned. Data policies and access are also compared and assessed for gap analysis. Modelling algorithms are compared and tested against archived and real-time data. This case study will then be extended to other related tsunami data and model sources globally with similar geographic and seismic scenarios.

  6. A Spiking Neural Network Methodology and System for Learning and Comparative Analysis of EEG Data From Healthy Versus Addiction Treated Versus Addiction Not Treated Subjects.

    PubMed

    Doborjeh, Maryam Gholami; Wang, Grace Y; Kasabov, Nikola K; Kydd, Robert; Russell, Bruce

    2016-09-01

    This paper introduces a method utilizing spiking neural networks (SNN) for learning, classification, and comparative analysis of brain data. As a case study, the method was applied to electroencephalography (EEG) data collected during a GO/NOGO cognitive task performed by untreated opiate addicts, those undergoing methadone maintenance treatment (MMT) for opiate dependence, and a healthy control group. The method is based on an SNN architecture called NeuCube, trained on spatiotemporal EEG data. NeuCube was used to classify EEG data across subject groups and across GO versus NOGO trials, but also facilitated a deeper comparative analysis of the dynamic brain processes. This analysis results in a better understanding of human brain functioning across subject groups when performing a cognitive task. In terms of EEG data classification, a NeuCube model obtained better results (maximum obtained accuracy: 90.91%) when compared with traditional statistical and artificial intelligence methods (maximum obtained accuracy: 50.55%). More importantly, new information about the effects of MMT on cognitive brain functions is revealed through the analysis of the SNN model connectivity and its dynamics. This paper presented a new method for EEG data modeling and revealed new knowledge on brain functions associated with mental activity that differs from the brain activity observed in a resting state of the same subjects.

  7. Comparison of Prediction Model for Cardiovascular Autonomic Dysfunction Using Artificial Neural Network and Logistic Regression Analysis

    PubMed Central

    Zeng, Fangfang; Li, Zhongtao; Yu, Xiaoling; Zhou, Linuo

    2013-01-01

    Background This study aimed to develop artificial neural network (ANN) and multivariable logistic regression (LR) analyses for prediction modeling of cardiovascular autonomic (CA) dysfunction in the general population, and to compare the prediction models derived with the two approaches. Methods and Materials We analyzed a previous dataset based on a Chinese population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN and LR analysis, and were tested in the validation set. The performances of these prediction models were then compared. Results Univariate analysis indicated that 14 risk factors showed statistically significant associations with the prevalence of CA dysfunction (P<0.05). The mean area under the receiver-operating curve was 0.758 (95% CI 0.724–0.793) for LR and 0.762 (95% CI 0.732–0.793) for ANN analysis, and a noninferiority result was found (P<0.001). Similar results were found in comparisons of sensitivity, specificity, and predictive values between the LR and ANN prediction models. Conclusion The prediction models for CA dysfunction were developed using ANN and LR. ANN and LR are two effective tools for developing prediction models based on our dataset. PMID:23940593
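
    A compact sketch of the comparison design on synthetic data (the feature count mirrors the 14 risk factors; everything else is invented):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=14, n_informative=8,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)

    lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)

    for name, model in [("LR", lr), ("ANN", ann)]:
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: validation AUC = {auc:.3f}")  # area under ROC curve
    ```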

  8. Determination of morphological parameters of biological cells by analysis of scattered-light distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burger, D.E.

    1979-11-01

    The extraction of morphological parameters from biological cells by analysis of light-scatter patterns is described. A light-scattering measurement system has been designed and constructed that allows one to visually examine and photographically record biological cells or cell models and measure the light-scatter pattern of an individual cell or cell model. Using a laser or conventional illumination, the imaging system consists of a modified microscope with a 35 mm camera attached to record the cell image or light-scatter pattern. Models of biological cells were fabricated. The dynamic range and angular distributions of light scattered from these models were compared to calculated distributions. Spectrum analysis techniques applied to the light-scatter data give the sought-after morphological cell parameters. These results compared favorably to the shape parameters of the fabricated cell models, confirming the mathematical model procedure. For nucleated biological material, correct nuclear and cell eccentricity as well as the nuclear and cytoplasmic diameters were determined. A method for comparing the flow equivalent of nuclear and cytoplasmic size to the actual dimensions is shown. This light-scattering experiment provides baseline information for automated cytology. In its present application, it involves correlating average size as measured in flow cytology to the actual dimensions determined from this technique. (ERB)

  9. Geomagnetic field models for satellite angular motion studies

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, M. Yu.; Penkov, V. I.; Roldugin, D. S.; Pichuzhkina, A. V.

    2018-03-01

    Four geomagnetic field models are discussed: IGRF, inclined dipole, direct dipole and simplified dipole. Geomagnetic induction vector expressions are provided in different reference frames, and induction vector behavior is compared across the models. The models' applicability for the analysis of satellite motion is studied from theoretical and engineering perspectives. Relevant satellite dynamics analysis cases using analytical and numerical techniques are provided; these cases demonstrate the benefit of a certain model for a specific dynamics study. Recommendations for model usage are summarized at the end.

  10. Nonlinear analysis of AS4/PEEK thermoplastic composite laminate using a one parameter plasticity model

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Yoon, K. J.

    1990-01-01

    A one-parameter plasticity model was shown to adequately describe the orthotropic plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The nonlinear stress-strain relations were measured and compared with those predicted by the finite element analysis using the one-parameter elastic-plastic constitutive model. The results show that the one-parameter orthotropic plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.

  11. Elastic-plastic analysis of AS4/PEEK composite laminate using a one-parameter plasticity model

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Yoon, K. J.

    1992-01-01

    A one-parameter plasticity model was shown to adequately describe the plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The elastic-plastic stress-strain relations of coupon specimens were measured and compared with those predicted by the finite element analysis using the one-parameter plasticity model. The results show that the one-parameter plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.

  12. Beauty and the beast: Some perspectives on efficient model analysis, surrogate models, and the future of modeling

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.

    2015-12-01

    For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and more time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal the consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit relative to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and the surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical of (2) compare with that from the globally averaged methods typical of (3) for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and surface and groundwater modeling.

  13. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    PubMed

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis in which high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.

  14. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    PubMed Central

    McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817
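
    One simple way to quantify whether an interspecies pattern of results survives a change of modelling assumptions, in the spirit of the study above, is a rank correlation across specimens. A minimal sketch with invented strain values (not data from the paper):

        # Hypothetical peak-strain results for seven crocodile mandible models
        # under two different assumption sets (values invented for illustration).
        from scipy.stats import spearmanr

        strain_homogeneous   = [410, 525, 380, 610, 455, 570, 490]  # assumption set A
        strain_heterogeneous = [395, 540, 360, 650, 470, 555, 510]  # assumption set B

        rho, p = spearmanr(strain_homogeneous, strain_heterogeneous)
        print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
        # A high rho means the interspecies ranking is robust to this assumption,
        # even if the absolute strain magnitudes differ.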

  15. Rasch-family models are more valuable than score-based approaches for analysing longitudinal patient-reported outcomes with missing data.

    PubMed

    de Bock, Élodie; Hardouin, Jean-Benoit; Blanchin, Myriam; Le Neel, Tanguy; Kubis, Gildas; Bonnaud-Antignac, Angélique; Dantan, Étienne; Sébille, Véronique

    2016-10-01

    The objective was to compare classical test theory and Rasch-family models derived from item response theory for the analysis of longitudinal patient-reported outcomes data with possibly informative intermittent missing items. A simulation study was performed in order to assess and compare the performance of the classical test theory and Rasch model approaches in terms of bias, control of the type I error, and power of the test of the time effect. The type I error was controlled for both classical test theory and the Rasch model whether data were complete or some items were missing. Both methods were unbiased and displayed similar power with complete data. When items were missing, the Rasch model remained unbiased and displayed higher power than classical test theory. The Rasch model performed better than the classical test theory approach for the analysis of longitudinal patient-reported outcomes with possibly informative intermittent missing items, mainly in terms of power. This study highlights the interest of Rasch-based models in clinical research and epidemiology for the analysis of incomplete patient-reported outcomes data. © The Author(s) 2013.
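
    For readers unfamiliar with the Rasch model referenced here, it gives the probability of a positive item response as a logistic function of the difference between a person's latent trait and the item difficulty. A minimal simulation sketch (all parameter values are invented, not those of the study):

        import numpy as np

        rng = np.random.default_rng(0)

        def rasch_prob(theta, b):
            """Rasch model: P(X=1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
            return 1.0 / (1.0 + np.exp(-(theta - b)))

        n_subjects = 200
        difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # five items
        theta = rng.normal(0, 1, size=n_subjects)              # latent trait
        p = rasch_prob(theta[:, None], difficulties[None, :])  # 200 x 5 probabilities
        responses = rng.binomial(1, p)                         # simulated item responses

        # Classical-test-theory score: a simple sum, which is ill-defined when items
        # are intermittently missing; a Rasch fit instead uses the likelihood of the
        # observed items only.
        ctt_scores = responses.sum(axis=1)
        print(ctt_scores[:10])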

  16. Combination of multiple model population analysis and mid-infrared technology for the estimation of copper content in Tegillarca granosa

    NASA Astrophysics Data System (ADS)

    Hu, Meng-Han; Chen, Xiao-Jing; Ye, Peng-Chao; Chen, Xi; Shi, Yi-Jian; Zhai, Guang-Tao; Yang, Xiao-Kang

    2016-11-01

    The aim of this study was to use mid-infrared spectroscopy coupled with multiple model population analysis based on Monte Carlo-uninformative variable elimination (MC-UVE) for rapidly estimating the copper content of Tegillarca granosa. Copper-specific wavelengths were first extracted from the whole spectra, and subsequently a least squares-support vector machine was used to develop the prediction models. Compared with the prediction model based on the full set of wavelengths, models that used 100 multiple MC-UVE selected wavelengths, without and with the bin operation, showed comparable performance, with Rp of 0.97 (root mean square error of prediction, RMSEP, of 14.60 mg/kg) and 0.94 (RMSEP of 20.85 mg/kg) versus 0.96 (RMSEP of 17.27 mg/kg), as well as ratios of percent deviation (number of wavelengths in parentheses) of 2.77 (407) and 1.84 (45) versus 2.32 (1762). The obtained results demonstrated that the mid-infrared technique can be used for estimating copper content in T. granosa. In addition, the proposed multiple model population analysis can effectively eliminate uninformative, weakly informative, and interfering wavelengths, which substantially reduces model complexity and computation time.
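
    A minimal sketch of the Monte Carlo-uninformative variable elimination idea as commonly described in the chemometrics literature (this is an illustration, not the authors' code): regression coefficients are collected over many random subsamples, and wavelengths whose coefficients are small or unstable are discarded.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def mc_uve_reliability(X, y, n_iter=500, frac=0.8, n_components=5, seed=0):
            """MC-UVE-style reliability index per wavelength: |mean(b)| / std(b)
            of regression coefficients fitted to many random subsamples."""
            rng = np.random.default_rng(seed)
            coefs = []
            for _ in range(n_iter):
                idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
                pls = PLSRegression(n_components=n_components).fit(X[idx], y[idx])
                coefs.append(pls.coef_.ravel())
            coefs = np.asarray(coefs)
            return np.abs(coefs.mean(axis=0)) / coefs.std(axis=0)

        # Usage sketch: keep e.g. the 100 most reliable wavelengths (X_spectra and
        # y_copper are hypothetical numpy arrays of spectra and copper contents):
        #   reliability = mc_uve_reliability(X_spectra, y_copper)
        #   selected = np.argsort(reliability)[-100:]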

  17. Neutronics Conversion Analyses of the Laue-Langevin Institute (ILL) High Flux Reactor (RHF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergeron, A.; Dionne, B.; Calzavara, Y.

    2014-09-30

    The following report describes the neutronics results obtained with the MCNP model of the RHF U7Mo LEU reference design that was established in 2010 during the feasibility analysis. This work constitutes a complete and detailed neutronics analysis of that LEU design using models that have been significantly improved since 2010 and the release of the feasibility report. When possible, the credibility of the neutronics model is tested by comparing the HEU model results with experimental data or with results calculated by other codes. The results obtained with the LEU model are systematically compared to the HEU model. The changes applied to the neutronics model lead to better comparisons with experimental data or improved calculation efficiency, but do not challenge the conclusion of the feasibility analysis. If the U7Mo fuel is commercially available and not cost prohibitive, a back-end solution is established, and it is possible to manufacture the proposed element, the neutronics analyses show that the performance of the reactor would not be challenged by the conversion to LEU fuel.

  18. Comparative Analysis of Four Manpower Nursing Requirements Models. Health Manpower References. [Nurse Planning Information Series, No. 6].

    ERIC Educational Resources Information Center

    Deane, Robert T.; Ro, Kong-Kyun

    The analysis and description of four manpower nursing requirements models-- the Pugh-Roberts, the Vector, the Community Systems Foundation (CSF), and the Western Interstate Commission of Higher Education (WICHE)--are presented in this report. The introduction provides an overview of the project which was designed to analyze these different models.…

  19. Reporting Practices in Confirmatory Factor Analysis: An Overview and Some Recommendations

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Gillaspy, J. Arthur, Jr.; Purc-Stephenson, Rebecca

    2009-01-01

    Reporting practices in 194 confirmatory factor analysis studies (1,409 factor models) published in American Psychological Association journals from 1998 to 2006 were reviewed and compared with established reporting guidelines. Three research questions were addressed: (a) how do actual reporting practices compare with published guidelines? (b) how…

  20. The sensitivity of biological finite element models to the resolution of surface geometry: a case study of crocodilian crania

    PubMed Central

    Evans, Alistair R.; McHenry, Colin R.

    2015-01-01

    The reliability of finite element analysis (FEA) in biomechanical investigations depends upon understanding the influence of model assumptions. In producing finite element models, surface mesh resolution is influenced by the resolution of input geometry, and influences the resolution of the ensuing solid mesh used for numerical analysis. Despite a large number of studies incorporating sensitivity analyses of the effects of solid mesh resolution, there has not yet been any investigation into the effect of surface mesh resolution upon results in a comparative context. Here we use a dataset of crocodile crania to examine the effects of surface resolution on FEA results in a comparative context. Seven high-resolution surface meshes were each down-sampled to varying degrees while keeping the resulting number of solid elements constant. These models were then subjected to bite and shake load cases using finite element analysis. The results show that incremental decreases in surface resolution can result in fluctuations in strain magnitudes, but that it is possible to obtain stable results using lower resolution surfaces in a comparative FEA study. As surface mesh resolution links input geometry with the resulting solid mesh, the implication of these results is that low resolution input geometry and solid meshes may provide valid results in a comparative context. PMID:26056620

  1. Science-based HRA: experimental comparison of operator performance to IDAC (Information-Decision-Action Crew) simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirley, Rachel; Smidts, Carol; Boring, Ronald

    Information-Decision-Action Crew (IDAC) operator model simulations of a Steam Generator Tube Rupture are compared to student operator performance in studies conducted in the Ohio State University's Nuclear Power Plant Simulator Facility. This study is presented as a prototype for conducting simulator studies to validate key aspects of Human Reliability Analysis (HRA) methods. Seven student operator crews are compared to simulation results for crews designed to demonstrate three different decision-making strategies. The IDAC model used in the simulations is modified slightly to capture novice behavior rather than that of expert operators. Operator actions and scenario pacing are compared. A preliminary review of available performance shaping factors (PSFs) is presented. After the scenario in the NPP Simulator Facility, student operators review a video of the scenario and evaluate six PSFs at pre-determined points in the scenario. This provides a dynamic record of the PSFs experienced by the OSU student operators. In this preliminary analysis, the Time Constraint Load (TCL) calculated in the IDAC simulations is compared to the TCL reported by student operators. We identify potential modifications to the IDAC model to develop an “IDAC Student Operator Model.” This analysis provides insights into how similar experiments could be conducted using expert operators to improve the fidelity of IDAC simulations.

  2. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach

    PubMed Central

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2018-01-01

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach, and has several attractive features compared to the existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
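
    A minimal sketch of fitting a beta-binomial model to one margin by maximum likelihood (study counts are invented; scipy's betabinom supplies the closed-form likelihood the abstract highlights):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import betabinom

        # Invented event counts k out of n in each of four studies
        k = np.array([14, 22, 9, 30])
        n = np.array([20, 31, 12, 40])

        def neg_loglik(params):
            """Beta-binomial likelihood: probabilities on the original scale,
            no transformation or link function required."""
            a, b = np.exp(params)                  # keep shape parameters positive
            return -betabinom.logpmf(k, n, a, b).sum()

        fit = minimize(neg_loglik, x0=np.log([2.0, 2.0]), method="Nelder-Mead")
        a_hat, b_hat = np.exp(fit.x)
        print(f"pooled mean probability = {a_hat / (a_hat + b_hat):.3f}")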

  3. Comparison of Response Surface and Kriging Models in the Multidisciplinary Design of an Aerospike Nozzle

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.

    1998-01-01

    The use of response surface models and kriging models is compared for approximating non-random, deterministic computer analyses. After discussing the traditional response surface approach for constructing polynomial models for approximation, kriging is presented as an alternative statistical-based approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both approximation models. While neither approximation technique consistently outperforms the other in this example, the kriging models using only a constant for the underlying global model and a Gaussian correlation function perform as well as the second order polynomial response surface models.
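
    The following sketch contrasts the two approximation types on a toy deterministic function, with the kriging surrogate using a constant global model and a Gaussian correlation function as in the paper's best-performing configuration (scikit-learn is an implementation choice, not the original tooling):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, size=(30, 2))            # design sites
        y = np.sin(6 * X[:, 0]) + X[:, 1] ** 2         # deterministic "analysis"

        # Second-order polynomial response surface
        rsm = LinearRegression().fit(PolynomialFeatures(2).fit_transform(X), y)

        # Kriging: constant global model plus Gaussian (RBF) correlation
        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                      normalize_y=True).fit(X, y)

        X_test = rng.uniform(0, 1, size=(200, 2))
        y_true = np.sin(6 * X_test[:, 0]) + X_test[:, 1] ** 2
        rsm_err = np.abs(rsm.predict(PolynomialFeatures(2).fit_transform(X_test)) - y_true).max()
        gp_err = np.abs(gp.predict(X_test) - y_true).max()
        print(f"max |error|: response surface {rsm_err:.3f}, kriging {gp_err:.3f}")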

  4. Selecting risk factors: a comparison of discriminant analysis, logistic regression and Cox's regression model using data from the Tromsø Heart Study.

    PubMed

    Brenn, T; Arnesen, E

    1985-01-01

    For comparative evaluation, discriminant analysis, logistic regression and Cox's model were used to select risk factors for total and coronary deaths among 6595 men aged 20-49 followed for 9 years. Groups with mortality between 5 and 93 per 1000 were considered. Discriminant analysis selected variable sets only marginally different from the logistic and Cox methods which always selected the same sets. A time-saving option, offered for both the logistic and Cox selection, showed no advantage compared with discriminant analysis. Analysing more than 3800 subjects, the logistic and Cox methods consumed, respectively, 80 and 10 times more computer time than discriminant analysis. When including the same set of variables in non-stepwise analyses, all methods estimated coefficients that in most cases were almost identical. In conclusion, discriminant analysis is advocated for preliminary or stepwise analysis, otherwise Cox's method should be used.
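
    A minimal sketch of two of the compared approaches on toy follow-up data (column names are invented; the lifelines package is used here for Cox regression as an implementation choice, not the study's software):

        import pandas as pd
        from lifelines import CoxPHFitter
        from sklearn.linear_model import LogisticRegression

        # Toy follow-up data: one row per subject
        df = pd.DataFrame({
            "age":  [45, 33, 50, 29, 41, 38, 56, 47],
            "sbp":  [150, 120, 160, 118, 140, 135, 170, 145],
            "time": [9.0, 9.0, 4.2, 9.0, 7.5, 9.0, 2.8, 6.1],  # years of follow-up
            "dead": [0, 0, 1, 0, 1, 0, 1, 0],
        })

        # Logistic regression models the binary endpoint and ignores time to event...
        logit = LogisticRegression().fit(df[["age", "sbp"]], df["dead"])

        # ...whereas Cox regression uses the follow-up time and censoring indicator.
        cox = CoxPHFitter().fit(df, duration_col="time", event_col="dead")
        print(logit.coef_.ravel(), cox.params_.values)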

  5. A phenomenological biological dose model for proton therapy based on linear energy transfer spectra.

    PubMed

    Rørvik, Eivind; Thörnqvist, Sara; Stokkevåg, Camilla H; Dahle, Tordis J; Fjaera, Lars Fredrik; Ytre-Hauge, Kristian S

    2017-06-01

    The relative biological effectiveness (RBE) of protons varies with the radiation quality, quantified by the linear energy transfer (LET). Most phenomenological models employ a linear dependency of the dose-averaged LET (LETd) to calculate the biological dose. However, several experiments have indicated a possible non-linear trend. Our aim was to investigate if biological dose models including non-linear LET dependencies should be considered, by introducing a LET spectrum based dose model. The RBE-LET relationship was investigated by fitting of polynomials from 1st to 5th degree to a database of 85 data points from aerobic in vitro experiments. We included both unweighted and weighted regression, the latter taking into account experimental uncertainties. Statistical testing was performed to decide whether higher degree polynomials provided better fits to the data as compared to lower degrees. The newly developed models were compared to three published LETd-based models for a simulated spread out Bragg peak (SOBP) scenario. The statistical analysis of the weighted regression analysis favored a non-linear RBE-LET relationship, with the quartic polynomial found to best represent the experimental data (P = 0.010). The results of the unweighted regression analysis were on the borderline of statistical significance for non-linear functions (P = 0.053), and with the current database a linear dependency could not be rejected. For the SOBP scenario, the weighted non-linear model estimated a similar mean RBE value (1.14) compared to the three established models (1.13-1.17). The unweighted model calculated a considerably higher RBE value (1.22). The analysis indicated that non-linear models could give a better representation of the RBE-LET relationship. However, this is not decisive, as inclusion of the experimental uncertainties in the regression analysis had a significant impact on the determination and ranking of the models. As differences between the models were observed for the SOBP scenario, both non-linear LET spectrum-based and linear LETd-based models should be further evaluated in clinically realistic scenarios. © 2017 American Association of Physicists in Medicine.
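
    The weighted polynomial regression described here can be sketched as follows (synthetic stand-in data; the real analysis used 85 experimental points and formal statistical testing rather than the simple AIC comparison shown):

        import numpy as np

        # Synthetic stand-ins: LET values, RBE measurements, experimental SDs
        rng = np.random.default_rng(2)
        let_d = np.sort(rng.uniform(1, 20, 85))
        rbe = 1.0 + 0.04 * let_d - 0.001 * let_d ** 2 + rng.normal(0, 0.05, 85)
        sigma = np.full(85, 0.05)

        for degree in range(1, 6):
            coeffs = np.polyfit(let_d, rbe, degree, w=1.0 / sigma)  # weighted fit
            resid = rbe - np.polyval(coeffs, let_d)
            chi2 = np.sum((resid / sigma) ** 2)
            aic = chi2 + 2 * (degree + 1)   # Gaussian log-likelihood up to a constant
            print(f"degree {degree}: chi2 = {chi2:7.1f}, AIC = {aic:7.1f}")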

  6. Separate-channel analysis of two-channel microarrays: recovering inter-spot information.

    PubMed

    Smyth, Gordon K; Altman, Naomi S

    2013-05-26

    Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intra-spot correlation. A new separate channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
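
    The M-value/A-value transformation at the heart of this reformulation is simple to state; a minimal sketch with invented intensities:

        import numpy as np

        def ma_values(red, green):
            """Transform two-channel spot intensities to M (log-ratio) and
            A (average log-expression) values."""
            m = np.log2(red) - np.log2(green)
            a = 0.5 * (np.log2(red) + np.log2(green))
            return m, a

        red = np.array([1200.0, 800.0, 1500.0])
        green = np.array([600.0, 820.0, 400.0])
        m, a = ma_values(red, green)
        # A log-ratio-only analysis uses m and discards a; the separate-channel
        # approach models both, recovering the inter-spot information in a.
        print(m, a)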

  7. Comparative Analysis of Smart Meters Deployment Business Models on the Example of the Russian Federation Markets

    NASA Astrophysics Data System (ADS)

    Daminov, Ildar; Tarasova, Ekaterina; Andreeva, Tatyana; Avazov, Artur

    2016-02-01

    This paper presents a comparison of smart meter deployment business models to determine the most suitable option for providing smart meter deployment. The authors consider three main business models: the distribution grid company, the energy supplier (energosbyt), and the metering company. The goal of the article is to compare the business models of power companies for a massive smart metering roll-out in the power system of the Russian Federation.

  8. Comparative analysis of numerical models of pipe handling equipment used in offshore drilling applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pawlus, Witold, E-mail: witold.p.pawlus@ieee.org; Ebbesen, Morten K.; Hansen, Michael R.

    Design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and a harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of the proper modeling methodology is not obvious and, in some cases, troublesome. Therefore, we present a comparative analysis of two popular approaches used in modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of the offshore vertical pipe handling machine is selected as a case study for which both models are created. In contrast to some other works, the current paper shows verification of both systems by benchmarking their simulation results against each other. Criteria such as modeling effort and results accuracy are evaluated to assess which modeling strategy is the most suitable given its eventual application.

  9. Finite element analysis of damped vibrations of laminated composite plates

    NASA Astrophysics Data System (ADS)

    Hu, Baogang

    1992-11-01

    Damped free vibrations of composite laminates are subjected to macromechanical analysis. Two models are developed: a viscoelastic damping model and a specific damping capacity model. The important symmetry property of the damping matrix is retained in both models. A modified modal strain energy method is proposed for evaluating modal damping in the viscoelastic model using a real (instead of a complex) eigenvalue problem solution. Numerical studies of multidegree of freedom systems are conducted to illustrate the improved accuracy of the method compared to the modal strain energy method. The experimental data reported in the literature for damped free vibrations in both polymer matrix and metal matrix composites were used in finite element analysis to test and compare the damping models. The natural frequencies and modal damping were obtained using both the viscoelastic and specific models. Results from both models are in satisfactory agreement with experimental data. Both models were found to be reasonably accurate for systems with low damping. Parametric studies were conducted to examine the effects on damping of the side to thickness ratio, the principal moduli ratio, the total number of layers, the ply angle, and the boundary conditions.

  10. General quantitative genetic methods for comparative biology: phylogenies, taxonomies and multi-trait models for continuous and categorical characters.

    PubMed

    Hadfield, J D; Nakagawa, S

    2010-03-01

    Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.

  11. Robustness analysis of a green chemistry-based model for the classification of silver nanoparticles synthesis processes

    EPA Science Inventory

    This paper proposes a robustness analysis based on Multiple Criteria Decision Aiding (MCDA). The ensuing model was used to assess the implementation of green chemistry principles in the synthesis of silver nanoparticles. Its recommendations were also compared to an earlier develo...

  12. The Mathematical Analysis of Style: A Correlation-Based Approach.

    ERIC Educational Resources Information Center

    Oppenheim, Rosa

    1988-01-01

    Examines mathematical models of style analysis, focusing on the pattern in which literary characteristics occur. Describes an autoregressive integrated moving average model (ARIMA) for predicting sentence length in different works by the same author and comparable works by different authors. This technique is valuable in characterizing stylistic…

  13. USE OF WEIBULL FUNCTION FOR NON-LINEAR ANALYSIS OF EFFECTS OF LOW LEVELS OF SIMULATED HERBICIDE DRIFT ON PLANTS

    EPA Science Inventory

    We compared two regression models, which are based on the Weibull and probit functions, for the analysis of pesticide toxicity data from laboratory studies on Illinois crop and native plant species. Both mathematical models are continuous, differentiable, strictly positive, and...

  14. Statistical correlation analysis for comparing vibration data from test and analysis

    NASA Technical Reports Server (NTRS)

    Butler, T. G.; Strang, R. F.; Purves, L. R.; Hershfeld, D. J.

    1986-01-01

    A theory was developed to compare vibration modes obtained by NASTRAN analysis with those obtained experimentally. Because many more analytical modes can be obtained than experimental modes, the analytical set was treated as expansion functions for putting both sources in comparative form. The treatment of dimensional symmetry was developed for three general cases: a nonsymmetric whole model compared with a nonsymmetric whole structural test, a symmetric analytical portion compared with a symmetric experimental portion, and a symmetric analytical portion compared with a whole experimental test. The theory was coded and a statistical correlation program was installed as a utility. The theory is demonstrated with small classical structures.
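
    One standard numerical indicator for this kind of analysis/test mode comparison is the modal assurance criterion; a minimal sketch (the mode shape vectors are invented for illustration):

        import numpy as np

        def mac(phi_a, phi_e):
            """Modal Assurance Criterion between an analytical and an experimental
            mode shape (1 = perfectly correlated, 0 = orthogonal)."""
            num = np.abs(phi_a @ phi_e) ** 2
            return num / ((phi_a @ phi_a) * (phi_e @ phi_e))

        phi_analysis = np.array([0.0, 0.31, 0.59, 0.81, 0.95, 1.0])
        phi_test = np.array([0.0, 0.28, 0.62, 0.79, 0.97, 1.0])
        print(f"MAC = {mac(phi_analysis, phi_test):.3f}")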

  15. Evaluation of a Stratified National Breast Screening Program in the United Kingdom: An Early Model-Based Cost-Effectiveness Analysis.

    PubMed

    Gray, Ewan; Donten, Anna; Karssemeijer, Nico; van Gils, Carla; Evans, D Gareth; Astley, Sue; Payne, Katherine

    2017-09-01

    To identify the incremental costs and consequences of stratified national breast screening programs (stratified NBSPs) and drivers of relative cost-effectiveness. A decision-analytic model (discrete event simulation) was conceptualized to represent four stratified NBSPs (risk 1, risk 2, masking [supplemental screening for women with higher breast density], and masking and risk 1) compared with the current UK NBSP and no screening. The model assumed a lifetime horizon, the health service perspective to identify costs (£, 2015), and measured consequences in quality-adjusted life-years (QALYs). Multiple data sources were used: systematic reviews of effectiveness and utility, published studies reporting costs, and cohort studies embedded in existing NBSPs. Model parameter uncertainty was assessed using probabilistic sensitivity analysis and one-way sensitivity analysis. The base-case analysis, supported by probabilistic sensitivity analysis, suggested that the risk stratified NBSPs (risk 1 and risk-2) were relatively cost-effective when compared with the current UK NBSP, with incremental cost-effectiveness ratios of £16,689 per QALY and £23,924 per QALY, respectively. Stratified NBSP including masking approaches (supplemental screening for women with higher breast density) was not a cost-effective alternative, with incremental cost-effectiveness ratios of £212,947 per QALY (masking) and £75,254 per QALY (risk 1 and masking). When compared with no screening, all stratified NBSPs could be considered cost-effective. Key drivers of cost-effectiveness were discount rate, natural history model parameters, mammographic sensitivity, and biopsy rates for recalled cases. A key assumption was that the risk model used in the stratification process was perfectly calibrated to the population. This early model-based cost-effectiveness analysis provides indicative evidence for decision makers to understand the key drivers of costs and QALYs for exemplar stratified NBSP. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
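
    The core cost-effectiveness quantity reported here is the incremental cost-effectiveness ratio; a minimal sketch with invented per-woman totals (the published model simulated discrete events over a lifetime horizon):

        def icer(cost_new, qaly_new, cost_ref, qaly_ref):
            """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
            return (cost_new - cost_ref) / (qaly_new - qaly_ref)

        # Invented lifetime totals: a stratified program vs the current NBSP
        delta = icer(cost_new=3550.0, qaly_new=18.250,
                     cost_ref=3150.0, qaly_ref=18.226)
        wtp = 20_000.0   # hypothetical GBP-per-QALY decision threshold
        print(f"ICER = {delta:,.0f} GBP/QALY -> {'adopt' if delta < wtp else 'reject'}")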

  16. Modeling the Dynamic Interrelations between Mobility, Utility, and Land Asking Price

    NASA Astrophysics Data System (ADS)

    Hidayat, E.; Rudiarto, I.; Siegert, F.; Vries, W. D.

    2018-02-01

    Limited and insufficient information about the dynamic interrelations among mobility, utility, and land price is the main reason to conduct this research. Several studies, with several approaches and variables, have been conducted so far in order to model land price. However, most of these models appear to generate primarily static land prices. Thus, research is required to compare, design, and validate different models which calculate and/or compare the inter-relational changes of mobility, utility, and land price. The applied method is a combination of literature review, expert interviews, and statistical analysis. The result is a newly improved mathematical model, which has been validated and is suitable for the case study location. This improved model consists of 12 appropriate variables. The model can be implemented in Salatiga city, the case study location, in order to support better land use planning and mitigate uncontrolled urban growth.

  17. How to Compare the Security Quality Requirements Engineering (SQUARE) Method with Other Methods

    DTIC Science & Technology

    2007-08-01

    [Report excerpt] Attack Trees for Modeling and Analysis; 2.8 Misuse and Abuse Cases; 2.9 Formal Methods; 2.9.1 Software Cost Reduction; 2.9.2 Common...modern or efficient techniques. Requirements analysis typically is either not performed at all (identified requirements are directly specified without...any analysis or modeling) or analysis is restricted to functional requirements and ignores quality requirements and other nonfunctional requirements.

  18. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants.

    PubMed

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-04-01

    Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype by environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear or quadratic pattern in time, possibly with some form of autocorrelation. The model also allows the addition of a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
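
    A minimal sketch of generating the kind of excess-zero count data the framework describes (numpy only; the distribution parameters are invented and the authors' simulation model has many more features, such as block designs and repeated measures):

        import numpy as np

        rng = np.random.default_rng(3)

        def zi_negbin(mean, dispersion, p_zero, size):
            """Zero-inflated negative binomial counts: with probability p_zero a
            structural zero, otherwise NB with the given mean and dispersion."""
            n = dispersion
            p = n / (n + mean)                     # NB success probability
            counts = rng.negative_binomial(n, p, size=size)
            return np.where(rng.random(size) < p_zero, 0, counts)

        # e.g. simulated non-target arthropod counts on GM vs comparator plots
        gm = zi_negbin(mean=12.0, dispersion=2.0, p_zero=0.15, size=40)
        comparator = zi_negbin(mean=12.0, dispersion=2.0, p_zero=0.15, size=40)
        print(gm.mean(), comparator.mean())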

  19. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants

    PubMed Central

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-01-01

    Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype by environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear or quadratic pattern in time, possibly with some form of autocorrelation. The model also allows the addition of a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided. PMID:24834325

  20. Categorical Data Analysis Using a Skewed Weibull Regression Model

    NASA Astrophysics Data System (ADS)

    Caron, Renault; Sinha, Debajyoti; Dey, Dipak; Polpo, Adriano

    2018-03-01

    In this paper, we present a Weibull link (skewed) model for categorical response data arising from binomial as well as multinomial models. We show that, for such types of categorical data, the most commonly used models (logit, probit and complementary log-log) can be obtained as limiting cases. We further compare the proposed model with some other asymmetrical models. The Bayesian as well as frequentist estimation procedures for binomial and multinomial data responses are presented in detail. Analyses of two data sets are performed to show the efficiency of the proposed model.

  1. Preliminary Cost Model for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Prince, F. Andrew; Smart, Christian; Stephens, Kyle; Henrichs, Todd

    2009-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. However, great care is required. Some space telescope cost models, such as those based only on mass, lack sufficient detail to support such analysis and may lead to inaccurate conclusions. Similarly, using ground based telescope models which include the dome cost will also lead to inaccurate conclusions. This paper reviews current and historical models. Then, based on data from 22 different NASA space telescopes, this paper tests those models and presents preliminary analysis of single and multi-variable space telescope cost models.

  2. Computer code for off-design performance analysis of radial-inflow turbines with rotor blade sweep

    NASA Technical Reports Server (NTRS)

    Meitner, P. L.; Glassman, A. J.

    1983-01-01

    The analysis procedure of an existing computer program was extended to include rotor blade sweep, to model the flow more accurately at the rotor exit, and to provide more detail to the loss model. The modeling changes are described and all analysis equations and procedures are presented. Program input and output are described and are illustrated by an example problem. Results obtained from this program and from a previous program are compared with experimental data.

  3. Modeling energy/economy interactions for conservation and renewable energy-policy analysis

    NASA Astrophysics Data System (ADS)

    Groncki, P. J.

    Energy policy and the implications for policy analysis and methodological tools are discussed. The evolution of one methodological approach is reported, covering the component models of the combined modeling system, their evolution in response to changing analytic needs, and the development of the integrated framework. The analyses performed over the past several years are summarized. The current philosophy behind energy policy is discussed and compared to recent history. Implications for current policy analysis and methodological approaches are drawn.

  4. Assessing model uncertainty using hexavalent chromium and ...

    EPA Pesticide Factsheets

    Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations, with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective of this analysis is to characterize model uncertainty by evaluating the variance in estimates across several epidemiologic analyses. Methods: This analysis compared 7 publications analyzing two different chromate production sites in Ohio and Maryland. The Ohio cohort consisted of 482 workers employed from 1940-72, while the Maryland site employed 2,357 workers from 1950-74. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag; however, other latency periods and model covariates, such as age and race, were considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric to compare variability in estimates across and within model forms. A total of 7 similarly parameterized analyses were considered across model forms, and 23 analyses with alternative parameterizations were considered within model form (14 Cox; 9 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients for 7 similar analyses ranged from 2.47

  5. Cost-effectiveness of oral agents in relapsing-remitting multiple sclerosis compared to interferon-based therapy in Saudi Arabia.

    PubMed

    Alsaqa'aby, Mai F; Vaidya, Varun; Khreis, Noura; Khairallah, Thamer Al; Al-Jedai, Ahmed H

    2017-01-01

    Promising clinical and humanistic outcomes are associated with the use of new oral agents in the treatment of relapsing-remitting multiple sclerosis (RRMS). This is the first cost-effectiveness study comparing these medications in Saudi Arabia. We aimed to compare the cost-effectiveness of fingolimod, teriflunomide, dimethyl fumarate, and interferon (IFN)-β1a products (Avonex and Rebif) as first-line therapies in the treatment of patients with RRMS from a Saudi payer perspective. Design: cohort simulation model (Markov model). Setting: tertiary care hospital. A hypothetical cohort of 1000 Saudi patients with RRMS was assumed to enter a Markov model with a time horizon of 20 years and an annual cycle length. The model was developed based on the expanded disability status scale (EDSS) to evaluate the cost-effectiveness of the five disease-modifying drugs (DMDs) from a healthcare system perspective. Data on EDSS progression and relapse rates were obtained from the literature; cost data were obtained from King Faisal Specialist Hospital and Research Centre, Riyadh, Saudi Arabia. Results were expressed as incremental cost-effectiveness ratios (ICERs) and net monetary benefits (NMB) in Saudi Riyals and converted to equivalent $US. The base-case willingness-to-pay (WTP) threshold was assumed to be $100,000 (SAR 375,000). One-way sensitivity analysis and probabilistic sensitivity analysis were conducted to test the robustness of the model. Main outcome measures: ICERs and NMB. The base-case analysis results showed Rebif as the optimal therapy at a WTP threshold of $100,000. Avonex had the lowest ICER value of $337,282/QALY when compared to Rebif. One-way sensitivity analysis demonstrated that the results were sensitive to the utility weights of health states three and four and to the cost of Rebif. None of the DMDs were found to be cost-effective in the treatment of RRMS at a WTP threshold of $100,000 in this analysis; the DMDs would only be cost-effective at a WTP above $300,000. The current analysis did not reflect the Saudi population's preferences in the valuation of health states and did not consider the societal perspective in terms of costs.
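
    A minimal sketch of a Markov cohort calculation of the kind described, reduced to three states with invented transition probabilities, costs, and utilities (the published model used EDSS-based health states and annual cycles over a 20-year horizon):

        import numpy as np

        # Hypothetical 3-state annual-cycle Markov cohort model:
        # states = [stable RRMS, progressed, dead]; all numbers are invented.
        P = np.array([[0.90, 0.08, 0.02],
                      [0.00, 0.94, 0.06],
                      [0.00, 0.00, 1.00]])        # transition probabilities
        cost = np.array([25000.0, 32000.0, 0.0])  # annual cost per state
        utility = np.array([0.78, 0.52, 0.0])     # QALY weight per state

        state = np.array([1.0, 0.0, 0.0])         # cohort starts in stable RRMS
        total_cost = total_qaly = 0.0
        for year in range(20):                    # 20-year horizon
            discount = 1.03 ** -year              # 3% annual discount rate
            total_cost += discount * state @ cost
            total_qaly += discount * state @ utility
            state = state @ P
        print(f"discounted cost = {total_cost:,.0f}, QALYs = {total_qaly:.2f}")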

  6. Source term model evaluations for the low-level waste facility performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  7. Obtaining Content Weights for Test Specifications from Job Analysis Task Surveys: An Application of the Many-Facets Rasch Model

    ERIC Educational Resources Information Center

    Wang, Ning; Stahl, John

    2012-01-01

    This article discusses the use of the Many-Facets Rasch Model, via the FACETS computer program (Linacre, 2006a), to scale job/practice analysis survey data as well as to combine multiple rating scales into single composite weights representing the tasks' relative importance. Results from the Many-Facets Rasch Model are compared with those…

  8. An Analysis of Cross Racial Identity Scale Scores Using Classical Test Theory and Rasch Item Response Models

    ERIC Educational Resources Information Center

    Sussman, Joshua; Beaujean, A. Alexander; Worrell, Frank C.; Watson, Stevie

    2013-01-01

    Item response models (IRMs) were used to analyze Cross Racial Identity Scale (CRIS) scores. Rasch analysis scores were compared with classical test theory (CTT) scores. The partial credit model demonstrated a high goodness of fit and correlations between Rasch and CTT scores ranged from 0.91 to 0.99. CRIS scores are supported by both methods.…

  9. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as the outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.

  10. Efficient Workflows for Curation of Heterogeneous Data Supporting Modeling of U-Nb Alloy Aging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Logan Timothy; Hackenberg, Robert Errol

    These are slides from a presentation summarizing a graduate research associate's summer project. The following topics are covered in these slides: data challenges in materials, aging in U-Nb Alloys, Building an Aging Model, Different Phase Trans. in U-Nb, the Challenge, Storing Materials Data, Example Data Source, Organizing Data: What is a Schema?, What does a "XML Schema" look like?, Our Data Schema: Nice and Simple, Storing Data: Materials Data Curation System (MDCS), Problem with MDCS: Slow Data Entry, Getting Literature into MDCS, Staging Data in Excel Document, Final Result: MDCS Records, Analyzing Image Data, Process for Making TTT Diagram, Bottleneck Number 1: Image Analysis, Fitting a TTP Boundary, Fitting a TTP Curve: Comparable Results, How Does it Compare to Our Data?, Image Analysis Workflow, Curating Hardness Records, Hardness Data: Two Key Decisions, Before Peak Age? - Automation, Interactive Viz, Which Transformation?, Microstructure-Informed Model, Tracking the Entire Process, General Problem with Property Models, Pinyon: Toolkit for Managing Model Creation, Tracking Individual Decisions, Jupyter: Docs and Code in One File, Hardness Analysis Workflow, Workflow for Aging Models, and conclusions.

  11. Pharmacoeconomics of parenteral nutrition in surgical and critically ill patients receiving structured triglycerides in China.

    PubMed

    Wu, Guo Hao; Ehm, Alexandra; Bellone, Marco; Pradelli, Lorenzo

    2017-01-01

    A prior meta-analysis showed favorable metabolic effects of structured triglyceride (STG) lipid emulsions in surgical and critically ill patients compared with mixed medium-chain/long-chain triglyceride (MCT/LCT) emulsions. Limited data on clinical outcomes precluded pharmacoeconomic analysis. We performed an updated meta-analysis and developed a cost model to compare overall costs for STGs vs MCT/LCTs in Chinese hospitals. We searched Medline, Embase, Wanfang Data, the China Hospital Knowledge Database, and Google Scholar for clinical trials comparing STGs to mixed MCT/LCTs in surgical or critically ill adults published between October 10, 2013 and September 19, 2015. Newly identified studies were pooled with the prior studies and an updated meta-analysis was performed. A deterministic simulation model was used to compare the effects of STGs and mixed MCT/LCTs on Chinese hospital costs. The literature search identified six new trials, resulting in a total of 27 studies in the updated meta-analysis. Statistically significant differences favoring STGs were observed for cumulative nitrogen balance, prealbumin and albumin concentrations, plasma triglycerides, and liver enzymes. STGs were also associated with a significant reduction in the length of hospital stay (mean difference, -1.45 days; 95% confidence interval, -2.48 to -0.43; p=0.005) versus mixed MCT/LCTs. Cost analysis demonstrated a net cost benefit of ¥675 compared with mixed MCT/LCTs. STGs are associated with improvements in metabolic function and reduced length of hospitalization in surgical and critically ill patients compared with mixed MCT/LCT emulsions. Cost analysis using data from Chinese hospitals showed a corresponding cost benefit.

  12. A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia

    NASA Astrophysics Data System (ADS)

    Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.

    2017-08-01

    In this study, a wavelet support vector machine (WSVM) model is proposed and applied to the prediction of the monthly Singapore tourist arrival time series. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: in the first part we compare kernel functions, and in the second part we compare the developed model with the single SVM model. The results showed that the linear kernel function performs better than the RBF kernel, and that the WSVM outperforms the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
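
    A minimal sketch of a wavelet-plus-SVM forecaster in the spirit of the abstract (PyWavelets and scikit-learn are implementation choices; the series, lag structure, and parameters are invented, and the linear kernel is used in line with the reported finding):

        import numpy as np
        import pywt
        from sklearn.svm import SVR

        # Invented monthly arrivals series with annual seasonality plus noise
        rng = np.random.default_rng(4)
        t = np.arange(120)
        y = 50_000 + 8_000 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1500, 120)

        # Wavelet step: keep the low-frequency approximation, zero the detail noise
        coeffs = pywt.wavedec(y, "db4", level=2)
        coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]
        y_smooth = pywt.waverec(coeffs, "db4")[: len(y)]

        # SVM step: one-step-ahead regression on 12 lagged smoothed values
        lags = 12
        X = np.column_stack([y_smooth[i : len(y) - lags + i] for i in range(lags)])
        target = y[lags:]
        svr = SVR(kernel="linear").fit(X[:-12], target[:-12])  # hold out last year
        print(f"holdout MAE = {np.abs(svr.predict(X[-12:]) - target[-12:]).mean():.0f}")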

  13. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study.

    PubMed

    van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-08-07

    Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use.
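
    The automated procedure wraps standard vector autoregression machinery; a minimal sketch of the underlying steps using statsmodels (the EMA variables and data are invented, and AutoVAR's model search is far more extensive than a single fit):

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        # Two invented EMA variables measured at 90 time points
        rng = np.random.default_rng(5)
        n = 90
        activity = rng.normal(size=n).cumsum() * 0.1 + rng.normal(size=n)
        mood = np.roll(activity, 1) * 0.4 + rng.normal(size=n)  # mood lags activity
        data = pd.DataFrame({"activity": activity, "mood": mood})

        model = VAR(data)
        res = model.fit(maxlags=5, ic="aic")   # lag order chosen by AIC
        print(res.summary())
        # Granger causality: does activity help predict mood?
        print(res.test_causality("mood", ["activity"], kind="f").summary())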

  14. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study

    PubMed Central

    Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith GM; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-01-01

    Background Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. Objective This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. Methods We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher’s tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). Results An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Conclusions Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use. PMID:26254160

  15. Economic Analysis of Panitumumab Compared With Cetuximab in Patients With Wild-type KRAS Metastatic Colorectal Cancer That Progressed After Standard Chemotherapy.

    PubMed

    Graham, Christopher N; Maglinte, Gregory A; Schwartzberg, Lee S; Price, Timothy J; Knox, Hediyyih N; Hechmati, Guy; Hjelmgren, Jonas; Barber, Beth; Fakih, Marwan G

    2016-06-01

    In this analysis, we compared costs and explored the cost-effectiveness of subsequent-line treatment with cetuximab or panitumumab in patients with wild-type KRAS (exon 2) metastatic colorectal cancer (mCRC) after previous chemotherapy treatment failure. Data were used from ASPECCT (A Study of Panitumumab Efficacy and Safety Compared to Cetuximab in Patients With KRAS Wild-Type Metastatic Colorectal Cancer), a Phase III, head-to-head randomized noninferiority study comparing the efficacy and safety of panitumumab and cetuximab in this population. A decision-analytic model was developed to perform a cost-minimization analysis and a semi-Markov model was created to evaluate the cost-effectiveness of panitumumab monotherapy versus cetuximab monotherapy in chemotherapy-resistant wild-type KRAS (exon 2) mCRC. The cost-minimization model assumed equivalent efficacy (progression-free survival) based on data from ASPECCT. The cost-effectiveness analysis was conducted with the full information (uncertainty) from ASPECCT. Both analyses were conducted from a US third-party payer perspective and calculated average anti-epidermal growth factor receptor doses from ASPECCT. Costs associated with drug acquisition, treatment administration (every 2 weeks for panitumumab, weekly for cetuximab), and incidence of infusion reactions were estimated in both models. The cost-effectiveness model also included physician visits, disease progression monitoring, best supportive care, and end-of-life costs and utility weights estimated from EuroQol 5-Dimension questionnaire responses from ASPECCT. The cost-minimization model results demonstrated lower projected costs for patients who received panitumumab versus cetuximab, with a projected cost savings of $9468 (16.5%) per panitumumab-treated patient. In the cost-effectiveness model, the incremental cost per quality-adjusted life-year gained revealed panitumumab to be less costly, with marginally better outcomes than cetuximab. These economic analyses comparing panitumumab and cetuximab in chemorefractory wild-type KRAS (exon 2) mCRC suggest benefits in favor of panitumumab. ClinicalTrials.gov identifier: NCT01001377. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
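
    A hedged sketch of the cost-minimization arithmetic: with equivalent efficacy assumed, the comparison reduces to drug plus administration costs over a horizon, with panitumumab dosed every 2 weeks and cetuximab weekly. All unit costs and the 6-month horizon below are hypothetical placeholders, not ASPECCT or published values.

        weeks = 26
        panitumumab = {"drug_per_dose": 4000.0, "admin_per_visit": 150.0, "interval_weeks": 2}
        cetuximab = {"drug_per_dose": 2200.0, "admin_per_visit": 150.0, "interval_weeks": 1}

        def projected_cost(arm: dict, horizon_weeks: int) -> float:
            """Drug plus administration cost over the horizon."""
            n_doses = horizon_weeks // arm["interval_weeks"]
            return n_doses * (arm["drug_per_dose"] + arm["admin_per_visit"])

        cost_p = projected_cost(panitumumab, weeks)
        cost_c = projected_cost(cetuximab, weeks)
        print(f"panitumumab: ${cost_p:,.0f}  cetuximab: ${cost_c:,.0f}  "
              f"difference: ${cost_c - cost_p:,.0f}")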

  16. Investigation of antigen-antibody interactions of sulfonamides with a monoclonal antibody in a fluorescence polarization immunoassay using 3D-QSAR models

    USDA-ARS?s Scientific Manuscript database

    A three-dimensional quantitative structure-activity relationship (3D-QSAR) model of sulfonamide analogs binding a monoclonal antibody (MAbSMR) produced against sulfamerazine was developed by Distance Comparison (DISCOtech), comparative molecular field analysis (CoMFA), and comparative molecular similarity indices analysis (CoMSIA)…

  17. Comparative analysis of used car price evaluation models

    NASA Astrophysics Data System (ADS)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles, but little has been published comparing different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records throughout China for an empirical comparison of two algorithms: linear regression and random forest. The two algorithms are used to predict used car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a clear advantage in the universal model compared with linear regression. This indicates that random forest is the better algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage for simple models with fewer variables.
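
    A sketch of the linear-regression versus random-forest comparison with scikit-learn; the features, effect sizes, and synthetic "dealing records" below are invented for illustration, not the Chinese dataset used in the paper.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(1)
        n = 5000
        age = rng.uniform(0, 10, n)           # vehicle age in years
        mileage = rng.uniform(0, 200, n)      # thousand km
        make = rng.integers(0, 20, n)         # encoded car make
        price = 200 - 12 * age - 0.4 * mileage + 5 * np.sin(make) + rng.normal(0, 8, n)
        X = np.column_stack([age, mileage, make])

        X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)
        for model in (LinearRegression(),
                      RandomForestRegressor(n_estimators=200, random_state=0)):
            model.fit(X_tr, y_tr)
            print(type(model).__name__, "R2 =", round(r2_score(y_te, model.predict(X_te)), 3))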

  18. A Comparative Analysis on Models of Higher Education Massification

    ERIC Educational Resources Information Center

    Pan, Maoyuan; Luo, Dan

    2008-01-01

    Four financial models of the massification of higher education are discussed in this essay: the American model, the Western European model, the Southeast Asian and Latin American model, and the transition-countries model. Comparison of the four models leads to the conclusion that taking advantage of nongovernmental funding is fundamental to dealing…

  19. Global sensitivity analysis for urban water quality modelling: Terminology, convergence and comparison of different methods

    NASA Astrophysics Data System (ADS)

    Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.

    2015-03-01

    Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, the peculiarities, applicability, and reliability of the three methods are discussed. Moreover, a graphical classification scheme based on Venn diagrams and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods: factors related to the quality model require a much higher number of simulations than suggested in the literature for achieving convergence with this method. In fact, the results show that the term "screening" is improperly used, as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
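
    A minimal sketch of one ingredient mentioned above: standardized regression coefficients (SRC) recomputed at increasing Monte Carlo sample sizes to check convergence. The three-factor model is a generic stand-in, not the stormwater quality-quantity model from the paper.

        import numpy as np

        def model(x):  # hypothetical 3-factor model
            return 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

        rng = np.random.default_rng(2)
        for n in (50, 200, 1000, 5000):
            X = rng.uniform(0, 1, size=(n, 3))
            y = model(X)
            # SRC_i = b_i * sd(x_i) / sd(y), with b from ordinary least squares
            b, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
            src = b[1:] * X.std(axis=0) / y.std()
            print(f"n={n:5d}  SRC = {np.round(src, 3)}")  # indices stabilise as n grows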

  20. Application of Interface Technology in Progressive Failure Analysis of Composite Panels

    NASA Technical Reports Server (NTRS)

    Sleight, D. W.; Lotts, C. G.

    2002-01-01

    A progressive failure analysis capability using interface technology is presented. The capability has been implemented in the COMET-AR finite element analysis code developed at the NASA Langley Research Center and is demonstrated on composite panels. The composite panels are analyzed for damage initiation and propagation from initial loading to final failure using a progressive failure analysis capability that includes both geometric and material nonlinearities. Progressive failure analyses are performed on conventional models and interface technology models of the composite panels, and the analytical results and computational effort are compared. The results predicted with the interface technology models correlate well with those from the conventional models while requiring significantly less computational effort.

  1. Comparative Analysis of River Flow Modelling by Using Supervised Learning Technique

    NASA Astrophysics Data System (ADS)

    Ismail, Shuhaida; Mohamad Pandiahi, Siraj; Shabri, Ani; Mustapha, Aida

    2018-04-01

    The goal of this research is to investigate the efficiency of three supervised learning algorithms for forecasting the monthly river flow of the Indus River in Pakistan, spread over 550 square miles or 1800 square kilometres. The algorithms include the Least Square Support Vector Machine (LSSVM), Artificial Neural Network (ANN) and Wavelet Regression (WR). The monthly river flow was forecasted with each of the three models, and the accuracy of all models was then compared and statistically analysed. This analytical comparison showed that the LSSVM model is the most precise in monthly river flow forecasting: LSSVM had the highest r, with a value of 0.934, compared to the other models. This indicates that LSSVM is more accurate and efficient compared to the ANN and WR models.
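
    A sketch of this kind of comparison using scikit-learn stand-ins (SVR for LSSVM and MLPRegressor for the ANN); the synthetic seasonal "monthly flow" series is illustrative, not Indus River data, and the lag-window setup is an assumption.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)
        t = np.arange(240)                                   # 20 years of monthly data
        flow = 100 + 40 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, t.size)

        # predict this month's flow from the previous 12 months
        X = np.array([flow[i - 12:i] for i in range(12, flow.size)])
        y = flow[12:]
        split = 180
        for m in (SVR(C=100.0),
                  MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)):
            m.fit(X[:split], y[:split])
            r = np.corrcoef(y[split:], m.predict(X[split:]))[0, 1]
            print(type(m).__name__, "r =", round(r, 3))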

  2. [Variable selection methods combined with local linear embedding theory used for optimization of near infrared spectral quantitative models].

    PubMed

    Hao, Yong; Sun, Xu-Dong; Yang, Qiang

    2012-12-01

    A variable selection strategy combined with local linear embedding (LLE) was introduced for the analysis of complex samples by near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE combined with SPA, were used for eliminating redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used for modeling complex samples. The results show that MCUVE can both extract effective informative variables and improve the precision of models. Compared with PLSR models, LLE-PLSR models achieve more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.
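
    A minimal sketch of MCUVE-style variable screening with PLS regression: refit the model on random sample subsets, then rank variables by the stability (mean/std) of their regression coefficients. The synthetic "spectra" are illustrative, with only the first 5 of 50 wavelengths informative by construction.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(4)
        n, p = 80, 50
        X = rng.normal(size=(n, p))
        y = X[:, :5] @ np.array([3.0, -2.0, 1.5, 1.0, -1.0]) + rng.normal(0, 0.5, n)

        runs, coefs = 200, []
        for _ in range(runs):
            idx = rng.choice(n, size=int(0.8 * n), replace=False)  # Monte Carlo subset
            pls = PLSRegression(n_components=5).fit(X[idx], y[idx])
            coefs.append(pls.coef_.ravel())
        coefs = np.array(coefs)
        stability = coefs.mean(axis=0) / coefs.std(axis=0)  # MCUVE reliability index
        keep = np.argsort(-np.abs(stability))[:10]          # retain the most stable variables
        print("10 most stable variables:", np.sort(keep))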

  3. Stability analysis for a delay differential equations model of a hydraulic turbine speed governor

    NASA Astrophysics Data System (ADS)

    Halanay, Andrei; Safta, Carmen A.; Dragoi, Constantin; Piraianu, Vlad F.

    2017-01-01

    The paper aims to study the dynamic behavior of a speed governor for a hydraulic turbine using a mathematical model. The proposed nonlinear mathematical model consists of a system of delay differential equations (DDE), to be compared with already established mathematical models based on ordinary differential equations (ODE). A new kind of nonlinearity is introduced through a time delay. The delays can characterize different running conditions of the speed governor; for example, the spool displacement of the hydraulic amplifier might be blocked by impurities in the oil supply system, so that the hydraulic amplifier responds with a time delay relative to the control signal. Numerical simulations are presented in a comparative manner, and a stability analysis of the hydraulic control system is also performed. Conclusions on the dynamic behavior obtained with the DDE model of a hydraulic turbine speed governor are useful in modeling and controlling hydropower plants.

  4. Comparison of measured and modeled radiation, heat and water vapor fluxes: FIFE pilot study

    NASA Technical Reports Server (NTRS)

    Blad, Blaine L.; Hubbard, Kenneth G.; Verma, Shashi B.; Starks, Patrick; Norman, John M.; Walter-Shea, Elizabeth

    1987-01-01

    The feasibility of modeling fluxes of latent heat, sensible heat, and radiation from routine weather data collected by automated weather stations and relayed by radio frequency receivers was tested, and the estimated fluxes were compared with fluxes measured over wheat. The model Cupid was used to model the fluxes. Two or more automated weather stations, interrogated by radio frequency and other means, were utilized to examine some of the climatic variability of the First ISLSCP (International Satellite Land-Surface Climatology Project) Field Experiment (FIFE) site, to measure and model reflected and emitted radiation streams from various locations at the site, and to compare modeled latent and sensible heat fluxes with measured values. Some bidirectional reflected and emitted radiation data were collected from 23 locations throughout the FIFE site. Analysis of these data, along with analysis of the measured sensible and latent heat fluxes, is just beginning.

  5. A GIS-based approach for comparative analysis of potential fire risk assessment

    NASA Astrophysics Data System (ADS)

    Sun, Ying; Hu, Lieqiu; Liu, Huiping

    2007-06-01

    Urban fires are one of the most important sources of property loss and human casualty; it is therefore necessary to assess potential fire risk with urban community safety in mind. Two evaluation models are proposed, both integrated with GIS. One is a single-factor model concerning the accessibility of fire passages; the other is a grey clustering approach based on a multifactor system. In the latter model, fourteen factors are introduced, divided into four categories: security management, evacuation facilities, construction resistance and fire-fighting capability. A case study on the campus of Beijing Normal University is presented to describe the potential risk assessment models in detail. A comparative analysis of the two models is carried out to validate their accuracy; the results are approximately consistent with each other. Moreover, modeling with GIS improves the efficiency of potential fire risk assessment.

  6. Process fault detection and nonlinear time series analysis for anomaly detection in safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, T.L.; Mullen, M.F.; Wangen, L.E.

    In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model-predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material from two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.
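
    A sketch of the two residual tests on simulated multivariate residuals: univariate z-scores versus a joint Mahalanobis distance. The tank count, leak size, baseline window, and alarm thresholds are illustrative assumptions, not the paper's settings.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        n, d = 500, 3
        residuals = rng.normal(0, 1, size=(n, d))
        residuals[300:, 0] -= 0.8            # simulated slow leak in tank 1

        # univariate test: flag any |z| above 3, using a fault-free baseline window
        z = (residuals - residuals[:200].mean(0)) / residuals[:200].std(0)
        print("z-score alarms:", int((np.abs(z) > 3).any(axis=1).sum()))

        # multivariate test: Mahalanobis distance against a chi-square threshold
        mu = residuals[:200].mean(0)
        cov_inv = np.linalg.inv(np.cov(residuals[:200].T))
        diff = residuals - mu
        d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
        print("Mahalanobis alarms:", int((d2 > stats.chi2.ppf(0.999, df=d)).sum()))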

  7. Analysis of spatial thermal field in a magnetic bearing

    NASA Astrophysics Data System (ADS)

    Wajnert, Dawid; Tomczuk, Bronisław

    2018-03-01

    This paper presents two mathematical models for temperature field analysis in a new hybrid magnetic bearing. Temperature distributions have been calculated using a three-dimensional simulation and a two-dimensional one. A physical model for temperature testing in the magnetic bearing has been developed. Some results obtained from computer simulations were compared with measurements.

  8. Neutral model analysis of landscape patterns from mathematical morphology

    Treesearch

    Kurt H. Riitters; Peter Vogt; Pierre Soille; Jacek Kozak; Christine Estreguil

    2007-01-01

    Mathematical morphology encompasses methods for characterizing land-cover patterns in ecological research and biodiversity assessments. This paper reports a neutral model analysis of patterns in the absence of a structuring ecological process, to help set standards for comparing and interpreting patterns identified by mathematical morphology on real land-cover maps. We...

  9. Comparative study on DuPont analysis and DEA models for measuring stock performance using financial ratio

    NASA Astrophysics Data System (ADS)

    Arsad, Roslah; Shaari, Siti Nabilah Mohd; Isa, Zaidi

    2017-11-01

    Determining stock performance using financial ratios is challenging for many investors and researchers. Financial ratios can indicate the strengths and weaknesses of a company's stock performance. There are five categories of financial ratios, namely liquidity, efficiency, leverage, profitability and market ratios. It is important to interpret the ratios correctly for proper financial decision making. The purpose of this study is to compare the performance of companies listed in Bursa Malaysia using Data Envelopment Analysis (DEA) and DuPont analysis models. The study was conducted in 2015 and involved 116 consumer products companies listed in Bursa Malaysia. The Data Envelopment Analysis estimation computes efficiency scores and ranks the companies accordingly; the method of Alirezaee and Afsharian, based on the Charnes, Cooper and Rhodes (CCR) model with Constant Returns to Scale (CRS), is employed. DuPont analysis is a traditional tool for measuring the operating performance of companies. In this study, DuPont analysis is used to evaluate three different aspects: profitability, efficiency of asset utilization and financial leverage. Return on Equity (ROE) is also calculated in the DuPont analysis. This study finds that the two analysis models provide different rankings of the selected samples. Hypothesis testing based on Pearson's correlation indicates that there is no correlation between the rankings produced by DEA and DuPont analysis. The DEA ranking model proposed by Alirezaee and Afsharian is unstable; the method cannot provide a complete ranking because the values of the Balance Index are all equal to zero.
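
    A sketch of the DuPont decomposition and the rank-agreement check; the five companies, their financials, and the "DEA" scores below are invented placeholders used only to show the mechanics.

        import numpy as np
        from scipy.stats import pearsonr

        # columns: net income, revenue, total assets, equity (hypothetical firms)
        fin = np.array([
            [12.0, 150.0, 200.0,  90.0],
            [ 8.0, 120.0, 100.0,  60.0],
            [20.0, 300.0, 500.0, 250.0],
            [ 5.0,  80.0,  70.0,  35.0],
            [15.0, 210.0, 260.0, 130.0],
        ])
        margin = fin[:, 0] / fin[:, 1]          # profitability
        turnover = fin[:, 1] / fin[:, 2]        # efficiency of asset utilization
        leverage = fin[:, 2] / fin[:, 3]        # financial leverage
        roe = margin * turnover * leverage      # DuPont identity: ROE as a product

        dea_scores = np.array([0.71, 0.95, 0.60, 1.00, 0.82])  # hypothetical DEA output
        dupont_rank = np.argsort(np.argsort(-roe))
        dea_rank = np.argsort(np.argsort(-dea_scores))
        r, p = pearsonr(dupont_rank, dea_rank)  # do the two rankings agree?
        print("rank correlation r =", round(r, 3), "p =", round(p, 3))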

  10. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB based image analysis tool kit to analyze CT generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates a modified set of components, as compared to standard principal component analysis (PCA), with sparse loadings; used in conjunction with the Hotelling T² statistic, these components allow us to compare, qualify, and detect faults in the tested systems.
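
    A hedged sketch of the fault-detection step: sparse PCA on a matrix of per-scan image-quality metrics, followed by a Hotelling T² score per scan. The metric values are simulated, and the 88-metric count and the single degraded run are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import SparsePCA

        rng = np.random.default_rng(6)
        n_scans, n_metrics = 60, 88
        X = rng.normal(size=(n_scans, n_metrics))
        X[-1] += 2.5                                  # one degraded scanner run

        X = (X - X.mean(0)) / X.std(0)                # standardize the metrics
        spca = SparsePCA(n_components=5, random_state=0)
        scores = spca.fit_transform(X)
        # Hotelling T^2 from component scores, scaled by their empirical variances
        t2 = ((scores / (scores.std(axis=0) + 1e-12)) ** 2).sum(axis=1)
        print("most anomalous scan:", int(t2.argmax()), "T2 =", round(t2.max(), 1))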

  11. Comparative Statistical Analysis of Auroral Models

    DTIC Science & Technology

    2012-03-22

    …models have been extensively used for estimating GPS and other communication satellite disturbances (Newell et al., 2010a). The models predict changes in the auroral oval in response to various geomagnetic conditions. In 2010, Newell et al. conducted a comparative study of…

  12. Handling incomplete correlated continuous and binary outcomes in meta-analysis of individual participant data.

    PubMed

    Gomes, Manuel; Hatfield, Laura; Normand, Sharon-Lise

    2016-09-20

    Meta-analysis of individual participant data (IPD) is increasingly utilised to improve the estimation of treatment effects, particularly among different participant subgroups. An important concern in IPD meta-analysis relates to partially or completely missing outcomes for some studies, a problem exacerbated when interest is on multiple discrete and continuous outcomes. When leveraging information from incomplete correlated outcomes across studies, the fully observed outcomes may provide important information about the incompleteness of the other outcomes. In this paper, we compare two models for handling incomplete continuous and binary outcomes in IPD meta-analysis: a joint hierarchical model and a sequence of full conditional mixed models. We illustrate how these approaches incorporate the correlation across the multiple outcomes and the between-study heterogeneity when addressing the missing data. Simulations characterise the performance of the methods across a range of scenarios which differ according to the proportion and type of missingness, strength of correlation between outcomes and the number of studies. The joint model provided confidence interval coverage consistently closer to nominal levels and lower mean squared error compared with the fully conditional approach across the scenarios considered. Methods are illustrated in a meta-analysis of randomised controlled trials comparing the effectiveness of implantable cardioverter-defibrillator devices alone to implantable cardioverter-defibrillator combined with cardiac resynchronisation therapy for treating patients with chronic heart failure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  13. Using multiple group modeling to test moderators in meta-analysis.

    PubMed

    Schoemann, Alexander M

    2016-12-01

    Meta-analysis is a popular and flexible analysis that can be fit in many modeling frameworks. Two methods of fitting meta-analyses that are growing in popularity are structural equation modeling (SEM) and multilevel modeling (MLM). By using SEM or MLM to fit a meta-analysis researchers have access to powerful techniques associated with SEM and MLM. This paper details how to use one such technique, multiple group analysis, to test categorical moderators in meta-analysis. In a multiple group meta-analysis a model is fit to each level of the moderator simultaneously. By constraining parameters across groups any model parameter can be tested for equality. Using multiple groups to test for moderators is especially relevant in random-effects meta-analysis where both the mean and the between studies variance of the effect size may be compared across groups. A simulation study and the analysis of a real data set are used to illustrate multiple group modeling with both SEM and MLM. Issues related to multiple group meta-analysis and future directions for research are discussed. Copyright © 2016 John Wiley & Sons, Ltd.
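
    A minimal sketch of the group-wise idea (not the SEM/MLM machinery itself): fit a random-effects meta-analysis separately within each moderator level and compare both the pooled mean and the between-studies variance. The DerSimonian-Laird estimator is a common choice here; effect sizes and variances are invented.

        import numpy as np

        def dersimonian_laird(y, v):
            """Return pooled effect and tau^2 for study effects y with variances v."""
            w = 1.0 / v
            ybar = (w * y).sum() / w.sum()
            q = (w * (y - ybar) ** 2).sum()
            tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
            w_star = 1.0 / (v + tau2)                 # random-effects weights
            return (w_star * y).sum() / w_star.sum(), tau2

        groups = {  # hypothetical moderator with two levels
            "level A": (np.array([0.30, 0.45, 0.25, 0.50]), np.array([0.02, 0.03, 0.04, 0.02])),
            "level B": (np.array([0.10, 0.20, 0.05]),       np.array([0.05, 0.04, 0.06])),
        }
        for name, (y, v) in groups.items():
            mu, tau2 = dersimonian_laird(y, v)
            print(f"{name}: pooled effect = {mu:.3f}, tau^2 = {tau2:.3f}")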

  14. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    PubMed

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Finite Element Study of a Lumbar Intervertebral Disc Nucleus Replacement Device.

    PubMed

    Coogan, Jessica S; Francis, W Loren; Eliason, Travis D; Bredbenner, Todd L; Stemper, Brian D; Yoganandan, Narayan; Pintar, Frank A; Nicolella, Daniel P

    2016-01-01

    Nucleus replacement technologies are a minimally invasive alternative to spinal fusion and total disc replacement that have the potential to reduce pain and restore motion for patients with degenerative disc disease. Finite element modeling can be used to determine the biomechanics associated with nucleus replacement technologies. The current study focuses on a new nucleus replacement device designed as a conforming silicone implant with an internal void. A validated finite element model of the human lumbar L3-L4 motion segment was developed and used to investigate the influence of the nucleus replacement device on spine biomechanics. In addition, the effect of device design changes on biomechanics was determined. A 3D, L3-L4 finite element model was constructed from medical imaging data. Models were created with the normal intact nucleus, the nucleus replacement device, and a solid silicone implant. Probabilistic analysis was performed on the normal model to provide quantitative validation metrics. Sensitivity analysis was performed on the silicone Shore A durometer of the device. Models were loaded under axial compression followed by flexion/extension, lateral bending, or axial rotation. Compressive displacement, endplate stresses, reaction moment, and annulus stresses were determined and compared between the different models. The novel nucleus replacement device resulted in similar compressive displacement, endplate stress, and annulus stress and slightly higher reaction moment compared with the normal nucleus. The solid implant resulted in decreased displacement, increased endplate stress, decreased annulus stress, and decreased reaction moment compared with the novel device. With increasing silicone durometer, compressive displacement decreased, endplate stress increased, reaction moment increased, and annulus stress decreased. Finite element analysis was used to show that the novel nucleus replacement device results in similar biomechanics compared with the normal intact nucleus.

  16. Finite Element Study of a Lumbar Intervertebral Disc Nucleus Replacement Device

    PubMed Central

    Coogan, Jessica S.; Francis, W. Loren; Eliason, Travis D.; Bredbenner, Todd L.; Stemper, Brian D.; Yoganandan, Narayan; Pintar, Frank A.; Nicolella, Daniel P.

    2016-01-01

    Nucleus replacement technologies are a minimally invasive alternative to spinal fusion and total disc replacement that have the potential to reduce pain and restore motion for patients with degenerative disc disease. Finite element modeling can be used to determine the biomechanics associated with nucleus replacement technologies. The current study focuses on a new nucleus replacement device designed as a conforming silicone implant with an internal void. A validated finite element model of the human lumbar L3–L4 motion segment was developed and used to investigate the influence of the nucleus replacement device on spine biomechanics. In addition, the effect of device design changes on biomechanics was determined. A 3D, L3–L4 finite element model was constructed from medical imaging data. Models were created with the normal intact nucleus, the nucleus replacement device, and a solid silicone implant. Probabilistic analysis was performed on the normal model to provide quantitative validation metrics. Sensitivity analysis was performed on the silicone Shore A durometer of the device. Models were loaded under axial compression followed by flexion/extension, lateral bending, or axial rotation. Compressive displacement, endplate stresses, reaction moment, and annulus stresses were determined and compared between the different models. The novel nucleus replacement device resulted in similar compressive displacement, endplate stress, and annulus stress and slightly higher reaction moment compared with the normal nucleus. The solid implant resulted in decreased displacement, increased endplate stress, decreased annulus stress, and decreased reaction moment compared with the novel device. With increasing silicone durometer, compressive displacement decreased, endplate stress increased, reaction moment increased, and annulus stress decreased. Finite element analysis was used to show that the novel nucleus replacement device results in similar biomechanics compared with the normal intact nucleus. PMID:27990418

  17. Computational fluid dynamic modelling of cavitation

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length, and comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions, and it has the added ability to incorporate the thermodynamic effects of cryogenic fluids into the analysis. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.

  18. Comparative Analysis of InSAR Digital Surface Models for Test Area Bucharest

    NASA Astrophysics Data System (ADS)

    Dana, Iulia; Poncos, Valentin; Teleaga, Delia

    2010-03-01

    This paper presents the results of the interferometric processing of ERS Tandem, ENVISAT and TerraSAR-X data for digital surface model (DSM) generation. The selected test site is Bucharest (Romania), a built-up area characterized by the usual urban complex pattern: mixture of buildings with different height levels, paved roads, vegetation, and water bodies. First, the DSMs were generated following the standard interferometric processing chain. Then, the accuracy of the DSMs was analyzed against the SPOT HRS model (30 m resolution at the equator). A DSM derived by optical stereoscopic processing of SPOT 5 HRG data and also the SRTM (3 arc seconds resolution at the equator) DSM have been included in the comparative analysis.

  19. ^{16}C + p elastic scattering examined using several models at different energies

    NASA Astrophysics Data System (ADS)

    El-hammamy, M. N.; Attia, A.

    2018-05-01

    In the present paper, we report the first results of a theoretical analysis of the ^{16}C + p reaction, investigating two elastic scattering angular distributions measured at high and at low energy for this system. Several models for the real part of the nuclear potential are tested within the optical model formalism. The imaginary potential has a Woods-Saxon shape with three free parameters. Two types of density distribution and three different cluster structures for ^{16}C are assumed in the analysis. The results are compared with each other as well as with the experimental data to give evidence of the importance of the items studied.
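
    A tiny sketch of the imaginary-potential form mentioned above: a Woods-Saxon well with three free parameters (depth W0, radius R, diffuseness a). The numeric values are generic placeholders, not the fitted parameters from the paper.

        import numpy as np

        def woods_saxon(r, w0=10.0, radius=3.0, a=0.6):
            """W(r) = -W0 / (1 + exp((r - R) / a)), in MeV, with r in fm."""
            return -w0 / (1.0 + np.exp((r - radius) / a))

        r = np.linspace(0.0, 10.0, 6)
        print(np.round(woods_saxon(r), 3))  # flat well inside R, smooth falloff outside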

  20. Effects of monocortical and bicortical mini-implant anchorage on bone-borne palatal expansion using finite element analysis.

    PubMed

    Lee, Robert J; Moon, Won; Hong, Christine

    2017-05-01

    Bone-borne palatal expansion relies on mini-implant stability for successful orthopedic expansion. The large magnitude of applied force experienced by mini-implants during bone-borne expansion may lead to high failure rates. Use of bicortical mini-implant anchorage rather than monocortical anchorage may improve mini-implant stability. The aims of this study were to analyze and compare the effects of bicortical and monocortical anchorages on stress distribution and displacement during bone-borne palatal expansion using finite element analysis. Two skull models were constructed to represent expansion before and after midpalatal suture opening. Three clinical situations with varying mini-implant insertion depths were studied in each skull model: monocortical, 1-mm bicortical, and 2.5-mm bicortical. Finite element analysis simulations were performed for each clinical situation in both skull models. Von Mises stress distribution and transverse displacement were evaluated for all models. Peri-implant stress was greater in the monocortical anchorage model compared with both bicortical anchorage models. In addition, transverse displacement was greater and more parallel in the coronal plane for both bicortical models compared with the monocortical model. Minimal differences were observed between the 1-mm and the 2.5-mm bicortical models for both peri-implant stress and transverse displacement. Bicortical mini-implant anchorage results in improved mini-implant stability, decreased mini-implant deformation and fracture, more parallel expansion in the coronal plane, and increased expansion during bone-borne palatal expansion. However, the depth of bicortical mini-implant anchorage was not significant. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  1. A perturbation analysis of a mechanical model for stable spatial patterning in embryology

    NASA Astrophysics Data System (ADS)

    Bentil, D. E.; Murray, J. D.

    1992-12-01

    We investigate a mechanical cell-traction mechanism that generates stationary spatial patterns. A linear analysis highlights the model's potential for these heterogeneous solutions. We use multiple-scale perturbation techniques to study the evolution of these solutions and compare our solutions with numerical simulations of the model system. We discuss some potential biological applications among which are the formation of ridge patterns, dermatoglyphs, and wound healing.

  2. Cost-effectiveness of unicondylar versus total knee arthroplasty: a Markov model analysis.

    PubMed

    Peersman, Geert; Jak, Wouter; Vandenlangenbergh, Tom; Jans, Christophe; Cartier, Philippe; Fennema, Peter

    2014-01-01

    Unicondylar knee arthroplasty (UKA) is believed to lead to less morbidity and enhanced functional outcomes when compared with total knee arthroplasty (TKA). Conversely, UKA is also associated with a higher revision risk than TKA. In order to further clarify the key differences between these separate procedures, the current study assessing the cost-effectiveness of UKA versus TKA was undertaken. A state-transition Markov model was developed to compare the cost-effectiveness of UKA versus TKA for unicondylar osteoarthritis using a Belgian payer's perspective. The model was designed to include the possibility of two revision procedures. Model estimates were obtained through literature review and revision rates were based on registry data. Threshold analysis and probabilistic sensitivity analysis were performed to assess the model's robustness. UKA was associated with a cost reduction of €2,807 and a utility gain of 0.04 quality-adjusted life years in comparison with TKA. Analysis determined that the model is sensitive to clinical effectiveness, and that a marginal reduction in the clinical performance of UKA would lead to TKA being the more cost-effective solution. UKA yields clear advantages in terms of costs and marginal advantages in terms of health effects, in comparison with TKA. © 2014 Elsevier B.V. All rights reserved.
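
    A sketch of a state-transition Markov cohort model for one arm (well / post-revision / dead states). Transition probabilities, state costs, utilities, the 3% discount rate, and the 20-year horizon are placeholder assumptions, not the Belgian inputs used in the study.

        import numpy as np

        P = np.array([            # annual transition matrix, rows sum to 1
            [0.96, 0.02, 0.02],   # well -> well / revision / dead
            [0.00, 0.97, 0.03],   # post-revision
            [0.00, 0.00, 1.00],   # dead (absorbing)
        ])
        state_cost = np.array([200.0, 8000.0, 0.0])   # annual cost per state
        state_util = np.array([0.85, 0.70, 0.0])      # utility weight per state

        cohort = np.array([1.0, 0.0, 0.0])            # everyone starts in "well"
        total_cost = total_qaly = 0.0
        for year in range(1, 21):                     # 20-year horizon
            cohort = cohort @ P
            disc = 1.03 ** -year                      # 3% annual discounting
            total_cost += disc * cohort @ state_cost
            total_qaly += disc * cohort @ state_util
        print(f"discounted cost = {total_cost:,.0f}, QALYs = {total_qaly:.2f}")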

  3. QSAR studies on triazole derivatives as SGLT inhibitors via CoMFA and CoMSIA

    NASA Astrophysics Data System (ADS)

    Zhi, Hui; Zheng, Junxia; Chang, Yiqun; Li, Qingguo; Liao, Guochao; Wang, Qi; Sun, Pinghua

    2015-10-01

    Forty-six sodium-dependent glucose cotransporter-2 (SGLT-2) inhibitors with hypoglycemic activity were selected to develop three-dimensional quantitative structure-activity relationship (3D-QSAR) models using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA). A training set of 39 compounds was used to build the models, which were then evaluated by a series of internal and external cross-validation techniques; a test set of 7 compounds was used for the external validation. The CoMFA model gave a q² value of 0.792 and an r² value of 0.985. The best CoMSIA model gave a q² value of 0.633 and an r² value of 0.895, based on a combination of steric, electrostatic, hydrophobic and hydrogen-bond acceptor effects. The predictive correlation coefficients (r²pred) of the CoMFA and CoMSIA models were 0.872 and 0.839, respectively. The analysis of the contour maps from each model provided insight into the structural requirements for the development of more active SGLT inhibitors, and on the basis of the models 8 new SGLT inhibitors were designed and their activities predicted.
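
    A sketch of the leave-one-out cross-validated q² used to judge such models, here with PLS on synthetic descriptor data (39 training compounds as in the study, but invented descriptors and activities): q² = 1 - PRESS / SS_total.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import LeaveOneOut

        rng = np.random.default_rng(7)
        n, p = 39, 30
        X = rng.normal(size=(n, p))
        y = X[:, :4] @ np.array([1.0, -0.8, 0.6, 0.4]) + rng.normal(0, 0.3, n)

        press = 0.0
        for train, test in LeaveOneOut().split(X):
            pls = PLSRegression(n_components=4).fit(X[train], y[train])
            press += ((y[test] - pls.predict(X[test]).ravel()) ** 2).sum()
        q2 = 1.0 - press / ((y - y.mean()) ** 2).sum()
        print("LOO q2 =", round(q2, 3))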

  4. Learning in a game-based virtual environment: a comparative evaluation in higher education

    NASA Astrophysics Data System (ADS)

    Mayer, Igor; Warmelink, Harald; Bekebrede, Geertje

    2013-03-01

    The authors define the requirements and a conceptual model for comparative evaluation research of simulation games and serious games (SGs) in a learning context. A first operationalisation of the model was used to comparatively evaluate a suite of 14 SGs on varying topics played between 2004 and 2009 in 13 institutes of higher education in the Netherlands. The questions in this research were: what is the perceived learning effectiveness of the games and what factors explain it? How can we comparatively evaluate games for learning? Data were gathered through pre- and post-game questionnaires among 1000 students, leading to 500 useful datasets and 230 complete datasets for analysis (factor analysis, scaling, t-test and correlation analysis) to give an explorative, structural model. The findings are discussed and a number of propositions for further research are formulated. The conclusion of the analysis is that the students' motivation and attitudes towards game-based learning before the game, their actual enjoyment, their efforts during the game and the quality of the facilitator/teacher are most strongly correlated with their learning satisfaction. The degree to which the experiences during the game were translated back into the underlying theories significantly determines the students' learning satisfaction. The quality of the virtual game environment did not matter so much. The authors reflect upon the general methodology used and offer suggestions for further research and development.

  5. Conceptual model of iCAL4LA: Proposing the components using comparative analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Zulaiha; Mutalib, Ariffin Abdul

    2016-08-01

    This paper discusses an on-going study that initiates the process of determining the common components for a conceptual model of interactive computer-assisted learning specifically designed for low-achieving children. This group of children needs specific learning support that can serve as an alternative learning material in their learning environment. In order to develop the conceptual model, this study extracts the common components from 15 strongly justified computer-assisted learning studies. A comparative analysis was conducted to determine the most appropriate components, using a specific indication classification to prioritize applicability. The extraction process reveals 17 common components for consideration; based on scientific justifications, 16 of them were selected as the proposed components for the model.

  6. Comparative transcriptome analysis reveals vertebrate phylotypic period during organogenesis

    PubMed Central

    Irie, Naoki; Kuratani, Shigeru

    2011-01-01

    One of the central issues in evolutionary developmental biology is how we can formulate the relationships between evolutionary and developmental processes. Two major models have been proposed: the 'funnel-like' model, in which the earliest embryo shows the most conserved morphological pattern, followed by diversifying later stages, and the 'hourglass' model, in which constraints are imposed to conserve organogenesis stages, which is called the phylotypic period. Here we perform a quantitative comparative transcriptome analysis of several model vertebrate embryos and show that the pharyngula stage is most conserved, whereas earlier and later stages are rather divergent. These results allow us to predict approximate developmental timetables between different species, and indicate that pharyngula embryos have the most conserved gene expression profiles, which may be the source of the basic body plan of vertebrates. PMID:21427719

  7. Systematic comparison of the behaviors produced by computational models of epileptic neocortex.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warlaumont, A. S.; Lee, H. C.; Benayoun, M.

    2010-12-01

    Two existing models of brain dynamics in epilepsy, one detailed (i.e., realistic) and one abstract (i.e., simplified) are compared in terms of behavioral range and match to in vitro mouse recordings. A new method is introduced for comparing across computational models that may have very different forms. First, high-level metrics were extracted from model and in vitro output time series. A principal components analysis was then performed over these metrics to obtain a reduced set of derived features. These features define a low-dimensional behavior space in which quantitative measures of behavioral range and degree of match to real data can be obtained. The detailed and abstract models and the mouse recordings overlapped considerably in behavior space. Both the range of behaviors and similarity to mouse data were similar between the detailed and abstract models. When no high-level metrics were used and principal components analysis was computed over raw time series, the models overlapped minimally with the mouse recordings. The method introduced here is suitable for comparing across different kinds of model data and across real brain recordings. It appears that, despite differences in form and computational expense, detailed and abstract models do not necessarily differ in their behaviors.

  8. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins.

    PubMed

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    This study was conducted to find the best-suited freely available software for modelling of proteins, using a few sample proteins. The proteins used ranged from small to large in size, with available crystal structures, for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)², (PS)²-v², and ModWeb were used for the comparison and model generation. The benchmarking process was done for four proteins, Icl, InhA and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to identify the most suitable software. Parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. This comparative study indicated that Phyre2 and Swiss-Model produce good models of small and large proteins compared with the other screened software. The other software was also good but often not very efficient in providing full-length and properly folded structures.

  9. Comparative Modelling of the Spectra of Cool Giants

    NASA Technical Reports Server (NTRS)

    Lebzelter, T.; Heiter, U.; Abia, C.; Eriksson, K.; Ireland, M.; Neilson, H.; Nowotny, W.; Maldonado, J.; Merle, T.; Peterson, R.; et al.

    2012-01-01

    Our ability to extract information from the spectra of stars depends on reliable models of stellar atmospheres and appropriate techniques for spectral synthesis. Various model codes and strategies for the analysis of stellar spectra are available today. Aims. We aim to compare the results of deriving stellar parameters using different atmosphere models and different analysis strategies. The focus is set on high-resolution spectroscopy of cool giant stars. Methods. Spectra representing four cool giant stars were made available to various groups and individuals working in the area of spectral synthesis, asking them to derive stellar parameters from the data provided. The results were discussed at a workshop in Vienna in 2010. Most of the major codes currently used in the astronomical community for analyses of stellar spectra were included in this experiment. Results. We present the results from the different groups, as well as an additional experiment comparing the synthetic spectra produced by various codes for a given set of stellar parameters. Similarities and differences of the results are discussed. Conclusions. Several valid approaches to analyze a given spectrum of a star result in quite a wide range of solutions. The main causes for the differences in parameters derived by different groups seem to lie in the physical input data and in the details of the analysis method. This clearly shows how far from a definitive abundance analysis we still are.

  10. Cost-Effectiveness Analysis Comparing Pre-Diagnosis Autism Spectrum Disorder (ASD)-Targeted Intervention with Ontario's Autism Intervention Program

    ERIC Educational Resources Information Center

    Penner, Melanie; Rayar, Meera; Bashir, Naazish; Roberts, S. Wendy; Hancock-Howard, Rebecca L.; Coyte, Peter C.

    2015-01-01

    Novel management strategies for autism spectrum disorder (ASD) propose providing interventions before diagnosis. We performed a cost-effectiveness analysis comparing the costs and dependency-free life years (DFLYs) generated by pre-diagnosis intensive Early Start Denver Model (ESDM-I); pre-diagnosis parent-delivered ESDM (ESDM-PD); and the Ontario…

  11. Validation of numerical models for flow simulation in labyrinth seals

    NASA Astrophysics Data System (ADS)

    Frączek, D.; Wróblewski, W.

    2016-10-01

    CFD results were compared with the results of experiments for the flow through the labyrinth seal. RANS turbulence models (k-epsilon, k-omega, SST and SST-SAS) were selected for the study. Steady and transient results were analyzed. ANSYS CFX was used for numerical computation. The analysis included flow through sealing section with the honeycomb land. Leakage flows and velocity profiles in the seal were compared. In addition to the comparison of computational models, the divergence of modeling and experimental results has been determined. Tips for modeling these problems were formulated.

  12. Developments in the application of the geometrical theory of diffraction and computer graphics to aircraft inter-antenna coupling analysis

    NASA Astrophysics Data System (ADS)

    Bogusz, Michael

    1993-01-01

    The need for a systematic methodology for the analysis of aircraft electromagnetic compatibility (EMC) problems is examined. The available computer aids used in aircraft EMC analysis are assessed and a theoretical basis is established for the complex algorithms which identify and quantify electromagnetic interactions. An overview is presented of one particularly well established aircraft antenna to antenna EMC analysis code, the Aircraft Inter-Antenna Propagation with Graphics (AAPG) Version 07 software. The specific new algorithms created to compute cone geodesics and their associated path losses and to graph the physical coupling path are discussed. These algorithms are validated against basic principles. Loss computations apply the uniform geometrical theory of diffraction and are subsequently compared to measurement data. The increased modelling and analysis capabilities of the newly developed AAPG Version 09 are compared to those of Version 07. Several models of real aircraft, namely the Electronic Systems Trainer Challenger, are generated and provided as a basis for this preliminary comparative assessment. Issues such as software reliability, algorithm stability, and quality of hardcopy output are also discussed.

  13. The use of docking-based comparative intermolecular contacts analysis to identify optimal docking conditions within glucokinase and to discover new GK activators

    NASA Astrophysics Data System (ADS)

    Taha, Mutasem O.; Habash, Maha; Khanfar, Mohammad A.

    2014-05-01

    Glucokinase (GK) is involved in normal glucose homeostasis and therefore it is a valid target for drug design and discovery efforts. GK activators (GKAs) have excellent potential as treatments of hyperglycemia and diabetes. The combined recent interest in GKAs, together with docking limitations and a shortage of docking validation methods, prompted us to use our new 3D-QSAR analysis, namely, docking-based comparative intermolecular contacts analysis (dbCICA), to validate docking configurations performed on a group of GKAs within the GK binding site. dbCICA assesses the consistency of docking by assessing the correlation between ligands' affinities and their contacts with binding site spots. Optimal dbCICA models were validated by receiver operating characteristic curve analysis and comparative molecular field analysis. dbCICA models were also converted into valid pharmacophores that were used as search queries to mine 3D structural databases for new GKAs. The search yielded several potent bioactivators that experimentally increased GK bioactivity up to 7.5-fold at 10 μM.

  14. Static analysis of a sonar dome rubber window

    NASA Technical Reports Server (NTRS)

    Lai, J. L.

    1978-01-01

    The application of NASTRAN (level 16.0.1) to the static analysis of a sonar dome rubber window (SDRW) was demonstrated. The assessment of the conventional model (neglecting the enclosed fluid) for the stress analysis of the SDRW was made by comparing its results to those based on a sophisticated model (including the enclosed fluid). The fluid was modeled with isoparametric linear hexahedron elements with approximate material properties whose shear modulus was much smaller than its bulk modulus. The effect of the chosen material property for the fluid is discussed.

  15. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
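
    A sketch of why analytic derivatives matter for gradient-based optimization: the same minimization run once with an exact gradient and once with finite-difference approximation, comparing function-evaluation counts. The objective is a generic test function (Rosenbrock), a stand-in rather than a PyCycle engine model.

        import numpy as np
        from scipy.optimize import minimize

        def f(x):
            return (x[0] - 1.0) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

        def grad(x):  # analytic gradient of f
            return np.array([
                2.0 * (x[0] - 1.0) - 400.0 * x[0] * (x[1] - x[0] ** 2),
                200.0 * (x[1] - x[0] ** 2),
            ])

        x0 = np.array([-1.2, 1.0])
        analytic = minimize(f, x0, jac=grad, method="BFGS")
        fd = minimize(f, x0, method="BFGS")   # gradient approximated by finite differences
        print("analytic: nfev =", analytic.nfev, " finite-difference: nfev =", fd.nfev)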

  16. Distribution of lod scores in oligogenic linkage analysis.

    PubMed

    Williams, J T; North, K E; Martin, L J; Comuzzie, A G; Göring, H H; Blangero, J

    2001-01-01

    In variance component oligogenic linkage analysis it can happen that the residual additive genetic variance bounds to zero when estimating the effect of the ith quantitative trait locus. Using quantitative trait Q1 from the Genetic Analysis Workshop 12 simulated general population data, we compare the observed lod scores from oligogenic linkage analysis with the empirical lod score distribution under a null model of no linkage. We find that zero residual additive genetic variance in the null model alters the usual distribution of the likelihood-ratio statistic.
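
    A sketch of the boundary effect described above: when a variance component is constrained at zero under the null, the likelihood-ratio statistic is commonly approximated as a 50:50 mixture of a point mass at 0 and a chi-square(1), and lod = LR / (2 ln 10). The simulation below illustrates that null distribution; it is not the Workshop data analysis.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 100_000
        lr = np.where(rng.random(n) < 0.5, 0.0, rng.chisquare(1, n))  # 50:50 mixture
        lod = lr / (2.0 * np.log(10.0))
        print("P(lod > 3) =", (lod > 3).mean())                 # tail used for linkage claims
        print("95th percentile of lod:", round(np.quantile(lod, 0.95), 3))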

  17. A general numerical model for wave rotor analysis

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel W.

    1992-01-01

    Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.

  18. Nonlinear multi-analysis of agent-based financial market dynamics by epidemic system

    NASA Astrophysics Data System (ADS)

    Lu, Yunfan; Wang, Jun; Niu, Hongli

    2015-10-01

    Based on an epidemic dynamical system, we construct a new agent-based financial time series model. To check and verify its rationality, we compare the statistical properties of the time series model with those of real stock market indices, the Shanghai Stock Exchange Composite Index and the Shenzhen Stock Exchange Component Index. For analyzing the statistical properties, we combine multi-parameter analysis with tail distribution analysis, modified rescaled range analysis, and multifractal detrended fluctuation analysis. For a better perspective, three-dimensional diagrams are used to present the analysis results. The empirical research in this paper indicates that the long-range dependence property and the multifractal phenomenon exist in both the real returns and the proposed model. The new agent-based financial model can therefore reproduce important features of real stock markets.
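
    A sketch of one of the listed diagnostics: classical rescaled-range (R/S) analysis estimating a Hurst exponent from a return series. The input here is simulated white noise (expected H near 0.5) rather than SSE/SZSE index returns, and the window sizes are arbitrary choices.

        import numpy as np

        def rs_hurst(x, window_sizes):
            """Slope of log(R/S) versus log(n) over the given window sizes."""
            logs = []
            for n in window_sizes:
                rs = []
                for start in range(0, len(x) - n + 1, n):
                    seg = x[start:start + n]
                    z = np.cumsum(seg - seg.mean())   # cumulative deviations
                    r = z.max() - z.min()             # range
                    s = seg.std()                     # scale
                    if s > 0:
                        rs.append(r / s)
                logs.append((np.log(n), np.log(np.mean(rs))))
            a = np.array(logs)
            return np.polyfit(a[:, 0], a[:, 1], 1)[0]

        returns = np.random.default_rng(9).normal(size=4096)
        print("Hurst exponent ~", round(rs_hurst(returns, [16, 32, 64, 128, 256]), 3))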

  19. The Communication Model Perspective of Oral Interpretation.

    ERIC Educational Resources Information Center

    Peterson, Eric E.

    Communication models suggest that oral interpretation is a communicative process, that this process may be represented by specification of implicit and explicit content and structure, and that the models themselves are useful. This paper examines these assumptions through a comparative analysis of communication models employed by oral…

  20. The effects of videotape modeling on staff acquisition of functional analysis methodology.

    PubMed

    Moore, James W; Fisher, Wayne W

    2007-01-01

    Lectures and two types of video modeling were compared to determine their relative effectiveness in training 3 staff members to conduct functional analysis sessions. Video modeling that contained a larger number of therapist exemplars resulted in mastery-level performance eight of the nine times it was introduced, whereas neither lectures nor partial video modeling produced significant improvements in performance. Results demonstrated that video modeling provided an effective training strategy but only when a wide range of exemplars of potential therapist behaviors were depicted in the videotape.

  1. The Effects of Videotape Modeling on Staff Acquisition of Functional Analysis Methodology

    PubMed Central

    Moore, James W; Fisher, Wayne W

    2007-01-01

    Lectures and two types of video modeling were compared to determine their relative effectiveness in training 3 staff members to conduct functional analysis sessions. Video modeling that contained a larger number of therapist exemplars resulted in mastery-level performance eight of the nine times it was introduced, whereas neither lectures nor partial video modeling produced significant improvements in performance. Results demonstrated that video modeling provided an effective training strategy but only when a wide range of exemplars of potential therapist behaviors were depicted in the videotape. PMID:17471805

  2. A study of the extended-range forecasting problem blocking

    NASA Technical Reports Server (NTRS)

    Chen, T. C.; Marshall, H. G.; Shukla, J.

    1981-01-01

    Wavenumber-frequency spectral analysis of a 90-day winter (Jan. 15 - April 14) wind field simulated by a climate experiment of the GLAS atmospheric circulation model is carried out using space-time Fourier analysis modified with Tukey's numerical spectral analysis. Computations are also made to examine how the model wave disturbances in the wavenumber-frequency domain are maintained by nonlinear interactions. Results are compared with observation. It is found that equatorial easterlies do not show up in this climate experiment at 200 mb. The zonal kinetic energy and momentum transport of stationary waves are too small in the model's Northern Hemisphere. The wavenumber and frequency spectra of the model are generally in good agreement with observation. However, some distinct features of the model's spectra are revealed. The wavenumber spectra of kinetic energy show that the eastward-moving waves of low wavenumbers have stronger zonal motion, while the eastward-moving waves of intermediate wavenumbers have larger meridional motion compared with observation. Furthermore, the eastward-moving waves show a band of large spectral value in the medium-frequency regime.

  3. Feasibility analysis of a Commercial HPWH with CO2 Refrigerant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nawaz, Kashif; Shen, Bo; Elatar, Ahmed F.

    2017-02-12

    A scoping-level analysis has been conducted to establish the feasibility of using CO2 as refrigerant for a commercial heat pump water heater (HPWH) for U.S. applications. The DOE/ORNL Heat Pump Design Model (HPDM) modeling tool was used for the assessment, with data from a Japanese heat pump water heater (Sanden) using CO2 as refrigerant for calibration. A CFD modeling tool was used to further refine the HPDM tank model. After calibration, the model was used to simulate the performance of commercial HPWHs using CO2 and R-134a (baseline). The parametric analysis concluded that compressor discharge pressure and water temperature stratification are critical parameters for the system. For comparable performance, the compressor size and water-heater size can be significantly different for R-134a and CO2 HPWHs. The proposed design deploying a gas-cooler configuration not only exceeds the Energy Star Energy Factor criterion (2.20), but is also comparable to some of the most efficient products in the market using conventional refrigerants.

  4. Analysis of SMA Hybrid Composite Structures in MSC.Nastran and ABAQUS

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Patel, Hemant D.

    2005-01-01

    A thermoelastic constitutive model for shape memory alloy (SMA) actuators and SMA hybrid composite (SMAHC) structures was recently implemented in the commercial finite element codes MSC.Nastran and ABAQUS. The model may be easily implemented in any code that has the capability for analysis of laminated composite structures with temperature dependent material properties. The model is also relatively easy to use and requires input of only fundamental engineering properties. A brief description of the model is presented, followed by discussion of implementation and usage in the commercial codes. Results are presented from static and dynamic analysis of SMAHC beams of two types: a beam clamped at each end and a cantilever beam. Nonlinear static (post-buckling) and random response analyses are demonstrated for the first specimen. Static deflection (shape) control is demonstrated for the cantilever beam. Approaches for modeling SMAHC material systems with embedded SMA in ribbon and small round wire product forms are demonstrated and compared. The results from the commercial codes are compared to those from a research code as validation of the commercial implementations; excellent correlation is achieved in all cases.

  5. Analysis of SMA Hybrid Composite Structures using Commercial Codes

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Patel, Hemant D.

    2004-01-01

    A thermomechanical model for shape memory alloy (SMA) actuators and SMA hybrid composite (SMAHC) structures has been recently implemented in the commercial finite element codes MSC.Nastran and ABAQUS. The model may be easily implemented in any code that has the capability for analysis of laminated composite structures with temperature dependent material properties. The model is also relatively easy to use and requires input of only fundamental engineering properties. A brief description of the model is presented, followed by discussion of implementation and usage in the commercial codes. Results are presented from static and dynamic analysis of SMAHC beams of two types: a beam clamped at each end and a cantilevered beam. Nonlinear static (post-buckling) and random response analyses are demonstrated for the first specimen. Static deflection (shape) control is demonstrated for the cantilevered beam. Approaches for modeling SMAHC material systems with embedded SMA in ribbon and small round wire product forms are demonstrated and compared. The results from the commercial codes are compared to those from a research code as validation of the commercial implementations; excellent correlation is achieved in all cases.

  6. Comparative evaluation of 1D and quasi-2D hydraulic models based on benchmark and real-world applications for uncertainty assessment in flood mapping

    NASA Astrophysics Data System (ADS)

    Dimitriadis, Panayiotis; Tegos, Aristoteles; Oikonomou, Athanasios; Pagana, Vassiliki; Koukouvinos, Antonios; Mamassis, Nikos; Koutsoyiannis, Demetris; Efstratiadis, Andreas

    2016-03-01

    One-dimensional and quasi-two-dimensional hydraulic freeware models (HEC-RAS, LISFLOOD-FP and FLO-2d) are widely used for flood inundation mapping. These models are tested on a benchmark test with a mixed rectangular-triangular channel cross section. Using a Monte Carlo approach, we employ extended sensitivity analysis by simultaneously varying the input discharge, longitudinal and lateral gradients and roughness coefficients, as well as the grid cell size. Based on statistical analysis of three output variables of interest, i.e. water depths at the inflow and outflow locations and total flood volume, we investigate the uncertainty enclosed in different model configurations and flow conditions, without the influence of errors and other assumptions on topography, channel geometry and boundary conditions. Moreover, we estimate the uncertainty associated with each input variable and compare it to the overall one. The outcomes of the benchmark analysis are further highlighted by applying the three models to real-world flood propagation problems, in the context of two challenging case studies in Greece.
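    A minimal sketch of this kind of Monte Carlo sensitivity experiment is given below. The "model" is Manning's equation for a wide rectangular channel rather than HEC-RAS/LISFLOOD-FP/FLO-2d, and all input ranges are hypothetical; the point is the pattern of sampling inputs jointly and attributing output variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical input ranges, loosely mirroring the varied quantities in the
# benchmark (discharge, slope, roughness).
q     = rng.uniform(50, 150, n)      # discharge [m^3/s]
slope = rng.uniform(1e-4, 1e-2, n)   # longitudinal gradient [-]
rough = rng.uniform(0.02, 0.08, n)   # Manning roughness n [s/m^(1/3)]
width = 20.0                         # channel width [m]

# Normal depth from Manning for a wide rectangular channel:
# q = (1/n) * width * h^(5/3) * sqrt(S)  =>  solve for h.
depth = (q * rough / (width * np.sqrt(slope))) ** (3.0 / 5.0)

# Overall output uncertainty, plus a crude per-input share via rank correlation.
print("depth mean/std:", depth.mean(), depth.std())
ranks_out = np.argsort(np.argsort(depth))
for name, v in [("discharge", q), ("slope", slope), ("roughness", rough)]:
    rho = np.corrcoef(np.argsort(np.argsort(v)), ranks_out)[0, 1]
    print(name, "Spearman rho ~", round(rho, 2))
```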

  7. Point-based and model-based geolocation analysis of airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Sefercik, Umut Gunes; Buyuksalih, Gurcan; Jacobsen, Karsten; Alkan, Mehmet

    2017-01-01

    Airborne laser scanning (ALS) is one of the most effective remote sensing technologies providing precise three-dimensional (3-D) dense point clouds. A large-size ALS digital surface model (DSM) covering the whole Istanbul province was analyzed by point-based and model-based comprehensive statistical approaches. Point-based analysis was performed using checkpoints on flat areas. Model-based approaches were implemented in two steps: strip-to-strip comparison of overlapping ALS DSMs individually in three subareas, and comparison of the merged ALS DSMs with terrestrial laser scanning (TLS) DSMs in four other subareas. In the model-based approach, the standard deviation of height and the normalized median absolute deviation were used as accuracy indicators, combined with the dependency on terrain inclination. The results demonstrate that terrain roughness has a strong impact on the vertical accuracy of ALS DSMs. The relative horizontal shifts, determined and partially improved by merging the overlapping strips and by comparison of the ALS and TLS data, were found not to be negligible. The analysis of the ALS DSM in relation to the TLS DSM allowed us to determine the characteristics of the DSM in detail.

  8. Penetration analysis of projectile with inclined concrete target

    NASA Astrophysics Data System (ADS)

    Kim, S. B.; Kim, H. W.; Yoo, Y. H.

    2015-09-01

    This paper presents numerical analysis results for projectile penetration into an inclined concrete target. We applied dynamic material properties of 4340 steel, aluminium, and explosive for the projectile body. Dynamic material properties were measured with a static tensile testing machine and Hopkinson pressure bar tests. Moreover, we used three concrete damage models included in LS-DYNA 3D: the SOIL_CONCRETE, CSCM (cap model with smooth interaction) and CONCRETE_DAMAGE (K&C concrete) models. The strain-rate effect for concrete material is important for predicting the fracture deformation and shape of the concrete and the penetration depth of projectiles. The CONCRETE_DAMAGE model with strain-rate effect was also applied to the penetration analysis. Analysis results with the CSCM model show good agreement with experimental penetration data. The projectile trace and fracture shapes of the concrete target were compared with experimental data.

  9. The multiple complex exponential model and its application to EEG analysis

    NASA Astrophysics Data System (ADS)

    Chen, Dao-Mu; Petzold, J.

    The paper presents a novel approach to the analysis of the EEG signal based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and results estimated on the basis of simulated data are presented and compared with those obtained by conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are closer to the recent supposition that the brain's dynamic behavior is nonlinear in character.

  10. Landscape patterns from mathematical morphology on maps with contagion

    Treesearch

    Kurt Riitters; Peter Vogt; Pierre Soille; Christine Estreguil

    2009-01-01

    The perceived realism of simulated maps with contagion (spatial autocorrelation) has led to their use for comparing landscape pattern metrics and as habitat maps for modeling organism movement across landscapes. The objective of this study was to conduct a neutral model analysis of pattern metrics defined by morphological spatial pattern analysis (MSPA) on maps with...

  11. The Effectiveness of Physical Models in Teaching Anatomy: A Meta-Analysis of Comparative Studies

    ERIC Educational Resources Information Center

    Yammine, Kaissar; Violato, Claudio

    2016-01-01

    There are various educational methods used in anatomy teaching. While three dimensional (3D) visualization technologies are gaining ground due to their ever-increasing realism, reports investigating physical models as a low-cost 3D traditional method are still the subject of considerable interest. The aim of this meta-analysis is to quantitatively…

  12. Analysis of high-rise constructions using three-dimensional models of rods in the finite element program PRINS

    NASA Astrophysics Data System (ADS)

    Agapov, Vladimir

    2018-03-01

    The necessity of new approaches to the modeling of rods in the analysis of high-rise constructions is justified. The possibility of applying three-dimensional superelements of rods with rectangular cross section to the static and dynamic calculation of bar and combined structures is considered. The results of a free-vibration analysis of an eighteen-story spatial frame, using both one-dimensional and three-dimensional models of rods, are presented. A comparative analysis of the obtained results is carried out, and on its basis conclusions are drawn on the applicability of three-dimensional superelements in the static and dynamic analysis of high-rise constructions.

  13. Comparing efficacy of reduced-toxicity allogeneic hematopoietic cell transplantation with conventional chemo-(immuno) therapy in patients with relapsed or refractory CLL: a Markov decision analysis.

    PubMed

    Kharfan-Dabaja, M A; Pidala, J; Kumar, A; Terasawa, T; Djulbegovic, B

    2012-09-01

    Despite therapeutic advances, relapsed/refractory CLL, particularly after fludarabine-based regimens, remains a major challenge for which optimal therapy is undefined. No randomized comparative data exist to suggest the superiority of reduced-toxicity allogeneic hematopoietic cell transplantation (RT-allo-HCT) over conventional chemo-(immuno) therapy (CCIT). By using estimates from a systematic review and by meta-analysis of available published evidence, we constructed a Markov decision model to examine these competing modalities. Cohort analysis demonstrated superior outcome for RT-allo-HCT, with a 10-month overall life expectancy (and 6-month quality-adjusted life expectancy (QALE)) advantage over CCIT. Although the model was sensitive to changes in base-case assumptions and transition probabilities, RT-allo-HCT provided superior overall life expectancy through a range of values supported by the meta-analysis. QALE was superior for RT-allo-HCT compared with CCIT. This conclusion was sensitive to change in the anticipated state utility associated with the post-allogeneic HCT state; however, RT-allo-HCT remained the optimal strategy for values supported by existing literature. This analysis provides a quantitative comparison of outcomes between RT-allo-HCT and CCIT for relapsed/refractory CLL in the absence of randomized comparative trials. Confirmation of these findings requires a prospective randomized trial, which compares the most effective RT-allo-HCT and CCIT regimens for relapsed/refractory CLL.

  14. Comparing models for perfluorooctanoic acid pharmacokinetics using Bayesian analysis

    EPA Science Inventory

    Selecting the appropriate pharmacokinetic (PK) model given the available data is investigated for perfluorooctanoic acid (PFOA), which has been widely analyzed with an empirical, one-compartment model. This research examined the results of experiments [Kemper R. A., DuPont Haskel...

  15. Comparative Study on the Prediction of Aerodynamic Characteristics of Aircraft with Turbulence Models

    NASA Astrophysics Data System (ADS)

    Jang, Yujin; Huh, Jinbum; Lee, Namhun; Lee, Seungsoo; Park, Youngmin

    2018-04-01

    The RANS equations are widely used to analyze complex flows over aircraft. The equations require a turbulence model for turbulent flow analyses. A suitable turbulence model must be selected for accurate predictions of aircraft aerodynamic characteristics. In this study, numerical analyses of three-dimensional aircraft are performed to compare the results of various turbulence models for the prediction of aircraft aerodynamic characteristics. A 3-D RANS solver, MSAPv, is used for the aerodynamic analysis. The four turbulence models compared are the Spalart-Allmaras (SA) model, Coakley's q-ω model, Huang and Coakley's k-ɛ model, and Menter's k-ω SST model. Four aircraft are considered: an ARA-M100, the DLR-F6 wing-body, the DLR-F6 wing-body-nacelle-pylon from the second drag prediction workshop, and a high-wing aircraft with nacelles. The CFD results are compared with experimental data and other published computational results. The details of separation patterns, shock positions, and Cp distributions are discussed to characterize the turbulence models.

  16. Comparative study of predicted and experimentally detected interplanetary shocks

    NASA Astrophysics Data System (ADS)

    Kartalev, M. D.; Grigorov, K. G.; Smith, Z.; Dryer, M.; Fry, C. D.; Sun, Wei; Deehr, C. S.

    2002-03-01

    We compare the real-time space weather predictions of shock arrival times at 1 AU made by the USAF/NOAA Shock Time of Arrival (STOA) and Interplanetary Shock Propagation Model (ISPM) models, and by the Exploration Physics International/University of Alaska Hakamada-Akasofu-Fry Solar Wind Model (HAF-v2), with a real-time analysis of plasma and field ACE data. The comparison is made using an algorithm that was developed on the basis of wavelet data analysis and an MHD identification procedure. The shock parameters are estimated for selected "candidate events". An automated Web-based interface periodically processes solar wind observations made by ACE at L1. Near-real-time results, as well as an archive of the registered events of interest, are available on a specially developed web site. A number of events are considered. These studies are essential for the validation of real-time space weather forecasts made from solar data.

  17. 3D analysis of eddy current loss in the permanent magnet coupling.

    PubMed

    Zhu, Zina; Meng, Zhuo

    2016-07-01

    This paper first presents a 3D analytical model for analyzing the radial air-gap magnetic field between the inner and outer magnetic rotors of permanent magnet couplings, using the Amperian current model. Based on the air-gap field analysis, the eddy current loss in the isolation cover is predicted according to Maxwell's equations. A 3D finite element analysis model is constructed to analyze the spatial distribution of the magnetic field and the vector eddy currents, and the simulation results obtained are analyzed and compared with the analytical method. Finally, the current losses of two types of practical magnet couplings are measured experimentally and compared with the theoretical results. It is concluded that the 3D analytical method for eddy current loss in the magnet coupling is viable and could be used for eddy current loss prediction of magnet couplings.

  18. Bayesian inference on risk differences: an application to multivariate meta-analysis of adverse events in clinical trials.

    PubMed

    Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng

    2013-05-01

    Multivariate meta-analysis is useful in combining evidence from independent studies which involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models which assume risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis where the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared to the conventional multivariate generalized linear mixed effects models, including a simple likelihood function, no need to specify a link function, and a closed-form expression for the distribution function of study-specific risk differences. We investigate the finite sample performance of this model by simulation studies and illustrate its use with an application to multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.

  19. Video analysis of the flight of a model aircraft

    NASA Astrophysics Data System (ADS)

    Tarantino, Giovanni; Fazio, Claudio

    2011-11-01

    A video-analysis software tool has been employed in order to measure the steady-state values of the kinematics variables describing the longitudinal behaviour of a radio-controlled model aircraft during take-off, climbing and gliding. These experimental results have been compared with the theoretical steady-state configurations predicted by the phugoid model for longitudinal flight. A comparison with the parameters and performance of the full-size aircraft has also been outlined.

  20. Critical Factors Analysis for Offshore Software Development Success by Structural Equation Modeling

    NASA Astrophysics Data System (ADS)

    Wada, Yoshihisa; Tsuji, Hiroshi

    In order to analyze the success/failure factors in offshore software development service by structural equation modeling, this paper proposes to follow two approaches in combination: domain-knowledge-based heuristic analysis and factor-analysis-based rational analysis. The former serves to generate and verify hypotheses about factors and causalities. The latter serves to verify factors introduced by theory, building the model without heuristics. Applying the proposed combined approach to questionnaire responses from skilled project managers, this paper found that the vendor properties have higher causality for success compared to software properties and project properties.

  1. Cluster-based upper body marker models for three-dimensional kinematic analysis: Comparison with an anatomical model and reliability analysis.

    PubMed

    Boser, Quinn A; Valevicius, Aïda M; Lavoie, Ewen B; Chapman, Craig S; Pilarski, Patrick M; Hebert, Jacqueline S; Vette, Albert H

    2018-04-27

    Quantifying angular joint kinematics of the upper body is a useful method for assessing upper limb function. Joint angles are commonly obtained via motion capture, tracking markers placed on anatomical landmarks. This method is associated with limitations including administrative burden, soft tissue artifacts, and intra- and inter-tester variability. An alternative method involves the tracking of rigid marker clusters affixed to body segments, calibrated relative to anatomical landmarks or known joint angles. The accuracy and reliability of applying this cluster method to the upper body has, however, not been comprehensively explored. Our objective was to compare three different upper body cluster models with an anatomical model, with respect to joint angles and reliability. Non-disabled participants performed two standardized functional upper limb tasks with anatomical and cluster markers applied concurrently. Joint angle curves obtained via the marker clusters with three different calibration methods were compared to those from an anatomical model, and between-session reliability was assessed for all models. The cluster models produced joint angle curves which were comparable to and highly correlated with those from the anatomical model, but exhibited notable offsets and differences in sensitivity for some degrees of freedom. Between-session reliability was comparable between all models, and good for most degrees of freedom. Overall, the cluster models produced reliable joint angles that, however, cannot be used interchangeably with anatomical model outputs to calculate kinematic metrics. Cluster models appear to be an adequate, and possibly advantageous alternative to anatomical models when the objective is to assess trends in movement behavior. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. [Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].

    PubMed

    Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping

    2014-06-01

    The stability and adaptability of models for qualitative analysis of near-infrared spectra were studied. Separate modeling can significantly improve the stability and adaptability of a model, but its capacity to improve adaptability is limited. Joint modeling can improve not only the adaptability of the model but also its stability; at the same time, compared with separate modeling, it shortens modeling time, reduces the modeling workload, extends the term of validity of the model, and improves modeling efficiency. The model adaptability experiment shows that the correct recognition rate of the separate modeling method is relatively low and cannot meet application requirements, whereas the joint modeling method reaches a correct recognition rate of 90% and significantly enhances recognition. The model stability experiment shows that the identification results of the jointly built model are better than those of the separately built model, and the method has good application value.

  3. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  4. Replica Analysis for Portfolio Optimization with Single-Factor Model

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2017-06-01

    In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model, and we compare the findings obtained from our proposed method for correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures from operations research for minimizing the investment risk.

  5. An Analysis of Simulated Wet Deposition of Mercury from the North American Mercury Model Intercomparison Study

    EPA Science Inventory

    A previous intercomparison of atmospheric mercury models in North America has been extended to compare simulated and observed wet deposition of mercury. Three regional-scale atmospheric mercury models were tested: CMAQ, REMSAD and TEAM. These models were each employed using thr...

  6. Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS

    NASA Astrophysics Data System (ADS)

    Barani, T.; Bruschi, E.; Pizzocri, D.; Pastore, G.; Van Uffelen, P.; Williamson, R. L.; Luzzi, L.

    2017-04-01

    The modelling of fission gas behaviour is a crucial aspect of nuclear fuel performance analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. In particular, experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of the burst release process in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of conventional diffusion-based models to introduce the burst release effect. The concept and governing equations of the model are presented, and the sensitivity of results to the newly introduced parameters is evaluated through an analytic sensitivity analysis. The model is assessed for application to integral fuel rod analysis by implementation in two structurally different fuel performance codes: BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D code). Model assessment is based on the analysis of 19 light water reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the quantitative predictions of integral fuel rod FGR and the qualitative representation of the FGR kinetics with the transient model relative to the canonical, purely diffusion-based models of the codes. The overall quantitative improvement of the integral FGR predictions in the two codes is comparable. Moreover, calculated radial profiles of xenon concentration after irradiation are investigated and compared to experimental data, illustrating the underlying representation of the physical mechanisms of burst release.

  7. Meta-analysis for the comparison of two diagnostic tests to a common gold standard: A generalized linear mixed model approach.

    PubMed

    Hoyer, Annika; Kuss, Oliver

    2018-05-01

    Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.

  8. Analysis of aerobatic aircraft noise using the FAA's Integrated Noise Model

    DOT National Transportation Integrated Search

    2012-09-30

    This project has three main objectives. The first objective is to model noise from complete aerobatic routines for a range of aircraft. The second is to compare modeled and previously measured aircraft noise from complete aerobatic routines for a ran...

  9. Comparing microscopic activity-based and traditional models of travel demand : an Austin area case study

    DOT National Transportation Integrated Search

    2007-09-01

    Two competing approaches to travel demand modeling exist today. The more traditional 4-step travel demand models rely on aggregate demographic data at a traffic analysis zone (TAZ) level. Activity-based microsimulation methods employ more robus...

  10. A RETROSPECTIVE ANALYSIS OF MODEL UNCERTAINTY FOR FORECASTING HYDROLOGIC CHANGE

    EPA Science Inventory

    GIS-based hydrologic modeling offers a convenient means of assessing the impacts associated with land-cover/use change for environmental planning efforts. Alternative future scenarios can be used as input to hydrologic models and compared with existing conditions to evaluate pot...

  11. EVALUATION OF ACID DEPOSITION MODELS USING PRINCIPAL COMPONENT SPACES

    EPA Science Inventory

    An analytical technique involving principal components analysis is proposed for use in the evaluation of acid deposition models. Relationships among model predictions are compared to those among measured data, rather than the more common one-to-one comparison of predictions to mea...

  12. A Comparative Meta-Analysis of 5E and Traditional Approaches in Turkey

    ERIC Educational Resources Information Center

    Anil, Özgür; Batdi, Veli

    2015-01-01

    The aim of this study is to compare the 5E learning model with traditional learning methods in terms of their effect on students' academic achievement, retention and attitude scores. In this context, the meta-analytic method known as the "analysis of analyses" was used and a review undertaken of the studies and theses (N = 14) executed…

  13. A Comparative Analysis of Collaborative Leadership Skills Employed by Graduates of Cohort Based and Non-Cohort Based Doctoral Programs in Educational Leadership

    ERIC Educational Resources Information Center

    Breton Caminos, Michelle Evangeline

    2015-01-01

    This qualitative comparative case analysis investigates the leadership approaches of the graduates of two educational leadership doctoral programs in Upstate New York--one a cohort-modeled program, the other a non-cohort program--with specific attention to collaboration. Responses from participants indicate key differences in Engaging Communities,…

  14. The aquatic animals' transcriptome resource for comparative functional analysis.

    PubMed

    Chou, Chih-Hung; Huang, Hsi-Yuan; Huang, Wei-Chih; Hsu, Sheng-Da; Hsiao, Chung-Der; Liu, Chia-Yu; Chen, Yu-Hung; Liu, Yu-Chen; Huang, Wei-Yun; Lee, Meng-Lin; Chen, Yi-Chang; Huang, Hsien-Da

    2018-05-09

    Aquatic animals have great economic and ecological importance. Among them, non-model organisms have been studied regarding eco-toxicity, stress biology, and environmental adaptation. Due to recent advances in next-generation sequencing techniques, large amounts of RNA-seq data for aquatic animals are publicly available. However, no comprehensive resource currently exists for the analysis, unification, and integration of these datasets. This study utilizes computational approaches to build a new resource of transcriptomic maps for aquatic animals. The resulting aquatic animal transcriptome map database, dbATM, provides de novo transcriptome assemblies, gene annotation, and comparative analysis for more than twenty aquatic organisms without a draft genome. To improve assembly quality, three computational tools (Trinity, Oases and SOAPdenovo-Trans) were employed to enhance individual transcriptome assembly, and the CAP3 and CD-HIT-EST software were then used to merge the three assembled transcriptomes. In addition, functional annotation analysis provides valuable clues to gene characteristics, including full-length transcript coding regions, conserved domains, gene ontology and KEGG pathways. Furthermore, the assembled gene sets support comparative genomics tasks such as constructing homologous gene groups, building BLAST databases, and phylogenetic analysis. In conclusion, we establish a resource for non-model aquatic animals, which are of great economic and ecological importance, providing transcriptomic information including functional annotation and comparative transcriptome analysis. The database is publicly accessible through the URL http://dbATM.mbc.nctu.edu.tw/ .

  15. The October 1973 NASA mission model analysis and economic assessment

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Results are presented of the 1973 NASA Mission Model Analysis. The purpose was to obtain an economic assessment of using the Shuttle to accommodate the payloads and requirements as identified by the NASA Program Offices and the DoD. The 1973 Payload Model represents a baseline candidate set of future payloads which can be used as a reference base for planning purposes. The cost of implementing these payload programs utilizing the capabilities of the shuttle system is analyzed and compared with the cost of conducting the same payload effort using expendable launch vehicles. There is a net benefit of 14.1 billion dollars as a result of using the shuttle during the 12-year period as compared to using an expendable launch vehicle fleet.

  16. Comparative dynamic analysis of the full Grossman model.

    PubMed

    Ried, W

    1998-08-01

    The paper applies the method of comparative dynamic analysis to the full Grossman model. For a particular class of solutions, it derives the equations implicitly defining the complete trajectories of the endogenous variables. Relying on the concept of Frisch decision functions, the impact of any parametric change on an endogenous variable can be decomposed into a direct and an indirect effect. The focus of the paper is on marginal changes in the rate of health capital depreciation. It also analyses the impact of either initial financial wealth or the initial stock of health capital. While the direction of most effects remains ambiguous in the full model, the assumption of a zero consumption benefit of health is sufficient to obtain a definite sign for any direct or indirect effect.

  17. Transitions in state public health law: comparative analysis of state public health law reform following the Turning Point Model State Public Health Act.

    PubMed

    Meier, Benjamin Mason; Hodge, James G; Gebbie, Kristine M

    2009-03-01

    Given the public health importance of law modernization, we undertook a comparative analysis of policy efforts in 4 states (Alaska, South Carolina, Wisconsin, and Nebraska) that have considered public health law reform based on the Turning Point Model State Public Health Act. Through national legislative tracking and state case studies, we investigated how the Turning Point Act's model legal language has been considered for incorporation into state law and analyzed key facilitating and inhibiting factors for public health law reform. Our findings provide the practice community with a research base to facilitate further law reform and inform future scholarship on the role of law as a determinant of the public's health.

  18. A Comparative Analysis of Spatial Visualization Ability and Drafting Models for Industrial and Technology Education Students

    ERIC Educational Resources Information Center

    Katsioloudis, Petros; Jovanovic, Vukica; Jones, Mildred

    2014-01-01

    The main purpose of this study was to determine significant positive effects among the use of three different types of drafting models, and to identify whether any differences exist towards promotion of spatial visualization ability for students in Industrial Technology and Technology Education courses. In particular, the study compared the use of…

  19. 3D-QSAR modeling and molecular docking studies on a series of 2,5 disubstituted 1,3,4-oxadiazoles

    NASA Astrophysics Data System (ADS)

    Ghaleb, Adib; Aouidate, Adnane; Ghamali, Mounir; Sbai, Abdelouahid; Bouachrine, Mohammed; Lakhlifi, Tahar

    2017-10-01

    3D-QSAR (comparative molecular field analysis (CoMFA)) and comparative molecular similarity indices analysis (CoMSIA) were performed on novel 2,5-disubstituted 1,3,4-oxadiazole analogues as anti-fungal agents. The CoMFA and CoMSIA models, using 13 compounds in the training set, give Q² values of 0.52 and 0.51, respectively, and R² values of 0.92. The adapted alignment method with suitable parameters resulted in reliable models. The contour maps produced by the CoMFA and CoMSIA models were employed to determine a three-dimensional quantitative structure-activity relationship. Based on this study, a set of new molecules with high predicted activities was designed. Surflex-docking confirmed the stability of the predicted molecules in the receptor.

  20. Creep analysis of silicone for podiatry applications.

    PubMed

    Janeiro-Arocas, Julia; Tarrío-Saavedra, Javier; López-Beceiro, Jorge; Naya, Salvador; López-Canosa, Adrián; Heredia-García, Nicolás; Artiaga, Ramón

    2016-10-01

    This work shows an effective methodology to characterize the creep-recovery behavior of silicones before their application in podiatry. The aim is to characterize, model and compare the creep-recovery properties of different types of silicone used in podiatry orthotics. Creep-recovery phenomena of silicones used in podiatry orthotics are characterized by dynamic mechanical analysis (DMA). Silicones provided by Herbitas are compared by observing their viscoelastic properties through Functional Data Analysis (FDA) and nonlinear regression. The relationship between strain and time is modeled by fixed and mixed effects nonlinear regression to compare podiatry silicones easily and intuitively. Functional ANOVA and the Kohlrausch-Williams-Watts (KWW) model with fixed and mixed effects allow us to compare different silicones by observing the values of the fitting parameters and their physical meaning. The differences between silicones are related to variations in the breadth of the creep-recovery time distribution and in the instantaneous deformation-permanent strain. Nevertheless, the mean creep-relaxation time is the same for all the studied silicones. Silicones used in palliative orthoses have higher instantaneous deformation-permanent strain and a narrower creep-recovery distribution. The proposed methodology based on DMA, FDA and nonlinear regression is a useful tool to characterize and choose the proper silicone for each podiatry application according to viscoelastic properties. Copyright © 2016 Elsevier Ltd. All rights reserved.
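    As a sketch of the kind of fixed-effects fit involved, the snippet below fits a Kohlrausch-Williams-Watts (stretched-exponential) creep law to synthetic data with SciPy. The functional form, starting values, and the synthetic "DMA" data are illustrative assumptions, not the paper's actual model specification or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def kww_creep(t, eps0, eps_v, tau, beta):
    """Creep strain under constant load: an instantaneous part plus a
    Kohlrausch-Williams-Watts (stretched-exponential) delayed part.
    A generic fixed-effects form; the paper also fits mixed-effects variants."""
    return eps0 + eps_v * (1.0 - np.exp(-(t / tau) ** beta))

# Synthetic data standing in for a silicone creep segment (hypothetical values).
t = np.linspace(0.1, 600.0, 200)                       # time [s]
y = kww_creep(t, 0.8, 1.5, 120.0, 0.6)
y += np.random.default_rng(1).normal(0, 0.02, t.size)  # measurement noise

popt, _ = curve_fit(kww_creep, t, y, p0=[0.5, 1.0, 100.0, 0.5],
                    bounds=([0, 0, 1e-3, 0.05], [5, 5, 1e4, 1.0]))
eps0, eps_v, tau, beta = popt
# beta < 1 broadens the retardation-time spectrum; tau is the mean
# creep-recovery time scale the paper compares across silicones.
```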

  1. Usefulness of high resolution coastal models for operational oil spill forecast: the Full City accident

    NASA Astrophysics Data System (ADS)

    Broström, G.; Carrasco, A.; Hole, L. R.; Dick, S.; Janssen, F.; Mattsson, J.; Berger, S.

    2011-06-01

    Oil spill modeling is considered to be an important decision support system (DeSS), useful for remedial action in case of accidents as well as for designing the environmental monitoring systems that are frequently set up after major accidents. Many accidents take place in coastal areas, implying that low-resolution basin-scale ocean models are of limited use for predicting the trajectories of an oil spill. In this study, we target the oil spill in connection with the Full City accident on the Norwegian south coast and compare three different oil spill models for the area. The result of the analysis is that all models do a satisfactory job. The "standard" operational model for the area is shown to have severe flaws, but when an analysis based on a higher-resolution model (1.5 km resolution) for the area is included, the model system shows results that compare well with observations. The study also shows that an ensemble using three different models is useful when predicting/analyzing oil spills in coastal areas.

  2. Robotic Versus Open Renal Transplantation in Obese Patients: Protocol for a Cost-Benefit Markov Model Analysis

    PubMed Central

    Puttarajappa, Chethan; Wijkstrom, Martin; Ganoza, Armando; Lopez, Roberto; Tevar, Amit

    2018-01-01

    Background Recent studies have reported a significant decrease in wound problems and hospital stay in obese patients undergoing renal transplantation by robotic-assisted minimally invasive techniques, with no difference in graft function. Objective Due to the lack of cost-benefit studies on the use of robotic-assisted renal transplantation versus the open surgical procedure, the primary aim of our study is to develop a Markov model to analyze the cost-benefit of robotic surgery versus open traditional surgery in obese patients in need of a renal transplant. Methods Electronic searches will be conducted to identify studies comparing open renal transplantation versus robotic-assisted renal transplantation. Costs associated with the two surgical techniques will incorporate the expenses of the resources used for the operations. A decision analysis model will be developed to simulate a randomized controlled trial comparing three interventional arms: (1) continuation of renal replacement therapy for patients who are considered non-suitable candidates for renal transplantation due to obesity, (2) transplant recipients undergoing open transplant surgery, and (3) transplant patients undergoing robotic-assisted renal transplantation. TreeAge Pro 2017 R1 (TreeAge Software, Williamstown, MA, USA) will be used to create a Markov model, and microsimulation will be used to compare costs and benefits for the two competing surgical interventions. Results The model will simulate a randomized controlled trial of adult obese patients affected by end-stage renal disease undergoing renal transplantation. The absorbing state of the model will be patients' death from any cause. By choosing death as the absorbing state, we will be able to simulate the population of renal transplant recipients from the day of their randomization to transplant surgery or continuation on renal replacement therapy to their death, and to perform sensitivity analysis around patients' age at the time of randomization to determine if age is a critical variable for cost-benefit or cost-effectiveness analysis comparing renal replacement therapy, robotic-assisted surgery, or open renal transplant surgery. Conclusions After running the model, one of the three competing strategies will emerge as the most cost-beneficial or cost-effective under common circumstances. To assess the robustness of the results, a multivariable probabilistic sensitivity analysis will be performed by modifying the mean values and confidence intervals of key parameters, with the main intent of assessing whether the winning strategy is sensitive to rigorous and plausible variations of those values. PMID:29519780
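    For readers unfamiliar with the machinery, a minimal Markov cohort sketch of this kind of model is shown below (a cohort version rather than patient-level microsimulation, and without the protocol's three-arm comparison). Every state, transition probability, cost, and utility value is a hypothetical placeholder, not data from the protocol.

```python
import numpy as np

# Annual-cycle Markov cohort sketch with death as the absorbing state.
# State order: [functioning_graft, dialysis, dead].
def run_strategy(P, cost, utility, horizon=40, disc=0.03):
    dist = np.array([1.0, 0.0, 0.0])       # cohort starts with a working graft
    total_cost = total_qaly = 0.0
    for year in range(horizon):
        d = 1.0 / (1.0 + disc) ** year      # discount factor for this cycle
        total_cost += d * dist @ cost
        total_qaly += d * dist @ utility
        dist = dist @ P                     # advance the cohort one cycle
    return total_cost, total_qaly

# Hypothetical transition matrix for one surgical arm (rows sum to 1).
P_robotic = np.array([[0.92, 0.05, 0.03],
                      [0.00, 0.90, 0.10],
                      [0.00, 0.00, 1.00]])
cost    = np.array([15_000.0, 80_000.0, 0.0])   # annual cost per state [$]
utility = np.array([0.85, 0.60, 0.0])           # annual QALY weight per state

c, q = run_strategy(P_robotic, cost, utility)
print(f"discounted cost ${c:,.0f}, QALYs {q:.2f}")
```

    Comparing two arms would mean running this with a second transition matrix and taking the ratio of cost and QALY differences; the probabilistic sensitivity analysis the protocol describes re-runs that comparison over draws of the uncertain parameters.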

  3. Structural models of antibody variable fragments: A method for investigating binding mechanisms

    NASA Astrophysics Data System (ADS)

    Petit, Samuel; Brard, Frédéric; Coquerel, Gérard; Perez, Guy; Tron, François

    1998-03-01

    The value of comparative molecular modeling for elucidating structure-function relationships was demonstrated by analyzing six anti-nucleosome autoantibody variable fragments. Structural models were built using the automated procedure developed in the COMPOSER software, subsequently minimized with the AMBER force field, and validated according to several standard geometric and chemical criteria. Canonical class assignment from Chothia and Lesk's work [Chothia and Lesk, J. Mol. Biol., 196 (1987) 901; Chothia et al., Nature, 342 (1989) 877] was used as a supplementary validation tool for five of the six hypervariable loops. The analysis, based on the hypothesis that antigen binding could occur through electrostatic interactions, reveals a diversity of possible binding mechanisms of anti-nucleosome or anti-histone antibodies to their cognate antigen. These results lead us to postulate that anti-nucleosome autoantibodies could have different origins. Since both anti-DNA and anti-nucleosome autoantibodies are produced during the course of systemic lupus erythematosus, a non-organ-specific autoimmune disease, a comparative structural and electrostatic analysis of the two populations of autoantibodies may constitute a way to elucidate their origin and the role of the antigen in tolerance breakdown. The present study illustrates some of the interest, advantages and limits of a methodology based on the use of comparative modeling and analysis of molecular surface properties.

  4. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
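    A minimal sketch of the general idea (bootstrap draws feeding a decision quantity) follows. The patient-level data are simulated stand-ins, the "decision model" is reduced to an incremental cost-effectiveness ratio, and none of the numbers relate to the H. pylori analysis itself.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical patient-level trial data: cost and eradication success per arm.
# Resampling raw observations sidesteps choosing a theoretical distribution
# for each model parameter, which is the paper's motivation for the bootstrap.
cost_a = rng.gamma(shape=4.0, scale=50.0, size=120)   # therapy A costs
cure_a = rng.binomial(1, 0.85, size=120)
cost_b = rng.gamma(shape=4.0, scale=40.0, size=120)   # therapy B costs
cure_b = rng.binomial(1, 0.75, size=120)

icers = []
for _ in range(5000):
    ia = rng.integers(0, 120, 120)      # resample patients with replacement
    ib = rng.integers(0, 120, 120)
    d_cost = cost_a[ia].mean() - cost_b[ib].mean()
    d_eff  = cure_a[ia].mean() - cure_b[ib].mean()
    if d_eff != 0:                      # crude guard against division by zero
        icers.append(d_cost / d_eff)    # incremental cost per extra eradication

lo, hi = np.percentile(icers, [2.5, 97.5])
print(f"ICER 95% interval: {lo:.0f} to {hi:.0f} per additional eradication")
```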

  5. Premium analysis for copula model: A case study for Malaysian motor insurance claims

    NASA Astrophysics Data System (ADS)

    Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah

    2014-06-01

    This study performs premium analysis for copula models with regression marginals. For illustration purposes, the copula models are fitted to Malaysian motor insurance claims data. In this study, we consider copula models from the Archimedean and Elliptical families, and marginal distributions from Gamma and Inverse Gaussian regression models. The simulated results from the independent model, which is obtained by fitting regression models separately to each claim category, and the dependent model, which is obtained by fitting copula models to all claim categories, are compared. The results show that the dependent model using the Frank copula is the best model, since the risk premiums estimated under this model closely approximate the actual claims experience, relative to the other copula models.

  6. Cost-effectiveness analysis of trastuzumab emtansine (T-DM1) in human epidermal growth factor receptor 2 (HER2): positive advanced breast cancer.

    PubMed

    Le, Quang A; Bae, Yuna H; Kang, Jenny H

    2016-10-01

    The EMILIA trial demonstrated that trastuzumab emtansine (T-DM1) significantly increased median progression-free and overall survival relative to combination therapy with lapatinib plus capecitabine (LC) in patients with HER2-positive advanced breast cancer (ABC) previously treated with trastuzumab and a taxane. We performed an economic analysis of T-DM1 as a second-line therapy compared to LC and monotherapy with capecitabine (C) from both the US payer and societal perspectives. We developed four possible Markov models for ABC to compare the projected lifetime costs and outcomes of T-DM1, LC, and C. Model transition probabilities were estimated from the EMILIA and EGF100151 clinical trials. Direct costs of the therapies, major adverse events, laboratory tests, and disease progression, indirect costs (productivity losses due to morbidity and mortality), and health utilities were obtained from published sources. The models used a 3% discount rate and are reported in 2015 US dollars. Probabilistic sensitivity analysis and model averaging were used to account for model parametric and structural uncertainty. When incorporating both model parametric and structural uncertainty, the resulting incremental cost-effectiveness ratios (ICERs) comparing T-DM1 to LC and T-DM1 to C were $183,828 per quality-adjusted life year (QALY) and $126,001/QALY from the societal perspective, respectively. From the payer's perspective, the ICERs were $220,385/QALY (T-DM1 vs. LC) and $168,355/QALY (T-DM1 vs. C). From both the US payer and societal perspectives, T-DM1 is not cost-effective compared to the LC combination therapy at a willingness-to-pay threshold of $150,000/QALY. T-DM1 might have a better chance of being cost-effective compared to capecitabine monotherapy from the US societal perspective.

  7. Anchor Selection Strategies for DIF Analysis: Review, Assessment, and New Approaches

    ERIC Educational Resources Information Center

    Kopf, Julia; Zeileis, Achim; Strobl, Carolin

    2015-01-01

    Differential item functioning (DIF) indicates the violation of the invariance assumption, for instance, in models based on item response theory (IRT). For item-wise DIF analysis using IRT, a common metric for the item parameters of the groups that are to be compared (e.g., for the reference and the focal group) is necessary. In the Rasch model,…

  8. [Analysis of binary classification repeated measurement data with GEE and GLMMs using SPSS software].

    PubMed

    An, Shengli; Zhang, Yanhong; Chen, Zheng

    2012-12-01

    To analyze binary classification repeated measurement data with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) using SPSS 19.0. GEE and GLMM models were tested on a sample of binary classification repeated measurement data using SPSS 19.0. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated measurement data using GEE and GLMMs.

  9. Application of Cognitive Apprenticeship Model to a Graduate Course in Performance Systems Analysis: A Case Study

    ERIC Educational Resources Information Center

    Darabi, A. Aubteen

    2005-01-01

    This article reports a case study describing how the principles of a cognitive apprenticeship (CA) model developed by Collins, Brown, and Holum (1991) were applied to a graduate course on performance systems analysis (PSA), and the differences this application made in student performance and evaluation of the course compared to the previous…

  10. Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.

    PubMed

    Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V

    2016-10-01

    An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand, with varying concentrations of aluminum, were made for testing the system prior to application in human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition method and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5 μg Al/g Ca) compared to the inverse-variance weighted mean (5.2 μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
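    The inverse-variance weighted mean referred to above is a generic pooling formula, not something specific to this system; a small sketch with made-up repeat measurements is:

```python
import numpy as np

def ivw_mean(estimates, variances):
    """Inverse-variance weighted mean of repeated measurements.

    Each estimate is weighted by 1/variance, so noisier runs contribute less;
    the combined variance is 1 / sum(1/var_i)."""
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.asarray(estimates, dtype=float)
    mean = np.sum(w * est) / np.sum(w)
    var = 1.0 / np.sum(w)
    return mean, var

# Example: three hypothetical repeat Al/Ca determinations of differing precision.
m, v = ivw_mean([4.8, 5.6, 5.1], [0.9, 1.4, 0.7])
print(f"pooled: {m:.2f} +/- {v**0.5:.2f} ug Al/g Ca")
```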

  11. Convection in Extratropical Cyclones: Analysis of GPM, NexRAD, GCMs and Re-Analysis

    NASA Astrophysics Data System (ADS)

    Jeyaratnam, J.; Booth, J. F.; Naud, C. M.; Luo, J.

    2017-12-01

    Extratropical Cyclones (ETCs) are the most common cause of extreme precipitation in the mid-latitudes and are important in the general atmospheric circulation as they redistribute moisture and heat. Isentropic lifting, upright convection, and slantwise convection are mechanisms of vertical motion within an ETC, which deliver different rain rates and might respond differently to global warming. In this study we compare different metrics for identifying convection within ETCs and calculate the relative contribution of convection to total ETC precipitation. We determine whether convection occurs preferentially in specific regions of the storm and how best to utilize GPM retrievals covering other parts of the mid-latitudes. Additionally, mid-latitude cyclones are tracked and composites of these tracked cyclones are compared amongst multiple versions of Global Circulation Models (GCMs) from Coupled Model Intercomparison Project Phase 6 (CMIP6) prototype models and re-analysis data: Model Diagnostic Task Force (MDTF) Geophysical Fluid Dynamics Laboratory (GFDL) using a two-plume convection scheme, MDTF GFDL using the Donner convection scheme, the Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2), and the European Reanalysis produced by the European Centre for Medium-Range Weather Forecasts (ECMWF).

  12. Functional approach to high-throughput plant growth analysis

    PubMed Central

    2013-01-01

    Method Taking advantage of the current rapid development in imaging systems and computer vision algorithms, we present HPGA, a high-throughput phenotyping platform for plant growth modeling and functional analysis, which produces a better understanding of energy distribution with regard to the balance between growth and defense. HPGA has two components, PAE (Plant Area Estimation) and GMA (Growth Modeling and Analysis). In PAE, by taking the complex leaf overlap problem into consideration, the area of every plant is measured from top-view images in four steps. Given the abundant measurements obtained with PAE, in the second module GMA, a nonlinear growth model is applied to generate growth curves, followed by functional data analysis. Results Experimental results on the model plant Arabidopsis thaliana show that, compared to an existing approach, HPGA reduces the error rate of measuring plant area by half. The application of HPGA to cfq mutant plants under fluctuating light reveals a correlation between low photosynthetic rates and small plant area (compared to wild type), which raises the hypothesis that knocking out cfq changes the sensitivity of the energy distribution under fluctuating light conditions to repress leaf growth. Availability HPGA is available at http://www.msu.edu/~jinchen/HPGA. PMID:24565437
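    The abstract does not specify the functional form of GMA's growth model, so the sketch below fits a three-parameter logistic to hypothetical plant-area data purely as an illustration of generating a growth curve from repeated area measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Three-parameter logistic growth curve fitted to hypothetical plant-area
    # measurements; the functional form is an assumption, since the abstract
    # does not specify GMA's growth model.
    def logistic(t, A, k, t0):
        return A / (1.0 + np.exp(-k * (t - t0)))

    days = np.arange(0, 21, dtype=float)
    area = (logistic(days, 30.0, 0.4, 10.0)
            + np.random.default_rng(1).normal(0.0, 0.8, days.size))

    (A, k, t0), _ = curve_fit(logistic, days, area,
                              p0=[area.max(), 0.1, days.mean()])
    # The maximum slope of a logistic is A*k/4, reached at t = t0.
    print(f"asymptote={A:.1f}, peak growth={A * k / 4:.2f} area/day at day {t0:.1f}")
    ```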

  13. Cost-utility of quadrivalent versus trivalent influenza vaccine in Brazil - comparison of outcomes from different static model types.

    PubMed

    Van Bellinghen, Laure-Anne; Marijam, Alen; Tannus Branco de Araujo, Gabriela; Gomez, Jorge; Van Vlaenderen, Ilse

    Influenza burden in Brazil is considerable, with 4.2-6.4 million cases in 2008 and influenza-like-illness responsible for 16.9% of hospitalizations. Cost-effectiveness of influenza vaccination may be assessed by different types of models, with limitations due to data availability, assumptions, and modelling approach. To understand the impact of model complexity, the cost-utility of quadrivalent versus trivalent influenza vaccines in Brazil was estimated using three distinct models: a 1-year decision tree population model with three age groups (FLOU); a more detailed 1-year population model with five age groups (FLORA); and a more complex lifetime multi-cohort Markov model with nine age groups (FLORENCE). Analysis 1 (impact of model structure) compared each model using the same data inputs (i.e., best available data for FLOU). Analysis 2 (impact of increasing granularity) compared each model populated with the best available data for that model. Using the best data for each model, the discounted cost-utility ratio of quadrivalent versus trivalent influenza vaccine was R$20,428 with FLOU, R$22,768 with FLORA (versus R$20,428 in Analysis 1), and R$19,257 with FLORENCE (versus R$22,490 in Analysis 1) using a lifetime horizon. Conceptual differences between FLORA and FLORENCE meant the same assumption regarding increased all-cause mortality in at-risk individuals had an opposite effect on the incremental cost-effectiveness ratio in Analysis 2 versus 1, and a proportionally higher number of vaccinated elderly in FLORENCE reduced this ratio in Analysis 2. FLOU provided adequate cost-effectiveness estimates with data in broad age groups. FLORA increased insights (e.g., in healthy versus at-risk, paediatric, respiratory/non-respiratory complications). FLORENCE provided greater insights and precision (e.g., in elderly, costs and complications, lifetime cost-effectiveness). All three models predicted a cost per quality-adjusted life year gained for quadrivalent versus trivalent influenza vaccine in the range of R$19,257 (FLORENCE) to R$22,768 (FLORA) with the best available data in Brazil (Appendix A). Copyright © 2018 Sociedade Brasileira de Infectologia. Published by Elsevier Editora Ltda. All rights reserved.

  14. (abstract) Generic Modeling of a Life Support System for Process Technology Comparisons

    NASA Technical Reports Server (NTRS)

    Ferrall, J. F.; Seshan, P. K.; Rohatgi, N. K.; Ganapathi, G. B.

    1993-01-01

    This paper describes a simulation model called the Life Support Systems Analysis Simulation Tool (LiSSA-ST), the spreadsheet program called the Life Support Systems Analysis Trade Tool (LiSSA-TT), and the Generic Modular Flow Schematic (GMFS) modeling technique. Results of using the LiSSA-ST and the LiSSA-TT will be presented for comparing life support systems and process technology options for a Lunar Base and a Mars Exploration Mission.

  15. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
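    A toy illustration of the abstract's core point, using a stand-in quadratic objective rather than an engine cycle model: supplying an analytic gradient (the jac argument) to a gradient-based optimizer avoids the extra function evaluations that finite-difference approximation requires.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # A stand-in quadratic objective, not an engine cycle model.
    def f(x):
        return (x[0] - 3.0)**2 + 10.0 * (x[1] + 1.0)**2

    def grad_f(x):  # analytic gradient
        return np.array([2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)])

    x0 = np.zeros(2)
    with_jac = minimize(f, x0, jac=grad_f, method="BFGS")
    without = minimize(f, x0, method="BFGS")  # gradient by finite differences

    print("function evaluations, analytic gradient:", with_jac.nfev)
    print("function evaluations, finite differences:", without.nfev)
    ```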

  16. Methods and theory in bone modeling drift: comparing spatial analyses of primary bone distributions in the human humerus.

    PubMed

    Maggiano, Corey M; Maggiano, Isabel S; Tiesler, Vera G; Chi-Keb, Julio R; Stout, Sam D

    2016-01-01

    This study compares two novel methods quantifying bone shaft tissue distributions, and relates observations on human humeral growth patterns for applications in anthropological and anatomical research. Microstructural variation in compact bone occurs due to developmental and mechanically adaptive circumstances that are 'recorded' by forming bone and are important for interpretations of growth, health, physical activity, adaptation, and identity in the past and present. Those interpretations hinge on a detailed understanding of the modeling process by which bones achieve their diametric shape, diaphyseal curvature, and general position relative to other elements. Bone modeling is a complex aspect of growth, potentially causing the shaft to drift transversely through formation and resorption on opposing cortices. Unfortunately, the specifics of modeling drift are largely unknown for most skeletal elements. Moreover, bone modeling has seen little quantitative methodological development compared with secondary bone processes, such as intracortical remodeling. The techniques proposed here, starburst point-count and 45° cross-polarization hand-drawn histomorphometry, permit the statistical and populational analysis of human primary tissue distributions and provide similar results despite being suitable for different applications. This analysis of a pooled archaeological and modern skeletal sample confirms the importance of extreme asymmetry in bone modeling as a major determinant of microstructural variation in diaphyses. Specifically, humeral drift is posteromedial in the human humerus, accompanied by a significant rotational trend. In general, results encourage the usage of endocortical primary bone distributions as an indicator and summary of bone modeling drift, enabling quantitative analysis by direction and proportion in other elements and populations. © 2015 Anatomical Society.

  17. Application of clustering analysis in the prediction of photovoltaic power generation based on neural network

    NASA Astrophysics Data System (ADS)

    Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.

    2017-11-01

    To select effective samples from the large volume of multi-year PV power generation data and improve the accuracy of PV power generation forecasting models, this paper studies the application of clustering analysis in this field and establishes a forecasting model based on neural networks. Based on three different weather types (sunny, cloudy, and rainy days), this research screens samples of historical data using clustering analysis. After screening, BP neural network prediction models are established using the screened data as training data. The six types of photovoltaic power generation prediction models, before and after data screening, are then compared. Results show that a prediction model combining clustering analysis with BP neural networks is an effective method to improve the precision of photovoltaic power generation forecasting.
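    A minimal sketch of the two-stage scheme on synthetic data: weather features are clustered into three groups (mirroring the sunny/cloudy/rainy split), then one network is trained per cluster on the screened samples. Feature names and sizes are illustrative, not from the paper.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    # Synthetic stand-in data: daily weather features (e.g., irradiance,
    # temperature, humidity) and PV output.
    rng = np.random.default_rng(0)
    X = rng.random((600, 3))
    y = 5.0 * X[:, 0] + rng.normal(0.0, 0.2, 600)   # output driven by irradiance

    # Stage 1: screen historical samples into three weather-type clusters.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Stage 2: train one neural-network predictor per cluster on screened data.
    models = {c: MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                              random_state=0).fit(X[labels == c], y[labels == c])
              for c in range(3)}

    # To forecast a new day, assign it to a cluster and use that cluster's model.
    ```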

  18. Quantitative image analysis of immunohistochemical stains using a CMYK color model

    PubMed Central

    Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W

    2007-01-01

    Background Computer image analysis techniques have decreased the effects of observer bias, and increased the sensitivity and throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin counterstained histological sections. Results The spectral characteristics of the chromogens AEC, DAB and NovaRed as well as the counterstain hematoxylin were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities on a 0–255 scale (24-bit image file), as well as compared to the hematoxylin counterstain, was greatest using the Yellow channel of a CMYK color model, suggesting an improved sensitivity for IHC evaluation compared to other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using the Yellow channel image analysis, was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, a quantification of DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion The Yellow channel image analysis method based on a CMYK color model is independent of observer biases for threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity and is applicable to simple automation procedures. These characteristics are advantageous for both basic as well as clinical research in an unbiased, reproducible and high throughput evaluation of IHC intensity. PMID:17326824
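    The Yellow channel can be computed with the common RGB-to-CMYK conversion; the paper's imaging software may implement the transform differently, so treat this as a sketch:

    ```python
    import numpy as np

    def yellow_channel(rgb):
        """CMYK Yellow channel (0-255) of an RGB uint8 image of shape HxWx3."""
        rgb = rgb.astype(float) / 255.0
        k = 1.0 - rgb.max(axis=2)                   # Key (black) component
        denom = np.where(k < 1.0, 1.0 - k, 1.0)     # guard pure-black pixels
        y = (1.0 - rgb[..., 2] - k) / denom         # Yellow from Blue and Key
        return np.round(255.0 * y).astype(np.uint8)
    ```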

  19. Nontidal Loading Applied in VLBI Geodetic Analysis

    NASA Astrophysics Data System (ADS)

    MacMillan, D. S.

    2015-12-01

    We investigate the application of nontidal atmospheric pressure, hydrology, and ocean loading series in the analysis of VLBI data. The annual amplitude of VLBI scale variation is reduced to less than 0.1 ppb as a result of the annual components of the vertical loading series. VLBI site vertical scatter and baseline length scatter are reduced when these loading models are applied. We operate nontidal loading services for hydrology loading (GLDAS model), atmospheric pressure loading (NCEP), and nontidal ocean loading (JPL ECCO model). As an alternative validation, we compare these loading series with corresponding series generated by other analysis centers.

  20. A Comparative Analysis of Bicycle Lanes Versus Wide Curb Lanes

    DOT National Transportation Integrated Search

    2013-11-01

    Analysis Modeling and Simulation (AMS) Testbeds can make significant contributions in identifying the benefits of more effective, more active systems management, resulting from integrating transformative applications enabled by new data from wireless...

  1. Choice-Based Conjoint Analysis: Classification vs. Discrete Choice Models

    NASA Astrophysics Data System (ADS)

    Giesen, Joachim; Mueller, Klaus; Taneva, Bilyana; Zolliker, Peter

    Conjoint analysis is a family of techniques that originated in psychology and later became popular in market research. The main objective of conjoint analysis is to measure an individual's or a population's preferences on a class of options that can be described by parameters and their levels. We consider preference data obtained in choice-based conjoint analysis studies, where one observes test persons' choices on small subsets of the options. There are many ways to analyze choice-based conjoint analysis data. Here we discuss the intuition behind a classification-based approach, and compare this approach to one based on statistical assumptions (discrete choice models) and to a regression approach. Our comparison on real and synthetic data indicates that the classification approach outperforms the discrete choice models.

  2. Compression After Impact on Honeycomb Core Sandwich Panels with Thin Facesheets, Part 2: Analysis

    NASA Technical Reports Server (NTRS)

    Mcquigg, Thomas D.; Kapania, Rakesh K.; Scotti, Stephen J.; Walker, Sandra P.

    2012-01-01

    A two-part research study has been completed on the topic of compression after impact (CAI) of thin-facesheet honeycomb core sandwich panels. The research has focused on both experiments and analysis in an effort to establish and validate a new understanding of the damage tolerance of these materials. Part 2, the subject of the current paper, is focused on the analysis, which corresponds to the CAI testing described in Part 1. Of interest are sandwich panels, with aerospace applications, which consist of very thin, woven S2-fiberglass (with MTM45-1 epoxy) facesheets adhered to a Nomex honeycomb core. Two sets of materials, which were identical with the exception of the density of the honeycomb core, were tested in Part 1. The results highlighted the need for analysis methods which take into account multiple failure modes. A finite element model (FEM) is developed here, in Part 2. A commercial implementation of the Multicontinuum Failure Theory (MCT) for progressive failure analysis (PFA) in composite laminates, Helius:MCT, is included in this model. The inclusion of PFA in the present model provided a new, unique ability to account for multiple failure modes. In addition, significant impact damage detail is included in the model. A sensitivity study, used to assess the effect of each damage parameter on overall analysis results, is included in an appendix. Analysis results are compared to the experimental results for each of the 32 CAI sandwich panel specimens tested to failure. The failure of each specimen is predicted using the high-fidelity, physics-based analysis model developed here, and the results highlight key improvements in the understanding of honeycomb core sandwich panel CAI failure. Finally, a parametric study highlights the strength benefits compared to the mass penalty for various core densities.

  3. A comparison of multivariate and univariate time series approaches to modelling and forecasting emergency department demand in Western Australia.

    PubMed

    Aboagye-Sarfo, Patrick; Mai, Qun; Sanfilippo, Frank M; Preen, David B; Stewart, Louise M; Fatovich, Daniel M

    2015-10-01

    To develop multivariate vector-ARMA (VARMA) forecast models for predicting emergency department (ED) demand in Western Australia (WA) and compare them to the benchmark univariate autoregressive moving average (ARMA) and Winters' models. Seven years of monthly WA state-wide public hospital ED presentation data from 2006/07 to 2012/13 were modelled. Graphical and VARMA modelling methods were used for descriptive analysis and model fitting. The VARMA models were compared to the benchmark univariate ARMA and Winters' models to determine their accuracy in predicting ED demand. The best models were evaluated using error-correction methods for accuracy. Descriptive analysis of all the dependent variables showed an increasing pattern of ED use with seasonal trends over time. The VARMA models provided a more precise and accurate forecast, with smaller confidence intervals and better measures of accuracy in predicting ED demand in WA, than the ARMA and Winters' models. VARMA models are a reliable forecasting method to predict ED demand for strategic planning and resource allocation. While the ARMA models are a closely competing alternative, they under-estimated future ED demand. Copyright © 2015 Elsevier Inc. All rights reserved.
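    A hedged sketch of such a comparison on synthetic monthly counts, using statsmodels' VARMAX for the multivariate model and ARIMA as the univariate benchmark; the WA data, model orders, and holdout scheme below are illustrative, not those of the paper.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.varmax import VARMAX
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic monthly series standing in for ED presentations in two related
    # categories; seven years of data with the last year held out.
    rng = np.random.default_rng(0)
    t = np.arange(84)
    base = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12)
    df = pd.DataFrame({"cat_a": base + rng.normal(0, 3, 84),
                       "cat_b": 0.6 * base + rng.normal(0, 3, 84)})
    train, test = df.iloc[:72], df.iloc[72:]

    varma = VARMAX(train, order=(1, 1)).fit(disp=False)    # multivariate model
    arma = ARIMA(train["cat_a"], order=(1, 0, 1)).fit()    # univariate benchmark

    def mape(actual, forecast):
        a, f = np.asarray(actual), np.asarray(forecast)
        return float(np.mean(np.abs((a - f) / a)) * 100)

    print("VARMA MAPE:", mape(test["cat_a"], varma.forecast(steps=12)["cat_a"]))
    print("ARMA  MAPE:", mape(test["cat_a"], arma.forecast(steps=12)))
    ```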

  4. Informing policy makers about future health spending: a comparative analysis of forecasting methods in OECD countries.

    PubMed

    Astolfi, Roberto; Lorenzoni, Luca; Oderkirk, Jillian

    2012-09-01

    Concerns about health care expenditure growth and its long-term sustainability have risen to the top of the policy agenda in many OECD countries. As continued growth in spending places pressure on government budgets, health services provision and patients' personal finances, policy makers have launched forecasting projects to support policy planning. This comparative analysis reviewed 25 models that were developed for policy analysis in OECD countries by governments, research agencies, academics and international organisations. We observed that the policy questions that need to be addressed drive the choice of forecasting model and the model's specification. By considering both the level of aggregation of the units analysed and the level of detail of health expenditure to be projected, we identified three classes of models: micro, component-based, and macro. Virtually all models account for demographic shifts in the population, while the two least-understood important influences on health expenditure growth are technological innovation and health-seeking behaviour. The landscape for health forecasting models is dynamic and evolving. Advances in computing technology and increases in data granularity are opening up new possibilities for the generation of systems of models that become an ongoing decision support tool capable of adapting to new questions as they arise. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  5. Bayesian Poisson hierarchical models for crash data analysis: Investigating the impact of model choice on site-specific predictions.

    PubMed

    Khazraee, S Hadi; Johnson, Valen; Lord, Dominique

    2018-08-01

    The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., that an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family of models: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in order, indicating a clear link between the models' predictions and the shape of their mixing distributions (i.e., gamma, lognormal, and inverse gamma, respectively). The thicker tail of the PIGam and PLN models (in order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models; models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients). Copyright © 2018. Published by Elsevier Ltd.
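    The link between predictions and mixing-distribution shape can be illustrated by simulating the three Poisson mixtures and inspecting their tails; all parameter values below are arbitrary, chosen only so each mixture has the same mean rate.

    ```python
    import numpy as np

    # Simulate crash counts under gamma, lognormal, and inverse-gamma mixing of
    # the Poisson rate (the PG, PLN, and PIGam structures) and compare tails.
    rng = np.random.default_rng(0)
    n, mean = 1_000_000, 2.0

    lam_pg = rng.gamma(shape=2.0, scale=mean / 2.0, size=n)
    sigma = 0.8
    lam_pln = rng.lognormal(np.log(mean) - sigma**2 / 2, sigma, size=n)
    a = 3.0  # inverse-gamma: if G ~ Gamma(a, scale=s), 1/G has mean 1/(s*(a-1))
    lam_pig = 1.0 / rng.gamma(shape=a, scale=1.0 / (mean * (a - 1)), size=n)

    for name, lam in [("PG", lam_pg), ("PLN", lam_pln), ("PIGam", lam_pig)]:
        y = rng.poisson(lam)
        print(name, "P(Y >= 15) =", np.mean(y >= 15))  # heaviest tail: PIGam
    ```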

  6. Python package for model STructure ANalysis (pySTAN)

    NASA Astrophysics Data System (ADS)

    Van Hoey, Stijn; van der Kwast, Johannes; Nopens, Ingmar; Seuntjens, Piet

    2013-04-01

    The selection and identification of a suitable hydrological model structure is more than fitting parameters of a model structure to reproduce a measured hydrograph. The procedure is highly dependent on various criteria, i.e., the modelling objective, the characteristics and the scale of the system under investigation, as well as the available data. Rigorous analysis of the candidate model structures is needed to support and objectify the selection of the most appropriate structure for a specific case (or eventually justify the use of a proposed ensemble of structures). This holds both in the situation of choosing between a limited set of different structures as well as in the framework of flexible model structures with interchangeable components. Many different methods to evaluate and analyse model structures exist. This leads to a sprawl of available methods, all characterized by different assumptions, changing conditions of application and various code implementations. Methods typically focus on optimization, sensitivity analysis or uncertainty analysis, with backgrounds from optimization, machine-learning or statistics amongst others. These methods also need an evaluation metric (objective function) to compare the model outcome with some observed data. However, for current methods described in literature, implementations are not always transparent and reproducible (if available at all). No standard procedures exist to share code, and the popularity (and number of applications) of the methods is sometimes more dependent on their availability than on their merits. Moreover, new implementations of existing methods are difficult to verify, and the different theoretical backgrounds make it difficult for environmental scientists to decide about the usefulness of a specific method. A common and open framework with a large set of methods can support users in deciding about the most appropriate method. Hence, it enables users to apply and compare different methods simultaneously on a fair basis. We developed and present pySTAN (python framework for STructure ANalysis), a python package containing a set of functions for model structure evaluation to provide the analysis of (hydrological) model structures. A selected set of algorithms for optimization, uncertainty and sensitivity analysis is currently available, together with a set of evaluation (objective) functions and input distributions to sample from. The methods are implemented in a model-independent way, and the python language provides the wrapper functions to administer external model codes. Different objective functions can be considered simultaneously, with both statistical metrics and more hydrology-specific metrics. By using so-called reStructuredText (sphinx documentation generator) and Python documentation strings (docstrings), the generation of manual pages is semi-automated and a specific environment is available to enhance both the readability and transparency of the code. It thereby enables a larger group of users to apply and compare these methods and to extend the functionalities.
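    As one example of the kind of evaluation (objective) function such a framework bundles, here is a self-contained Nash-Sutcliffe efficiency in Python; this is an illustration, not pySTAN's actual API.

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 matches the
        benchmark of always predicting the observed mean."""
        o, s = np.asarray(observed, float), np.asarray(simulated, float)
        return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

    # e.g., nash_sutcliffe(measured_hydrograph, modelled_hydrograph)
    ```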

  7. Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis.

    PubMed

    Henley, B C; Shin, D C; Zhang, R; Marmarelis, V Z

    Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results.

  8. Transient analysis of a superconducting AC generator using the compensated 2-D model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chun, Y.D.; Lee, H.W.; Lee, J.

    1999-09-01

    A superconducting AC generator (SCG) has many advantages over conventional generators, such as reduction in width and size, improvement in efficiency, and better steady-state stability. The paper presents a 2-D transient analysis of an SCG using the finite element method (FEM). The compensated 2-D model, obtained by lengthening the airgap of the original 2-D model, is proposed for accurate and efficient transient analysis. The accuracy of the compensated 2-D model is verified by a small error of 6.4% relative to experimental data. The transient characteristics of the 30 kVA SCG model have been investigated in detail, and the damper performance for various design parameters is examined.

  9. Analysis of Magnitude Correlations in a Self-Similar model of Seismicity

    NASA Astrophysics Data System (ADS)

    Zambrano, A.; Joern, D.

    2017-12-01

    A recent model of seismicity that incorporates a self-similar Omori-Utsu relation, which is used to describe the temporal evolution of earthquake triggering, has been shown to provide a more accurate description of seismicity in Southern California when compared to epidemic type aftershock sequence models. Forecasting of earthquakes is an active research area where one of the debated points is whether magnitude correlations of earthquakes exist within real world seismic data. Prior to this work, the analysis of magnitude correlations of the aforementioned self-similar model had not been addressed. Here we present statistical properties of the magnitude correlations for the self-similar model along with an analytical analysis of the branching ratio and criticality parameters.

  10. Theory of the lattice Boltzmann Method: Dispersion, Dissipation, Isotropy, Galilean Invariance, and Stability

    NASA Technical Reports Server (NTRS)

    Lallemand, Pierre; Luo, Li-Shi

    2000-01-01

    The generalized hydrodynamics (the wave vector dependence of the transport coefficients) of a generalized lattice Boltzmann equation (LBE) is studied in detail. The generalized lattice Boltzmann equation is constructed in moment space rather than in discrete velocity space. The generalized hydrodynamics of the model is obtained by solving the dispersion equation of the linearized LBE either analytically by using perturbation technique or numerically. The proposed LBE model has a maximum number of adjustable parameters for the given set of discrete velocities. Generalized hydrodynamics characterizes dispersion, dissipation (hyper-viscosities), anisotropy, and lack of Galilean invariance of the model, and can be applied to select the values of the adjustable parameters which optimize the properties of the model. The proposed generalized hydrodynamic analysis also provides some insights into stability and proper initial conditions for LBE simulations. The stability properties of some 2D LBE models are analyzed and compared with each other in the parameter space of the mean streaming velocity and the viscous relaxation time. The procedure described in this work can be applied to analyze other LBE models. As examples, LBE models with various interpolation schemes are analyzed. Numerical results on shear flow with an initially discontinuous velocity profile (shock) with or without a constant streaming velocity are shown to demonstrate the dispersion effects in the LBE model; the results compare favorably with our theoretical analysis. We also show that whereas linear analysis of the LBE evolution operator is equivalent to Chapman-Enskog analysis in the long wave-length limit (wave vector k = 0), it can also provide results for large values of k. Such results are important for the stability and other hydrodynamic properties of the LBE method and cannot be obtained through Chapman-Enskog analysis.

  11. Predicting nonstationary flood frequencies: Evidence supports an updated stationarity thesis in the United States

    NASA Astrophysics Data System (ADS)

    Luke, Adam; Vrugt, Jasper A.; AghaKouchak, Amir; Matthew, Richard; Sanders, Brett F.

    2017-07-01

    Nonstationary extreme value analysis (NEVA) can improve the statistical representation of observed flood peak distributions compared to stationary (ST) analysis, but management of flood risk relies on predictions of out-of-sample distributions for which NEVA has not been comprehensively evaluated. In this study, we apply split-sample testing to 1250 annual maximum discharge records in the United States and compare the predictive capabilities of NEVA relative to ST extreme value analysis using a log-Pearson Type III (LPIII) distribution. The parameters of the LPIII distribution in the ST and nonstationary (NS) models are estimated from the first half of each record using Bayesian inference. The second half of each record is reserved to evaluate the predictions under the ST and NS models. The NS model is applied for prediction by (1) extrapolating the trend of the NS model parameters throughout the evaluation period and (2) using the NS model parameter values at the end of the fitting period to predict with an updated ST model (uST). Our analysis shows that the ST predictions are preferred, overall. NS model parameter extrapolation is rarely preferred. However, if fitting period discharges are influenced by physical changes in the watershed, for example from anthropogenic activity, the uST model is strongly preferred relative to ST and NS predictions. The uST model is therefore recommended for evaluation of current flood risk in watersheds that have undergone physical changes. Supporting information includes a MATLAB® program that estimates the (ST/NS/uST) LPIII parameters from annual peak discharge data through Bayesian inference.
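    A stationary LPIII fit can be sketched by fitting a Pearson Type III distribution to log-transformed annual peaks and inverting for a design quantile. The data below are synthetic, and the paper itself estimates the parameters by Bayesian inference rather than the direct fit shown here.

    ```python
    import numpy as np
    from scipy import stats

    # Fit a Pearson Type III distribution to log-transformed annual peaks and
    # invert for the 100-year quantile.
    rng = np.random.default_rng(0)
    peaks = rng.lognormal(mean=6.0, sigma=0.5, size=60)   # hypothetical peaks

    skew, loc, scale = stats.pearson3.fit(np.log10(peaks))
    q100 = 10 ** stats.pearson3.ppf(1 - 1 / 100, skew, loc=loc, scale=scale)
    print(f"estimated 100-year flood: {q100:,.0f}")
    ```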

  12. MDR-TB patients in KwaZulu-Natal, South Africa: Cost-effectiveness of 5 models of care

    PubMed Central

    Wallengren, Kristina; Reddy, Tarylee; Besada, Donela; Brust, James C. M.; Voce, Anna; Desai, Harsha; Ngozo, Jacqueline; Radebe, Zanele; Master, Iqbal; Padayatchi, Nesri; Daviaud, Emmanuelle

    2018-01-01

    Background South Africa has a high burden of MDR-TB, and to provide accessible treatment the government has introduced different models of care. We report the most cost-effective model after comparing cost per patient successfully treated across 5 models of care: centralized hospital, district hospitals (2), and community-based care through clinics or mobile injection teams. Methods In an observational study, five cohorts were followed prospectively. The cost analysis adopted a provider perspective, and economic cost per patient successfully treated was calculated based on country protocols and length of treatment per patient per model of care. Logistic regression was used to calculate propensity score weights to compare pairs of treatment groups while adjusting for baseline imbalances between groups. Propensity score weighted costs and treatment success rates were used in the ICER analysis. Sensitivity analysis focused on varying treatment success and length of hospitalization within each model. Results Of 1,038 MDR-TB patients, 75% were HIV-infected and 56% were successfully treated. The cost per successfully treated patient was 3 to 4.5 times lower in the community-based models with no hospitalization. Overall, the Mobile model was the most cost-effective. Conclusion Reducing the length of hospitalization and following community-based models of care improves the affordability of MDR-TB treatment without compromising its effectiveness. PMID:29668748
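    A minimal sketch of the propensity-score weighting described in the Methods, with made-up covariates (the study's actual baseline variables are not listed in the abstract):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Made-up baseline covariates and a binary model-of-care indicator.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    treated = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))

    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
    weights = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    # 'weights' would then rebalance costs and success rates when comparing
    # pairs of treatment groups, as in the ICER analysis described above.
    ```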

  13. A single factor underlies the metabolic syndrome: a confirmatory factor analysis.

    PubMed

    Pladevall, Manel; Singal, Bonita; Williams, L Keoki; Brotons, Carlos; Guyer, Heidi; Sadurni, Josep; Falces, Carles; Serrano-Rios, Manuel; Gabriel, Rafael; Shaw, Jonathan E; Zimmet, Paul Z; Haffner, Steven

    2006-01-01

    Confirmatory factor analysis (CFA) was used to test the hypothesis that the components of the metabolic syndrome are manifestations of a single common factor. Three different datasets were used to test and validate the model. The Spanish and Mauritian studies included 207 men and 203 women, and 1,411 men and 1,650 women, respectively. A third analytical dataset including 847 men was obtained from a previously published CFA of a U.S. population. The one-factor model included the metabolic syndrome core components (central obesity, insulin resistance, blood pressure, and lipid measurements). We also tested an expanded one-factor model that included uric acid and leptin levels. Finally, we used CFA to compare the goodness of fit of one-factor models with that of two previously published four-factor models. The simplest one-factor model showed the best goodness-of-fit indexes (comparative fit index = 1, root mean-square error of approximation = 0.00). Comparisons of one-factor with four-factor models in the three datasets favored the one-factor model structure. The selection of variables to represent the different metabolic syndrome components and model specification explained why previous exploratory and confirmatory factor analyses, respectively, failed to identify a single factor for the metabolic syndrome. These analyses support the current clinical definition of the metabolic syndrome, as well as the existence of a single factor that links all of the core components.

  14. Clinical and multiple gene expression variables in survival analysis of breast cancer: Analysis with the hypertabastic survival model

    PubMed Central

    2012-01-01

    Background We explore the benefits of applying a new proportional hazard model to analyze survival of breast cancer patients. As a parametric model, the hypertabastic survival model offers a closer fit to experimental data than Cox regression, and furthermore provides explicit survival and hazard functions which can be used as additional tools in the survival analysis. In addition, one of our main concerns is utilization of multiple gene expression variables. Our analysis treats the important issue of interaction of different gene signatures in the survival analysis. Methods The hypertabastic proportional hazards model was applied in survival analysis of breast cancer patients. This model was compared, using statistical measures of goodness of fit, with models based on the semi-parametric Cox proportional hazards model and the parametric log-logistic and Weibull models. The explicit functions for hazard and survival were then used to analyze the dynamic behavior of hazard and survival functions. Results The hypertabastic model provided the best fit among all the models considered. Use of multiple gene expression variables also provided a considerable improvement in the goodness of fit of the model, as compared to use of only one. By utilizing the explicit survival and hazard functions provided by the model, we were able to determine the magnitude of the maximum rate of increase in hazard, and the maximum rate of decrease in survival, as well as the times when these occurred. We explore the influence of each gene expression variable on these extrema. Furthermore, in the cases of continuous gene expression variables, represented by a measure of correlation, we were able to investigate the dynamics with respect to changes in gene expression. Conclusions We observed that use of three different gene signatures in the model provided a greater combined effect and allowed us to assess the relative importance of each in determination of outcome in this data set. These results point to the potential to combine gene signatures to a greater effect in cases where each gene signature represents some distinct aspect of the cancer biology. Furthermore we conclude that the hypertabastic survival models can be an effective survival analysis tool for breast cancer patients. PMID:23241496

  15. FAST Mast Structural Response to Axial Loading: Modeling and Verification

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Elliott, Kenny B.; Templeton, Justin D.; Song, Kyongchan; Rayburn, Jeffery T.

    2012-01-01

    The International Space Station's solar array wing mast shadowing problem is the focus of this paper. A building-block approach to modeling and analysis is pursued for the primary structural components of the solar array wing mast structure. Starting with an ANSYS (Registered Trademark) finite element model, a verified MSC.Nastran (Trademark) model is established for a single longeron. This finite element model translation requires the conversion of several modeling and analysis features for the two structural analysis tools to produce comparable results for the single-longeron configuration. The model is then reconciled using test data. The resulting MSC.Nastran (Trademark) model is then extended to a single-bay configuration and verified using single-bay test data. Conversion of the MSC.Nastran (Trademark) single-bay model to Abaqus (Trademark) is also performed to simulate the elastic-plastic longeron buckling response of the single bay prior to folding.

  16. Advancing Ecological Models to Compare Scale in Multi-Level Educational Change

    ERIC Educational Resources Information Center

    Woo, David James

    2016-01-01

    Education systems as units of analysis have been metaphorically likened to ecologies to model change. However, ecological models to date have been ineffective in modelling educational change that is multi-scale and occurs across multiple levels of an education system. Thus, this paper advances two innovative, ecological frameworks that improve on…

  17. An Objective Verification of the North American Mesoscale Model for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The 45th Weather Squadron (45 WS) Launch Weather Officers use the 12-km resolution North American Mesoscale (NAM) model (MesoNAM) text and graphical product forecasts extensively to support launch weather operations. However, the actual performance of the model at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) has not been measured objectively. In order to have tangible evidence of model performance, the 45 WS tasked the Applied Meteorology Unit to conduct a detailed statistical analysis of model output compared to observed values. The model products are provided to the 45 WS by ACTA, Inc. and include hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The objective analysis compared the MesoNAM forecast winds, temperature and dew point, as well as the changes in these parameters over time, to the observed values from the sensors in the KSC/CCAFS wind tower network. Objective statistics will give the forecasters knowledge of the model's strengths and weaknesses, which will result in improved forecasts for operations.

  18. A segmentation/clustering model for the analysis of array CGH data.

    PubMed

    Picard, F; Robin, S; Lebarbier, E; Daudin, J-J

    2007-09-01

    Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.

  19. Impact of assimilation of INSAT cloud motion vector (CMV) wind for the prediction of a monsoon depression over Indian Ocean using a mesoscale model

    NASA Astrophysics Data System (ADS)

    Xavier, V. F.; Chandrasekar, A.; Singh, Devendra

    2006-12-01

    The present study utilized the Penn State/NCAR mesoscale model (MM5) to assimilate the INSAT-CMV (Indian National Satellite System-Cloud Motion Vector) wind observations using analysis nudging to improve the prediction of a monsoon depression which occurred over the Arabian Sea, India, from 14 to 17 September 2005. NCEP-FNL analysis was utilized as the initial and lateral boundary conditions, and two sets of numerical experiments were designed to reveal the impact of assimilation of satellite-derived winds. The model was integrated from 14 September 2005 00 UTC to 17 September 2005 00 UTC, with just the NCEP FNL analysis in the NOFDDA run. In the FDDA run, the NCEP FNL analysis fields were improved by assimilating the INSAT-CMV (wind speed and wind direction) as well as QuikSCAT sea surface winds during the 24-hour pre-forecast period (14 September 2005 00 UTC to 15 September 2005 00 UTC) using analysis nudging. The model was subsequently run in the free forecast mode from 15 September 2005 00 UTC to 17 September 2005 12 UTC. The simulated sea level pressure field from the NOFDDA run reveals a relatively stronger system compared to the FDDA run. However, the sea level pressure fields corresponding to the FDDA run are closer to the analysis. The simulated lower tropospheric winds from both experiments reveal a well-developed cyclonic circulation compared to the analysis.

  20. Validating the WRF-Chem model for wind energy applications using High Resolution Doppler Lidar data from a Utah 2012 field campaign

    NASA Astrophysics Data System (ADS)

    Mitchell, M. J.; Pichugina, Y. L.; Banta, R. M.

    2015-12-01

    Models are important tools for assessing the potential of wind energy sites, but the accuracy of these projections has not been properly validated. In this study, High Resolution Doppler Lidar (HRDL) data obtained with high temporal and spatial resolution at heights of modern turbine rotors were compared to output from the WRF-Chem model in order to help improve the performance of the model in producing accurate wind forecasts for the industry. HRDL data were collected from January 23-March 1, 2012 during the Uintah Basin Winter Ozone Study (UBWOS) field campaign. The model validation method was based on qualitative comparison of the wind field images, time-series analysis and statistical analysis of the observed and modeled wind speed and direction, both for case studies and for the whole experiment. To compare the WRF-Chem model output to the HRDL observations, the model heights and forecast times were interpolated to match the observed times and heights. Then, time-height cross-sections of the HRDL and WRF-Chem wind speed and directions were plotted to select case studies. Cross-sections of the differences between the observed and forecasted wind speed and directions were also plotted to visually analyze the model performance in different wind flow conditions. The statistical analysis includes the calculation of vertical profiles and time series of bias, correlation coefficient, root mean squared error, and coefficient of determination between the two datasets. The results from this analysis reveal where and when the model typically struggles in forecasting winds at heights of modern turbine rotors so that in the future the model can be improved for the industry.

  1. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins

    PubMed Central

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    Aim: This study was conducted to find the best-suited freely available software for modelling of proteins by taking a few sample proteins. The proteins used were small to large in size, with available crystal structures for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-V2, Modweb were used for the comparison and model generation. Results: The benchmarking process was done for four proteins (Icl, InhA, and KatG of Mycobacterium tuberculosis, and RpoB of Thermus thermophilus) to get the most suited software. Parameters compared during analysis gave relatively better values for Phyre2 and Swiss-Model. Conclusion: This comparative study gave the information that Phyre2 and Swiss-Model make good models of small and large proteins as compared to other screened software. Other software also performed well but was often less efficient at providing full-length, properly folded structures. PMID:24023424

  2. A more accurate analysis and design of coaxial-to-rectangular waveguide end launcher

    NASA Astrophysics Data System (ADS)

    Saad, Saad Michael

    1990-02-01

    An electromagnetic model is developed for the analysis of the coaxial-to-rectangular waveguide transition of the end-launcher type. The model describes the coupling mechanism in terms of an excitation probe which is fed by a transmission line intermediate section. The model is compared with a coupling loop model. The two models have a few analytical steps in common, but expressions for the probe model are easier to derive and compute. The two models are presented together with numerical examples and experimental verification. The superiority of the probe model is illustrated, and a design method yielding a maximum voltage standing wave ratio of 1.035 over 13 percent bandwidth is outlined.

  3. Support vector machine in crash prediction at the level of traffic analysis zones: Assessing the spatial proximity effects.

    PubMed

    Dong, Ni; Huang, Helai; Zheng, Liang

    2015-09-01

    In zone-level crash prediction, accounting for spatial dependence has become an extensively studied topic. This study proposes a Support Vector Machine (SVM) model to address complex, large and multi-dimensional spatial data in crash prediction. A Correlation-based Feature Selector (CFS) was applied to evaluate candidate factors possibly related to zonal crash frequency when handling high-dimensional spatial data. To demonstrate the proposed approaches and to compare them with the Bayesian spatial model with conditional autoregressive prior (i.e., CAR), a dataset from Hillsborough County, Florida was employed. The results showed that SVM models accounting for spatial proximity outperform the non-spatial model in terms of model fitting and predictive performance, which indicates the reasonableness of considering cross-zonal spatial correlations. The best relative predictive capability is associated with the model that considers centroid-distance proximity, uses the RBF kernel, and sets 10% of the whole dataset as the testing data, which further exhibits SVM models' capacity for addressing comparatively complex spatial data in regional crash prediction modeling. Moreover, SVM models exhibit better goodness-of-fit than CAR models when the whole dataset is used as the sample. A sensitivity analysis of the centroid-distance-based spatial SVM models was conducted to capture the impacts of explanatory variables on the mean predicted probabilities for crash occurrence. The results conform to the coefficient estimates in the CAR models, which supports the employment of the SVM model as an alternative in regional safety modeling. Copyright © 2015 Elsevier Ltd. All rights reserved.
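    A hedged sketch of the modeling setup: an RBF-kernel support vector machine regressing zonal crash frequency on exposure variables plus a centroid-distance proximity feature, with 10% of the data held out as in the paper. Variable names and data are hypothetical, not the Hillsborough County dataset.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Hypothetical zonal data: exposure variables plus a centroid-distance
    # spatial proximity feature.
    rng = np.random.default_rng(0)
    n = 300
    X = np.column_stack([rng.random(n) * 50,    # zonal traffic exposure
                         rng.random(n) * 10,    # intersection density
                         rng.random(n) * 20])   # centroid distance to neighbours
    y = 0.5 * X[:, 0] + 2.0 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 3, n)

    split = int(0.9 * n)   # hold out 10% of zones for testing
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    model.fit(X[:split], y[:split])
    print("held-out R^2:", model.score(X[split:], y[split:]))
    ```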

  4. Particle simulation of Coulomb collisions: Comparing the methods of Takizuka and Abe and Nanbu

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Chiaming; Lin, Tungyou; Caflisch, Russel

    2008-04-20

    The interactions of charged particles in a plasma are governed by long-range Coulomb collision. We compare two widely used Monte Carlo models for Coulomb collisions. One was developed by Takizuka and Abe in 1977, the other was developed by Nanbu in 1997. We perform deterministic and statistical error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time step errors. Error comparisons between these two methods are presented.

  5. A Lumped Computational Model for Sodium Sulfur Battery Analysis

    NASA Astrophysics Data System (ADS)

    Wu, Fan

    Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy and charge throughout the battery has been developed. The computation processes are coupled with the use of Faraday's law, and solutions for the species concentrations, electrical potential and current are produced in a time-marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational model and electrochemical model used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.

  6. Two new methods to fit models for network meta-analysis with random inconsistency effects.

    PubMed

    Law, Martin; Jackson, Dan; Turner, Rebecca; Rhodes, Kirsty; Viechtbauer, Wolfgang

    2016-07-28

    Meta-analysis is a valuable tool for combining evidence from multiple studies. Network meta-analysis is becoming more widely used as a means to compare multiple treatments in the same analysis. However, a network meta-analysis may exhibit inconsistency, whereby the treatment effect estimates do not agree across all trial designs, even after taking between-study heterogeneity into account. We propose two new estimation methods for network meta-analysis models with random inconsistency effects. The model we consider is an extension of the conventional random-effects model for meta-analysis to the network meta-analysis setting and allows for potential inconsistency using random inconsistency effects. Our first new estimation method uses a Bayesian framework with empirically-based prior distributions for both the heterogeneity and the inconsistency variances. We fit the model using importance sampling and thereby avoid some of the difficulties that might be associated with using Markov Chain Monte Carlo (MCMC). However, we confirm the accuracy of our importance sampling method by comparing the results to those obtained using MCMC as the gold standard. The second new estimation method we describe uses a likelihood-based approach, implemented in the metafor package, which can be used to obtain (restricted) maximum-likelihood estimates of the model parameters and profile likelihood confidence intervals of the variance components. We illustrate the application of the methods using two contrasting examples. The first uses all-cause mortality as an outcome, and shows little evidence of between-study heterogeneity or inconsistency. The second uses "ear discharge" as an outcome, and exhibits substantial between-study heterogeneity and inconsistency. Both new estimation methods give results similar to those obtained using MCMC. The extent of heterogeneity and inconsistency should be assessed and reported in any network meta-analysis. Our two new methods can be used to fit models for network meta-analysis with random inconsistency effects. They are easily implemented using the accompanying R code in Additional file 1. Using these estimation methods, the extent of inconsistency can be assessed and reported.
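    For orientation, the conventional random-effects core that the paper extends can be written compactly; the DerSimonian-Laird moment estimator below is a standard (non-Bayesian) alternative to the two proposed estimation methods, shown only to fix ideas, with hypothetical study effects.

    ```python
    import numpy as np

    def dersimonian_laird(effects, variances):
        """Random-effects pooled estimate via the DerSimonian-Laird
        moment estimator of the between-study variance."""
        y, v = np.asarray(effects, float), np.asarray(variances, float)
        w = 1.0 / v
        fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - fixed) ** 2)             # Cochran's Q
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
        w_star = 1.0 / (v + tau2)
        pooled = np.sum(w_star * y) / np.sum(w_star)
        return pooled, np.sqrt(1.0 / np.sum(w_star)), tau2

    # Three hypothetical study effects (e.g., log odds ratios) and variances
    print(dersimonian_laird([0.2, 0.5, -0.1], [0.04, 0.09, 0.05]))
    ```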

  7. Sherrington's Model of Successive Induction for Comparative Analysis of Zebrafish Motor Response

    EPA Science Inventory

    The responses in motor activity of zebrafish to sudden changes in lighting conditions may be modeled by Sherrington's model of successive induction. Fish left in the dark exhibit very little motion; when exposed to light, zebrafish motion increases towards an apparent horizo...

  8. Diagnostic Analysis of Ozone Concentrations Simulated by Two Regional-Scale Air Quality Models

    EPA Science Inventory

    Since the Community Multiscale Air Quality modeling system (CMAQ) and the Weather Research and Forecasting with Chemistry model (WRF/Chem) use different approaches to simulate the interaction of meteorology and chemistry, this study compares the CMAQ and WRF/Chem air quality simu...

  9. HYDROLOGIC MODEL UNCERTAINTY ASSOCIATED WITH SIMULATING FUTURE LAND-COVER/USE SCENARIOS: A RETROSPECTIVE ANALYSIS

    EPA Science Inventory

    GIS-based hydrologic modeling offers a convenient means of assessing the impacts associated with land-cover/use change for environmental planning efforts. Alternative future scenarios can be used as input to hydrologic models and compared with existing conditions to evaluate pot...

  10. Wellness Model of Supervision: A Preliminary Analysis

    ERIC Educational Resources Information Center

    Lenz, Alan Stephen, Jr.

    2011-01-01

    This study compared the effectiveness of the Wellness Model of Supervision (WELMS; Lenz & Smith, 2010) against other models of supervision for developing the wellness constructs, total personal wellness, and helping skills among CITs. Participants were 44 master's-level Caucasian counseling students (9 men) completing their practicum and…

  11. Linearised and non-linearised isotherm models optimization analysis by error functions and statistical means

    PubMed Central

    2014-01-01

    In adsorption studies, describing the sorption process and identifying the best-fitting isotherm model are key steps in testing the theoretical hypothesis. Numerous statistical analyses have been used to compare experimental equilibrium adsorption values with predicted equilibrium values. In the present study, several statistical error analyses were carried out to evaluate the fitness of adsorption isotherm models: the Pearson correlation, the coefficient of determination and the Chi-square test. An ANOVA test was carried out to evaluate the significance of the various error functions, and the coefficient of dispersion was evaluated for linearised and non-linearised models. The adsorption of phenol onto natural soil (local name: Kalathur soil) was carried out in batch mode at 30 ± 2°C. To estimate the isotherm parameters and obtain a holistic view of the analysis, linear and non-linear isotherm models were compared. The results revealed which of the above-mentioned error and statistical functions best determined the fitting isotherm. PMID:25018878
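
    As a concrete illustration of the linear-versus-non-linear comparison, the hedged Python sketch below fits the Langmuir isotherm both ways to hypothetical equilibrium data and evaluates two of the error functions named above (the chi-square statistic and the coefficient of determination); all data and parameter values are invented for illustration.

```python
# Langmuir isotherm fitted two ways: non-linear least squares on the
# original form, and linear regression on the linearised form
# Ce/qe = Ce/qm + 1/(qm*KL). Hypothetical data (Ce in mg/L, qe in mg/g).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
qe = np.array([1.8, 3.1, 4.9, 6.8, 8.2, 9.0])

def langmuir(Ce, qm, KL):
    return qm * KL * Ce / (1.0 + KL * Ce)

# Non-linear fit
(qm_nl, KL_nl), _ = curve_fit(langmuir, Ce, qe, p0=[10.0, 0.5])

# Linearised fit: slope = 1/qm, intercept = 1/(qm*KL)
fit = linregress(Ce, Ce / qe)
qm_lin, KL_lin = 1.0 / fit.slope, fit.slope / fit.intercept

# Error functions used to judge the fits
for label, (qm, KL) in [("non-linear", (qm_nl, KL_nl)),
                        ("linearised", (qm_lin, KL_lin))]:
    pred = langmuir(Ce, qm, KL)
    chi2 = np.sum((qe - pred) ** 2 / pred)
    r2 = 1.0 - np.sum((qe - pred) ** 2) / np.sum((qe - qe.mean()) ** 2)
    print(f"{label}: qm={qm:.2f}, KL={KL:.3f}, chi2={chi2:.4f}, R2={r2:.4f}")
```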

  12. A comparative appraisal of hydrological behavior of SRTM DEM at catchment level

    NASA Astrophysics Data System (ADS)

    Sharma, Arabinda; Tiwari, K. N.

    2014-11-01

    The Shuttle Radar Topography Mission (SRTM) data have emerged as a global elevation dataset over the past decade because of their free availability, homogeneity and consistent accuracy compared to other global elevation datasets. The present study explores the consistency in hydrological behavior of the SRTM digital elevation model (DEM) with reference to an easily available regional 20 m contour-interpolated DEM (TOPO DEM). Analyses ranging from simple vertical accuracy assessment to hydrological simulation of the studied Maithon catchment, using the empirical USLE model and the semidistributed, physical SWAT model, were carried out. Moreover, terrain analysis involving hydrological indices was performed for comparative assessment of the SRTM DEM with respect to the TOPO DEM. Results reveal that the vertical accuracy of the SRTM DEM (±27.58 m) in the region is lower than the specified standard (±16 m). Statistical analysis of hydrological indices such as the topographic wetness index (TWI), stream power index (SPI), slope length factor (SLF) and geometry number (GN) shows significant differences in the hydrological properties of the two studied DEMs. Estimation of the soil erosion potential of the catchment and of the conservation priorities of its microwatersheds using the SRTM DEM and the TOPO DEM produces considerably different results. Predicted soil erosion potential using the SRTM DEM is far higher than that obtained using the TOPO DEM. Similarly, conservation priorities determined using the two DEMs agree for only 34% of the microwatersheds of the catchment. ArcSWAT simulation reveals that runoff predictions are less sensitive to the choice between the two DEMs than sediment yield predictions. The results obtained in the present study are vital to hydrological analysis as they help in understanding the hydrological behavior of a DEM without being influenced by model structural or parameter uncertainty. They also reemphasize that the SRTM DEM can be a valuable dataset for hydrological analysis provided any error/uncertainty therein is properly evaluated and characterized.

  13. Population Pharmacokinetic and Pharmacodynamic Model-Based Comparability Assessment of a Recombinant Human Epoetin Alfa and the Biosimilar HX575

    PubMed Central

    Yan, Xiaoyu; Lowe, Philip J.; Fink, Martin; Berghout, Alexander; Balser, Sigrid; Krzyzanski, Wojciech

    2012-01-01

    The aim of this study was to develop an integrated pharmacokinetic and pharmacodynamic (PK/PD) model and assess the comparability between epoetin alfa HEXAL/Binocrit (HX575) and a comparator epoetin alfa by a model-based approach. PK/PD data—including serum drug concentrations, reticulocyte counts, red blood cells, and hemoglobin levels—were obtained from 2 clinical studies. In total, 149 healthy men received multiple intravenous or subcutaneous doses of HX575 (100 IU/kg) and the comparator 3 times a week for 4 weeks. A population model based on pharmacodynamics-mediated drug disposition and cell maturation processes was used to characterize the PK/PD data for the 2 drugs. Simulations showed that due to target amount changes, total clearance may increase up to 2.4-fold as compared with the baseline. Further simulations suggested that once-weekly and thrice-weekly subcutaneous dosing regimens would result in similar efficacy. The findings from the model-based analysis were consistent with previous results using the standard noncompartmental approach, demonstrating PK/PD comparability between HX575 and the comparator. However, due to the complexity of the PK/PD model, control of random effects was not straightforward. Whereas population PK/PD model-based analyses are well suited for studying complex biological systems, such models have their (statistical) limitations, and their comparability results should be interpreted carefully. PMID:22162538

  14. Linear and Nonlinear Analysis of Magnetic Bearing Bandwidth Due to Eddy Current Limitations

    NASA Technical Reports Server (NTRS)

    Kenny, Andrew; Palazzolo, Alan

    2000-01-01

    Finite element analysis was used to study the bandwidth of Hiperco 50A alloy and silicon-iron laminated rotors and stators in magnetic bearings. A three-dimensional model was made of a heteropolar bearing in which all the flux circulated in the plane of the rotor and stator laminate. A three-dimensional model of a plate, similar to the region of a pole near the gap, was also studied with a very fine mesh. Nonlinear time-transient solutions for the net flux carried by the plate were compared to steady-state time-harmonic solutions. Both linear and quasi-nonlinear steady-state time-harmonic solutions were calculated and compared. The finite element solutions for power loss and flux bandwidth were compared to those determined from classical analytical solutions to Maxwell's equations.

  15. Data warehouse model design technology analysis and research

    NASA Astrophysics Data System (ADS)

    Jiang, Wenhua; Li, Qingshui

    2012-01-01

    Existing data storage formats cannot meet the needs of information analysis, which is why data warehousing has come onto the historical stage: a data warehouse is a data collection specially designed to support business decision making. With a data warehouse, a company stores all of its collected information in one repository, organized so that the information is easy to access and has value. This paper focuses on the establishment and analysis of data warehouses, discusses data warehouse design and two modeling approaches, and compares them.

  16. Intelligent Decisions Need Intelligent Choice of Models and Data - a Bayesian Justifiability Analysis for Models with Vastly Different Complexity

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.

    2016-12-01

    Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
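
    The core of the justifiability analysis can be mimicked with a toy example: candidate models of increasing complexity generate synthetic data in turn, and Bayesian model averaging weights are computed for every candidate on every synthetic data set. The Python sketch below is a hypothetical, much-simplified stand-in (polynomial models, BMA weights approximated via BIC), not the authors' hydrogeological setup.

```python
# Toy "model confusion matrix": rows = generating model, columns = mean
# BMA weight assigned to each candidate. BIC approximates the model
# evidence; all settings are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 15)
true_coefs = {0: [0.5], 1: [0.5, 1.0], 2: [0.5, 1.0, -2.0]}  # ascending order
sigma, degrees, n_rep = 0.3, [0, 1, 2], 200

def bma_weights(y):
    bics = []
    for d in degrees:
        c = np.polyfit(x, y, d)
        rss = np.sum((y - np.polyval(c, x)) ** 2)
        n, k = len(y), d + 1
        bics.append(n * np.log(rss / n) + k * np.log(n))
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))
    return w / w.sum()

confusion = np.zeros((len(degrees), len(degrees)))
for i, d_gen in enumerate(degrees):
    c = true_coefs[d_gen][::-1]           # polyval wants highest degree first
    for _ in range(n_rep):
        y = np.polyval(c, x) + sigma * rng.standard_normal(len(x))
        confusion[i] += bma_weights(y)
confusion /= n_rep
print(np.round(confusion, 2))
```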

  17. Main steam line break accident simulation of APR1400 using the model of ATLAS facility

    NASA Astrophysics Data System (ADS)

    Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.

    2018-02-01

    A main steam line break simulation for APR1400, an advanced PWR design, has been performed using the RELAP5 code. The simulation was conducted on a model of a thermal-hydraulic test facility called ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report document, whose initial conditions and assumptions for the analysis were utilized in performing the simulation and analysis of the selected parameters. The objective of this work was to conduct a benchmark activity by comparing the simulation results of the CESEC-III code, a conservative-approach code, with the results of RELAP5 as a best-estimate code. Based on the simulation results, a general similarity in the behavior of selected parameters was observed between the two codes. However, the degree of accuracy still needs further research and analysis by comparison with other best-estimate codes. Uncertainties arising from the ATLAS model should be minimized by taking into account much more specific data in developing the APR1400 model.

  18. Flexural torsional buckling of uniformly compressed beam-like structures

    NASA Astrophysics Data System (ADS)

    Ferretti, M.

    2018-02-01

    A Timoshenko beam model embedded in 3D space is introduced for buckling analysis of multi-storey buildings made of rigid floors connected by elastic columns. The beam model is developed via a direct approach, and the constitutive law, accounting for prestress forces, is deduced via a suitable homogenization procedure. The bifurcation analysis for the case of uniformly compressed buildings is then addressed, and numerical results concerning the Timoshenko model are compared with 3D finite element analyses. Finally, some conclusions and perspectives are drawn.

  19. Analysis and modeling of leakage current sensor under pulsating direct current

    NASA Astrophysics Data System (ADS)

    Li, Kui; Dai, Yihua; Wang, Yao; Niu, Feng; Chen, Zhao; Huang, Shaopo

    2017-05-01

    In this paper, the transformation characteristics of a current sensor under pulsating DC leakage current are investigated. A mathematical model of the current sensor is proposed to accurately describe the secondary-side current and excitation current. The transformation process of the current sensor is illustrated in detail and the transformation error is analyzed from multiple aspects. A simulation model is built and a sensor prototype is designed for comparative evaluation, and both simulation and experimental results are presented to verify the correctness of the theoretical analysis.

  20. Measurement error in time-series analysis: a simulation study comparing modelled and monitored data.

    PubMed

    Butland, Barbara K; Armstrong, Ben; Atkinson, Richard W; Wilkinson, Paul; Heal, Mathew R; Doherty, Ruth M; Vieno, Massimo

    2013-11-13

    Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003-2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban log_e(daily 1-hour maximum NO2). When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background log_e(NO2) and 38% for rural log_e(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural log_e(NO2) but more marked for urban log_e(NO2). Even if correlations between model and monitor data appear reasonably strong, additive classical measurement error in model data may lead to appreciable bias in health effect estimates. As process-based air pollution models become more widely used in epidemiological time-series analysis, assessments of error impact that include statistical simulation may be useful.
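
    The attenuation mechanism is easy to reproduce in a simplified simulation: generate daily counts from a "true" exposure series, then regress on an error-prone version of that exposure. The Python sketch below (hypothetical rates and error magnitudes, a single pollutant, no confounders) shows classical additive error biasing the Poisson regression coefficient toward the null.

```python
# Classical measurement error attenuating a Poisson time-series coefficient.
# All rates and error magnitudes are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_days = 3 * 365
beta_true = 0.02                         # log-rate increase per unit pollutant

x_true = 10 + 5 * rng.standard_normal(n_days)             # true daily exposure
y = rng.poisson(np.exp(np.log(20) + beta_true * x_true))  # daily death counts

for err_sd in [0.0, 2.5, 5.0]:           # additive classical measurement error
    x_obs = x_true + err_sd * rng.standard_normal(n_days)
    res = sm.GLM(y, sm.add_constant(x_obs), family=sm.families.Poisson()).fit()
    print(f"error sd={err_sd}: beta_hat={res.params[1]:.4f} "
          f"(attenuation {100 * (1 - res.params[1] / beta_true):.0f}%)")
```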

  1. Comparing the Cognitive Process of Circular Causality in Two Patients with Strokes through Qualitative Analysis.

    PubMed

    Derakhshanrad, Seyed Alireza; Piven, Emily; Ghoochani, Bahareh Zeynalzadeh

    2017-10-01

    Walter J. Freeman pioneered the neurodynamic model of brain activity when he described the brain dynamics for cognitive information transfer as a process of circular causality at intention, meaning, and perception (IMP) levels. This view contributed substantially to the establishment of the Intention, Meaning, and Perception Model of Neuro-occupation in occupational therapy. As described by the model, IMP levels are three components of the brain dynamics system, with nonlinear connections that enable cognitive function to be processed in a circular-causality fashion, known as the Cognitive Process of Circular Causality (CPCC). Although considerable research has been devoted to studying brain dynamics with sophisticated computerized imaging techniques, less attention has been paid to studying it by investigating the adaptation process of thoughts and behaviors. To explore how CPCC manifested in thinking and behavioral patterns, a qualitative case study was conducted on two matched female participants with strokes, who were of comparable ages, affected sides, and other characteristics, except for their resilience and motivational behaviors. CPCC was compared by matrix analysis between the two participants, using content analysis with pre-determined categories. Different patterns of thinking and behavior may have occurred due to disparate regulation of CPCC between the two participants.

  2. Optimizing Biorefinery Design and Operations via Linear Programming Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick

    The ability to assess and optimize economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in comparable ways to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for the theoretical biorefineries such as (1) optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns / turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and they are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools for maximizing the potential benefits of biomass utilization for production of fuels, chemicals and power.
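
    The flavor of challenge (1), the optimal feedstock slate, can be captured in a few lines of linear programming. The sketch below uses scipy's linprog with invented prices, yields and capacities; it is a stand-in for, not a reproduction of, the NREL/INL models.

```python
# Hypothetical feedstock-slate LP: maximize margin subject to plant
# capacity and per-feedstock availability. All numbers are invented.
import numpy as np
from scipy.optimize import linprog

# Decision variables: tons/day of [corn stover, forest residue, switchgrass]
fuel_yield = np.array([70.0, 62.0, 75.0])     # gallons of fuel per ton
fuel_price = 2.5                              # $/gallon
feed_cost = np.array([80.0, 60.0, 95.0])      # $/ton delivered
margin = fuel_yield * fuel_price - feed_cost  # $/ton processed

c = -margin                                   # linprog minimizes
A_ub = [[1.0, 1.0, 1.0]]                      # total throughput limit
b_ub = [2000.0]                               # tons/day plant capacity
bounds = [(0, 1200), (0, 900), (0, 600)]      # availability of each feedstock

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("optimal slate (tons/day):", res.x, "daily margin: $%.0f" % -res.fun)
```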

  3. Comparative transcriptomics of early dipteran development

    PubMed Central

    2013-01-01

    Background Modern sequencing technologies have massively increased the amount of data available for comparative genomics. Whole-transcriptome shotgun sequencing (RNA-seq) provides a powerful basis for comparative studies. In particular, this approach holds great promise for emerging model species in fields such as evolutionary developmental biology (evo-devo). Results We have sequenced early embryonic transcriptomes of two non-drosophilid dipteran species: the moth midge Clogmia albipunctata, and the scuttle fly Megaselia abdita. Our analysis includes a third, published, transcriptome for the hoverfly Episyrphus balteatus. These emerging models for comparative developmental studies close an important phylogenetic gap between Drosophila melanogaster and other insect model systems. In this paper, we provide a comparative analysis of early embryonic transcriptomes across species, and use our data for a phylogenomic re-evaluation of dipteran phylogenetic relationships. Conclusions We show how comparative transcriptomics can be used to create useful resources for evo-devo, and to investigate phylogenetic relationships. Our results demonstrate that de novo assembly of short (Illumina) reads yields high-quality, high-coverage transcriptomic data sets. We use these data to investigate deep dipteran phylogenetic relationships. Our results, based on a concatenation of 160 orthologous genes, provide support for the traditional view of Clogmia being the sister group of Brachycera (Megaselia, Episyrphus, Drosophila), rather than that of Culicomorpha (which includes mosquitoes and blackflies). PMID:23432914

  4. A Comparative Analysis of Models of Bachelors of Arts' Professional Training in Applied Linguistics at the Universities of Ukraine and the USA

    ERIC Educational Resources Information Center

    Korniienko, Vita

    2014-01-01

    An analysis of research by scientists from different countries dealing with different aspects of training in the educational systems of developed countries was carried out. The models of Bachelor of Arts in Applied Linguistics professional training in Ukraine were considered. The professional training of Bachelors of Arts in Applied…

  5. A model study of bridge hydraulics

    DOT National Transportation Integrated Search

    2010-08-01

    Most flood studies in the United States use the Army Corps of Engineers HEC-RAS (Hydrologic Engineering Center's River Analysis System) computer program. This study was carried out to compare results of HEC-RAS bridge modeling with laboratory e...

  6. Statistical tools for transgene copy number estimation based on real-time PCR.

    PubMed

    Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal

    2007-11-01

    As compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, real-time PCR based transgene copy number estimation tends to be ambiguous and subjective, stemming from the lack of proper statistical analysis and data quality control needed to render a reliable estimate of copy number with a prediction value. Despite recent progress in the statistical analysis of real-time PCR, few publications have integrated these advancements into real-time PCR based transgene copy number determination. Three experimental designs and four statistical models with integrated data quality control are presented. In the first method, external calibration curves are established for the transgene based on serially-diluted templates. The Ct numbers from a control transgenic event and a putative transgenic event are compared to derive the transgene copy number or zygosity estimate. Simple linear regression and two-group t-test procedures were combined to model the data from this design. For the second experimental design, standard curves were generated for both an internal reference gene and the transgene, and the copy number of the transgene was compared with that of the internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with the reference gene without a standard curve, based directly on fluorescence data. Two different multiple regression models were proposed to analyze the data, based on two different approaches to amplification efficiency integration. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination. These statistical methods make real-time PCR-based transgene copy number estimation more reliable and precise. Proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four statistical methods are compared for their advantages and disadvantages. Moreover, they can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
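
    The first design reduces to two familiar steps, shown in the hedged Python sketch below with invented Ct values: a linear regression of Ct on log10 template amount for the external calibration curve, and a two-group t-test on Ct values between a control and a putative event (the ratio 2^ΔCt assumes near-100% amplification efficiency).

```python
# Standard-curve regression plus two-group t-test, in the spirit of the
# first design described above. All Ct values are hypothetical.
import numpy as np
from scipy import stats

# Serial dilutions: log10(copies) and measured Ct
log_copies = np.array([3, 4, 5, 6, 7], dtype=float)
ct_std = np.array([29.8, 26.5, 23.1, 19.8, 16.4])

fit = stats.linregress(log_copies, ct_std)
print(f"slope={fit.slope:.2f}, R^2={fit.rvalue**2:.4f}, "
      f"efficiency={10 ** (-1 / fit.slope) - 1:.2%}")

# Ct replicates for a known single-copy control and a putative event
ct_control = np.array([24.1, 24.0, 24.2, 24.1])
ct_event = np.array([23.1, 23.0, 23.2, 23.1])

t, p = stats.ttest_ind(ct_control, ct_event)
delta_ct = ct_control.mean() - ct_event.mean()
copy_ratio = 2.0 ** delta_ct       # assumes ~100% amplification efficiency
print(f"delta Ct={delta_ct:.2f}, estimated copy ratio={copy_ratio:.1f}, p={p:.2e}")
```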

  7. A comparative analysis of errors in long-term econometric forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tepel, R.

    1986-04-01

    The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post, that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables and compare forecasts from different sources. Most descriptive studies use an ex ante approach, that is, they evaluate model outputs based on estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine if systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.

  8. The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.

    PubMed

    Ene, Florentina; Delassus, Patrick; Morris, Liam

    2014-08-01

    The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods and these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysms models. Rigid wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for acquiring the fluid flow phenomenon. © IMechE 2014.

  9. Mathematical and Statistical Techniques for Systems Medicine: The Wnt Signaling Pathway as a Case Study.

    PubMed

    MacLean, Adam L; Harrington, Heather A; Stumpf, Michael P H; Byrne, Helen M

    2016-01-01

    The last decade has seen an explosion in models that describe phenomena in systems medicine. Such models are especially useful for studying signaling pathways, such as the Wnt pathway. In this chapter we use the Wnt pathway to showcase current mathematical and statistical techniques that enable modelers to gain insight into (models of) gene regulation and generate testable predictions. We introduce a range of modeling frameworks, but focus on ordinary differential equation (ODE) models since they remain the most widely used approach in systems biology and medicine and continue to offer great potential. We present methods for the analysis of a single model, comprising applications of standard dynamical systems approaches such as nondimensionalization, steady state, asymptotic and sensitivity analysis, and more recent statistical and algebraic approaches to compare models with data. We present parameter estimation and model comparison techniques, focusing on Bayesian analysis and coplanarity via algebraic geometry. Our intention is that this (non-exhaustive) review may serve as a useful starting point for the analysis of models in systems medicine.
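
    Several of the single-model techniques named above (numerical solution, steady-state analysis, local sensitivity analysis) can be demonstrated on a deliberately simple two-species ODE. The Python sketch below is a generic stand-in in the spirit of such models, not the Wnt pathway model itself; all parameter values are hypothetical.

```python
# A toy production/conversion/degradation ODE system: numerical solution,
# steady state, and finite-difference local sensitivities.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

p = dict(k1=1.0, k2=0.5, k3=0.8, k4=0.3)   # hypothetical rate constants

def rhs(t, y, k1, k2, k3, k4):
    a, b = y                                # e.g. inactive/active protein pools
    return [k1 - k2 * a - k3 * a, k3 * a - k4 * b]

sol = solve_ivp(rhs, (0, 20), [0.0, 0.0], args=tuple(p.values()))
print("state at t=20:", sol.y[:, -1])

# Steady state from the algebraic system f(y*) = 0
ss = fsolve(lambda y: rhs(0, y, *p.values()), [1.0, 1.0])
print("steady state:", ss)

# Local sensitivity of steady-state b* to each parameter
for name in p:
    q = dict(p); q[name] *= 1.01
    ss_q = fsolve(lambda y: rhs(0, y, *q.values()), ss)
    print(f"d ln(b*) / d ln({name}) = {(ss_q[1] - ss[1]) / ss[1] / 0.01:+.2f}")
```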

  10. Molecular design of anticancer drug leads based on three-dimensional quantitative structure-activity relationship.

    PubMed

    Huang, Xiao Yan; Shan, Zhi Jie; Zhai, Hong Lin; Li, Li Na; Zhang, Xiao Yun

    2011-08-22

    Heat shock protein 90 (Hsp90) takes part in the development of several cancers. Novobiocin, a typical C-terminal inhibitor of Hsp90, will probably be used as an important anticancer drug in the future. In this work, we extracted valuable information and designed new novobiocin derivatives based on a three-dimensional quantitative structure-activity relationship (3D QSAR). Comparative molecular field analysis and comparative molecular similarity indices analysis models with high predictive capability were established, and their reliability is supported by the statistical parameters. Based on several important influence factors obtained from these models, six new novobiocin derivatives with higher inhibitory activities were designed and confirmed by molecular simulation with our models, providing potential anticancer drug leads for further research.

  11. Transitions in State Public Health Law: Comparative Analysis of State Public Health Law Reform Following the Turning Point Model State Public Health Act

    PubMed Central

    Meier, Benjamin Mason; Gebbie, Kristine M.

    2009-01-01

    Given the public health importance of law modernization, we undertook a comparative analysis of policy efforts in 4 states (Alaska, South Carolina, Wisconsin, and Nebraska) that have considered public health law reform based on the Turning Point Model State Public Health Act. Through national legislative tracking and state case studies, we investigated how the Turning Point Act's model legal language has been considered for incorporation into state law and analyzed key facilitating and inhibiting factors for public health law reform. Our findings provide the practice community with a research base to facilitate further law reform and inform future scholarship on the role of law as a determinant of the public's health. PMID:19150900

  12. Evaluation of methodology for the analysis of 'time-to-event' data in pharmacogenomic genome-wide association studies.

    PubMed

    Syed, Hamzah; Jorgensen, Andrea L; Morris, Andrew P

    2016-06-01

    To evaluate the power to detect associations between SNPs and time-to-event outcomes across a range of pharmacogenomic study designs while comparing alternative regression approaches. Simulations were conducted to compare Cox proportional hazards modeling accounting for censoring and logistic regression modeling of a dichotomized outcome at the end of the study. The Cox proportional hazards model was demonstrated to be more powerful than the logistic regression analysis. The difference in power between the approaches was highly dependent on the rate of censoring. Initial evaluation of single-nucleotide polymorphism association signals using computationally efficient software with dichotomized outcomes provides an effective screening tool for some design scenarios, and thus has important implications for the development of analytical protocols in pharmacogenomic studies.
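
    The design of such a simulation is straightforward to sketch. The hypothetical Python example below generates exponential event times with a genotype effect, applies administrative censoring, and compares rejection rates of a Cox proportional hazards fit (using statsmodels' PHReg, an assumption about tooling; the study itself does not name this package) against logistic regression on the dichotomized end-of-study outcome. All rates and effect sizes are invented.

```python
# Power comparison: Cox PH on censored times vs. logistic regression on a
# dichotomized outcome. Hypothetical simulation settings throughout.
import numpy as np
import statsmodels.api as sm
from statsmodels.duration.hazard_regression import PHReg

rng = np.random.default_rng(7)
n, n_sim, maf, log_hr, follow_up = 500, 200, 0.3, 0.4, 2.0
hits_cox = hits_logit = 0

for _ in range(n_sim):
    g = rng.binomial(2, maf, n).astype(float)              # SNP genotype 0/1/2
    t = rng.exponential(1.0 / (0.3 * np.exp(log_hr * g)))  # event times
    event = (t <= follow_up).astype(int)                   # 1 = event observed
    t_obs = np.minimum(t, follow_up)                       # censored times

    cox = PHReg(t_obs, g[:, None], status=event).fit()
    hits_cox += abs(cox.params[0] / cox.bse[0]) > 1.96     # Wald test

    logit = sm.Logit(event, sm.add_constant(g)).fit(disp=0)
    hits_logit += abs(logit.params[1] / logit.bse[1]) > 1.96

print(f"power: Cox {hits_cox / n_sim:.2f}, logistic {hits_logit / n_sim:.2f}")
```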

  13. Cost-Effectiveness Analysis of Stereotactic Body Radiation Therapy Compared With Radiofrequency Ablation for Inoperable Colorectal Liver Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hayeon, E-mail: kimh2@upmc.edu; Gill, Beant; Beriwal, Sushil

    Purpose: To conduct a cost-effectiveness analysis to determine whether stereotactic body radiation therapy (SBRT) is a cost-effective therapy compared with radiofrequency ablation (RFA) for patients with unresectable colorectal cancer (CRC) liver metastases. Methods and Materials: A cost-effectiveness analysis was conducted using a Markov model and 1-month cycle over a lifetime horizon. Transition probabilities, quality of life utilities, and costs associated with SBRT and RFA were captured in the model on the basis of a comprehensive literature review and Medicare reimbursements in 2014. Strategies were compared using the incremental cost-effectiveness ratio, with effectiveness measured in quality-adjusted life years (QALYs). To account for model uncertainty, 1-way and probabilistic sensitivity analyses were performed. Strategies were evaluated with a willingness-to-pay threshold of $100,000 per QALY gained. Results: In base case analysis, treatment costs for 3 fractions of SBRT and 1 RFA procedure were $13,000 and $4397, respectively. Median survival was assumed the same for both strategies (25 months). The SBRT costs $8202 more than RFA while gaining 0.05 QALYs, resulting in an incremental cost-effectiveness ratio of $164,660 per QALY gained. In 1-way sensitivity analyses, results were most sensitive to variation of median survival from both treatments. Stereotactic body radiation therapy was economically reasonable if better survival was presumed (>1 month gain) or if used for large tumors (>4 cm). Conclusions: If equal survival is assumed, SBRT is not cost-effective compared with RFA for inoperable colorectal liver metastases. However, if better local control leads to small survival gains with SBRT, this strategy becomes cost-effective.
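
    The mechanics of such a Markov cohort model are compact enough to sketch. The Python example below uses the treatment costs quoted above ($13,000 for SBRT, $4,397 for RFA) but entirely illustrative transition probabilities, state costs and utilities, so the resulting ICER demonstrates the calculation rather than reproducing the paper's result.

```python
# Minimal Markov cohort model: monthly cycles over states
# (stable, progressed, dead), discounted costs and QALYs, then the ICER.
import numpy as np

P = np.array([[0.96, 0.03, 0.01],      # transition matrix per 1-month cycle
              [0.00, 0.93, 0.07],
              [0.00, 0.00, 1.00]])
utility = np.array([0.80, 0.60, 0.00])      # QALY weight per state (annual)
state_cost = np.array([300.0, 800.0, 0.0])  # monthly cost per state

def run(upfront_cost, extra_qaly_per_cycle=0.0, horizon_months=240, disc=0.03):
    dist = np.array([1.0, 0.0, 0.0])        # cohort starts in "stable"
    cost, qaly = upfront_cost, 0.0
    for m in range(horizon_months):
        d = 1.0 / (1.0 + disc) ** (m / 12.0)   # annual discounting
        cost += d * dist @ state_cost
        qaly += d * (dist @ utility / 12.0 + extra_qaly_per_cycle)
        dist = dist @ P
    return cost, qaly

cost_rfa, qaly_rfa = run(upfront_cost=4397.0)
cost_sbrt, qaly_sbrt = run(upfront_cost=13000.0, extra_qaly_per_cycle=0.0002)
icer = (cost_sbrt - cost_rfa) / (qaly_sbrt - qaly_rfa)
print(f"ICER = ${icer:,.0f} per QALY gained")
```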

  14. Methods for Mediation Analysis with Missing Data

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Wang, Lijuan

    2013-01-01

    Despite wide applications of both mediation models and missing data techniques, formal discussion of mediation analysis with missing data is still rare. We introduce and compare four approaches to dealing with missing data in mediation analysis including list wise deletion, pairwise deletion, multiple imputation (MI), and a two-stage maximum…

  15. Differential Item Functioning Analysis Using Rasch Item Information Functions

    ERIC Educational Resources Information Center

    Wyse, Adam E.; Mapuranga, Raymond

    2009-01-01

    Differential item functioning (DIF) analysis is a statistical technique used for ensuring the equity and fairness of educational assessments. This study formulates a new DIF analysis method using the information similarity index (ISI). ISI compares item information functions when data fits the Rasch model. Through simulations and an international…
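
    Under the Rasch model the item information function has a closed form, I(θ) = P(θ)(1 − P(θ)), so comparing information curves between groups is simple to sketch. The Python example below computes an overlap-style similarity index between reference-group and focal-group curves; the exact ISI formula in the study may differ, and the difficulty values are hypothetical.

```python
# Rasch item information for two groups and a simple overlap index:
# 1 = identical curves; DIF pushes the index below 1.
import numpy as np

theta = np.linspace(-4, 4, 801)

def rasch_info(theta, b):
    p = 1.0 / (1.0 + np.exp(-(theta - b)))   # Rasch item characteristic curve
    return p * (1.0 - p)                     # item information

b_ref, b_focal = 0.0, 0.6                    # group-specific item difficulties
i_ref = rasch_info(theta, b_ref)
i_focal = rasch_info(theta, b_focal)

overlap = np.trapz(np.minimum(i_ref, i_focal), theta) / np.trapz(i_ref, theta)
print(f"information overlap index = {overlap:.3f}")
```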

  16. A comparative study of multivariable robustness analysis methods as applied to integrated flight and propulsion control

    NASA Technical Reports Server (NTRS)

    Schierman, John D.; Lovell, T. A.; Schmidt, David K.

    1993-01-01

    Three multivariable robustness analysis methods are compared and contrasted. The focus of the analysis is on system stability and performance robustness to uncertainty in the coupling dynamics between two interacting subsystems. Of particular interest is interacting airframe and engine subsystems, and an example airframe/engine vehicle configuration is utilized in the demonstration of these approaches. The singular value (SV) and structured singular value (SSV) analysis methods are compared to a method especially well suited for analysis of robustness to uncertainties in subsystem interactions. This approach is referred to here as the interacting subsystem (IS) analysis method. This method has been used previously to analyze airframe/engine systems, emphasizing the study of stability robustness. However, performance robustness is also investigated here, and a new measure of allowable uncertainty for acceptable performance robustness is introduced. The IS methodology does not require plant uncertainty models to measure the robustness of the system, and is shown to yield valuable information regarding the effects of subsystem interactions. In contrast, the SV and SSV methods allow for the evaluation of the robustness of the system to particular models of uncertainty, and do not directly indicate how the airframe (engine) subsystem interacts with the engine (airframe) subsystem.

  17. Hospital survey on patient safety culture: psychometric analysis on a Scottish sample.

    PubMed

    Sarac, Cakil; Flin, Rhona; Mearns, Kathryn; Jackson, Jeanette

    2011-10-01

    To investigate the psychometric properties of the Hospital Survey on Patient Safety Culture on a Scottish NHS data set. The data were collected from 1969 clinical staff (estimated 22% response rate) from one acute hospital from each of seven Scottish Health boards. Using a split-half validation technique, the data were randomly split; an exploratory factor analysis was conducted on the calibration data set, and confirmatory factor analyses were conducted on the validation data set to investigate and check the original US model fit in a Scottish sample. Following the split-half validation technique, exploratory factor analysis results showed a 10-factor optimal measurement model. The confirmatory factor analyses were then performed to compare the model fit of two competing models (10-factor alternative model vs 12-factor original model). A Satorra-Bentler (S-B) scaled χ² difference test demonstrated that the original 12-factor model performed significantly better in a Scottish sample. Furthermore, reliability analyses of each component yielded satisfactory results. The mean scores on the climate dimensions in the Scottish sample were comparable with those found in other European countries. This study provided evidence that the original 12-factor structure of the Hospital Survey on Patient Safety Culture scale has been replicated in this Scottish sample. Therefore, no modifications are required to the original 12-factor model, which is suggested for use, since it would allow researchers the possibility of cross-national comparisons.

  18. Salicylic acid deposition from wash-off products: comparison of in vivo and porcine deposition models.

    PubMed

    Davies, M A

    2015-10-01

    Salicylic acid (SA) is a widely used active in anti-acne face wash products. Only about 1-2% of the total dose is actually deposited on skin during washing, and more efficient deposition systems are sought. The objective of this work was to develop an improved method, including data analysis, to measure deposition of SA from wash-off formulae. Full fluorescence excitation-emission matrices (EEMs) were acquired for non-invasive measurement of deposition of SA from wash-off products. Multivariate data analysis methods - parallel factor analysis and N-way partial least-squares regression - were used to develop and compare deposition models on human volunteers and porcine skin. Although both models are useful, there are differences between them. First, the range of linear response to dosages of SA was 60 μg cm(-2) in vivo compared to 25 μg cm(-2) on porcine skin. Second, the actual shape of the SA band was different between substrates. The methods employed in this work highlight the utility of EEMs, in conjunction with multivariate analysis tools such as parallel factor analysis and multiway partial least-squares calibration, in determining sources of spectral variability in skin and in quantifying exogenous species deposited on skin. The human model exhibited the widest range of linearity, but the porcine model is still useful up to deposition levels of 25 μg cm(-2), or with nonlinear calibration models. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  19. Quadrupedal rodent gait compensations in a low dose monoiodoacetate model of osteoarthritis.

    PubMed

    Lakes, Emily H; Allen, Kyle D

    2018-06-01

    Rodent gait analysis provides robust, quantitative results for preclinical musculoskeletal and neurological models. In prior work, surgical models of osteoarthritis have been found to result in a hind limb shuffle-stepping gait compensation, while a high dose monoiodoacetate (MIA, 3 mg) model resulted in a hind limb antalgic gait. However, it is unknown whether the antalgic gait caused by MIA is associated with severity of degeneration from the high dosage or the whole-joint degeneration associated with glycolysis inhibition. This study evaluates rodent gait changes resulting from a low dose, 1 mg unilateral intra-articular injection of MIA compared to saline injected and naïve rats. Spatiotemporal and dynamic gait parameters were collected from a total of 42 male Lewis rats spread across 3 time points: 1, 2, and 4 weeks post-injection. To provide a detailed analysis of this low dose MIA model, gait analysis was used to uniquely quantify both fore and hind limb gait parameters. Our data indicate that 1 mg of MIA caused relatively minor degeneration and a shuffle-step gait compensation, similar to the compensation observed in prior surgical models. These data from a 1 mg MIA model show a different gait compensation compared to a previously studied 3 mg model. This 1 mg MIA model resulted in gait compensations more similar to a previously studied surgical model of osteoarthritis. Additionally, this study provides detailed 4 limb analysis of rodent gait that includes spatiotemporal and dynamic data from the same gait trial. These data highlight the importance of measuring dynamic data in combination with spatiotemporal data, since compensatory gait patterns may not be captured by spatial, temporal, or dynamic characterizations alone. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Impacts of Wake Effect and Time Delay on the Dynamic Analysis of Wind Farms Models

    ERIC Educational Resources Information Center

    El-Fouly, Tarek H. M.; El-Saadany, Ehab F.; Salama, Magdy M. A.

    2008-01-01

    This article investigates the impacts of proper modeling of the wake effects and wind speed delays, between different wind turbines' rows, on the dynamic performance accuracy of the wind farms models. Three different modeling scenarios were compared to highlight the impacts of wake effects and wind speed time-delay models. In the first scenario,…

  1. Graph-Theoretic Properties of Networks Based on Word Association Norms: Implications for Models of Lexical Semantic Memory

    ERIC Educational Resources Information Center

    Gruenenfelder, Thomas M.; Recchia, Gabriel; Rubin, Tim; Jones, Michael N.

    2016-01-01

    We compared the ability of three different contextual models of lexical semantic memory (BEAGLE, Latent Semantic Analysis, and the Topic model) and of a simple associative model (POC) to predict the properties of semantic networks derived from word association norms. None of the semantic models were able to accurately predict all of the network…

  2. Rapid analysis of pharmaceutical drugs using LIBS coupled with multivariate analysis.

    PubMed

    Tiwari, P K; Awasthi, S; Kumar, R; Anand, R K; Rai, P K; Rai, A K

    2018-02-01

    Type 2 diabetes drug tablets of various brands containing voglibose at dose strengths of 0.2 and 0.3 mg have been examined using the laser-induced breakdown spectroscopy (LIBS) technique. Statistical methods, namely principal component analysis (PCA) and partial least squares regression (PLSR), have been employed on the LIBS spectral data for classifying the drug samples and developing calibration models. We have developed a ratio-based calibration model, applying PLSR to the relative spectral intensity ratios H/C, H/N and O/N. The developed model has then been employed to predict the relative concentrations of elements in unknown drug samples. The experiments were performed in air and in an argon atmosphere, and the results obtained have been compared. The present model provides a rapid spectroscopic method for drug analysis with high statistical significance for online control and measurement processes in a wide variety of pharmaceutical industrial applications.
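
    The two chemometric steps named above are routine to prototype. The Python sketch below runs PCA for exploratory class separation and a ratio-based PLS regression on a synthetic stand-in for LIBS spectra; channel positions, intensities and concentrations are all invented.

```python
# PCA for exploration and ratio-based PLSR calibration on mock spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_samples, n_channels = 40, 300
spectra = rng.random((n_samples, n_channels))   # mock LIBS spectra
conc = rng.uniform(0.2, 0.3, n_samples)         # dose strength (mg)
spectra[:, 50] += 5.0 * conc                    # a mock "H line" channel
spectra[:, 120] += 2.0                          # a mock "C line" channel

# PCA on the full spectra for classification/exploration
scores = PCA(n_components=2).fit_transform(spectra)
print("PC1/PC2 score ranges:", scores.min(0), scores.max(0))

# Ratio-based calibration (e.g. an H/C intensity ratio) with PLSR
ratios = spectra[:, [50]] / spectra[:, [120]]   # shape (n_samples, 1)
pls = PLSRegression(n_components=1).fit(ratios, conc)
print(f"R^2 of ratio-based PLSR model: {pls.score(ratios, conc):.3f}")
```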

  3. Comparative analysis of meteorological performance of coupled chemistry-meteorology models in the context of AQMEII phase 2

    EPA Science Inventory

    Air pollution simulations critically depend on the quality of the underlying meteorology. In phase 2 of the Air Quality Model Evaluation International Initiative (AQMEII-2), thirteen modeling groups from Europe and four groups from North America operating eight different regional...

  4. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  5. PREDICTING ER BINDING AFFINITY FOR EDC RANKING AND PRIORITIZATION: A COMPARISON OF THREE MODELS

    EPA Science Inventory

    A comparative analysis of how three COREPA models for ER binding affinity performed when used to predict potential estrogen receptor (ER) ligands is presented. Models I and II were developed based on training sets of 232 and 279 rat ER binding affinity measurements, respectively....

  6. Cost drivers and resource allocation in military health care systems.

    PubMed

    Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R

    2007-03-01

    This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R(2) = 0.98). This model also proved reliable in forecasting (R(2) = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.

  7. The browning value changes and spectral analysis on the Maillard reaction product from glucose and methionine model system

    NASA Astrophysics Data System (ADS)

    Al-Baarri, A. N.; Legowo, A. M.; Widayat

    2018-01-01

    D-glucose is understood to have various effects on the reactivity of the Maillard reaction, resulting in changes in the physical performance of food products. This research was therefore done to analyse the physical appearance of a Maillard reaction product made from D-glucose and methionine as a model system. The changes in browning value and the spectral profile of the model system were determined. The glucose-methionine model system was produced by heating at 50°C and 70% RH for 24 hours. Data were collected every three hours using a spectrophotometer. As a result, the browning value was elevated with increasing heating time and was remarkably high compared to D-glucose alone. Furthermore, the spectral analysis showed that methionine changed the pattern of peak appearance. In conclusion, methionine raised the browning value and changed the spectral pattern of the Maillard reaction model system.

  8. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

    NASA Astrophysics Data System (ADS)

    Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra

    2013-03-01

    In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, model selection is usually based on some measure of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has been widely adopted in chemometrics and econometrics) for RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of MCCV is likely to result in a more parsimonious model than LOO. It has also been found that MCCV can provide a more realistic estimate of a model's predictive ability when compared with LOO.
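
    The contrast between the two validation schemes is easy to reproduce. The Python sketch below scores the same regression under leave-one-out and under Monte Carlo cross validation (repeated random 70/30 splits); ordinary least squares stands in for GLS regression, and the data are synthetic.

```python
# LOO vs. Monte Carlo cross validation on a regional-regression-style
# problem. OLS stands in for GLSR; data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, ShuffleSplit, cross_val_score

rng = np.random.default_rng(5)
n = 60
X = rng.standard_normal((n, 3))    # e.g. log catchment area, slope, rainfall
y = X @ np.array([0.8, 0.3, 0.0]) + 0.5 * rng.standard_normal(n)

model = LinearRegression()

loo = cross_val_score(model, X, y, cv=LeaveOneOut(),
                      scoring="neg_mean_squared_error")
mccv = cross_val_score(model, X, y,
                       cv=ShuffleSplit(n_splits=200, test_size=0.3,
                                       random_state=0),
                       scoring="neg_mean_squared_error")

print(f"LOO  MSE: {-loo.mean():.3f}")
print(f"MCCV MSE: {-mccv.mean():.3f} (test fraction 0.3, 200 repeats)")
```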

  9. Advantages and limitations of classic and 3D QSAR approaches in nano-QSAR studies based on biological activity of fullerene derivatives

    DOE PAGES

    Jagiello, Karolina; Grzonkowska, Monika; Swirog, Marta; ...

    2016-08-29

    In this contribution, the advantages and limitations of two computational techniques that can be used for the investigation of nanoparticle activity and toxicity are briefly summarized: classic nano-QSAR (Quantitative Structure–Activity Relationships employed for nanomaterials) and 3D nano-QSAR (three-dimensional Quantitative Structure–Activity Relationships, such as Comparative Molecular Field Analysis, CoMFA, and Comparative Molecular Similarity Indices Analysis, CoMSIA, employed for nanomaterials). Both approaches were compared according to selected criteria, including: efficiency, type of experimental data, class of nanomaterials, time required for calculations and computational cost, and difficulties in interpretation. Taking into account the advantages and limitations of each method, we provide recommendations for nano-QSAR modellers and QSAR model users to be able to determine a proper and efficient methodology to investigate the biological activity of nanoparticles, in order to describe the underlying interactions in the most reliable and useful manner.

  10. Systematic review, network meta-analysis and economic evaluation of biological therapy for the management of active psoriatic arthritis.

    PubMed

    Cawson, Matthew Richard; Mitchell, Stephen Andrew; Knight, Chris; Wildey, Henry; Spurden, Dean; Bird, Alex; Orme, Michelle Elaine

    2014-01-20

    An updated economic evaluation was conducted to compare the cost-effectiveness of the four tumour necrosis factor (TNF)-α inhibitors adalimumab, etanercept, golimumab and infliximab in active, progressive psoriatic arthritis (PsA) where response to standard treatment has been inadequate. A systematic review was conducted to identify relevant, recently published studies and the new trial data were synthesised, via a Bayesian network meta-analysis (NMA), to estimate the relative efficacy of the TNF-α inhibitors in terms of Psoriatic Arthritis Response Criteria (PsARC) response, Health Assessment Questionnaire (HAQ) scores and Psoriasis Area and Severity Index (PASI). A previously developed economic model was updated with the new meta-analysis results and current cost data. The model was adapted to delineate patients by PASI 50%, 75% and 90% response rates to differentiate between psoriasis outcomes. All four licensed TNF-α inhibitors were significantly more effective than placebo in achieving PsARC response in patients with active PsA. Adalimumab, etanercept and infliximab were significantly more effective than placebo in improving HAQ scores in patients who had achieved a PsARC response and in improving HAQ scores in PsARC non-responders. In an analysis using 1,000 model simulations, on average etanercept was the most cost-effective treatment and, at the National Institute for Health and Care Excellence willingness-to-pay threshold of between £20,000 and £30,000, etanercept is the preferred option. The economic analysis agrees with the conclusions from the previous models, in that biologics are shown to be cost-effective for treating patients with active PsA compared with the conventional management strategy. In particular, etanercept is cost-effective compared with the other biologic treatments.

  11. Development of a Feedstock-to-Product Chain Model for Densified Biomass Pellets

    NASA Astrophysics Data System (ADS)

    McPherrin, Daniel

    The Q’Pellet is a spherical, torrefied biomass pellet currently under development. It aims to improve on the shortcomings of commercially available cylindrical white and torrefied pellets. A spreadsheet-based model was developed to allow for techno-economic analysis and simplified life cycle analysis of Q’Pellets, torrefied pellets and white pellets. A case study was developed to compare white pellet, torrefied pellet and Q’Pellet production based on their internal rates of return and life cycle greenhouse gas emissions. The case study was based on a commercial-scale plant in Williams Lake, BC, with product delivery in Rotterdam, Netherlands. Q’Pellets had the highest modelled internal rate of return, at 12.7%, with white pellets at 11.1% and torrefied pellets at 8.0%. The simplified life cycle analysis showed that Q’Pellets had the lowest life cycle greenhouse gas emissions of the three products, at 6.96 kgCO2eq/GJ, compared to 21.50 kgCO2eq/GJ for white pellets and 10.08 kgCO2eq/GJ for torrefied pellets. At these levels of life cycle greenhouse gas emissions, white pellets are above the maximum life cycle emissions allowed to be considered sustainable under EU regulations. Sensitivity analysis was performed on the model by modifying input variables; it showed that white pellets are more sensitive to uncontrollable market variables, especially pellet sale prices, raw biomass prices and transportation costs. Monte Carlo analysis was also performed, which showed that white pellet production is less predictable and more likely to lead to a negative internal rate of return compared to Q’Pellet production.
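
    The central financial metric here, the internal rate of return, is the discount rate at which the net present value of the project cash flows is zero. The Python sketch below solves for it with a root finder on an invented cash-flow series; the capex and revenue figures are placeholders, not the model's inputs.

```python
# IRR as the root of the NPV function, on a hypothetical cash-flow series
# (capital outlay in year 0, then constant net revenues for 20 years).
import numpy as np
from scipy.optimize import brentq

cashflows = np.array([-50e6] + [7.5e6] * 20)   # $ per year

def npv(rate, cashflows):
    years = np.arange(len(cashflows))
    return float(np.sum(cashflows / (1.0 + rate) ** years))

irr = brentq(npv, 1e-6, 1.0, args=(cashflows,))
print(f"IRR = {irr:.1%}")
```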

  12. Structural Configuration Systems Analysis for Advanced Aircraft Fuselage Concepts

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek; Welstead, Jason R.; Quinlan, Jesse R.; Guynn, Mark D.

    2016-01-01

    Structural configuration analysis of an advanced aircraft fuselage concept is investigated. This concept is characterized by a double-bubble section fuselage with rear mounted engines. Based on lessons learned from structural systems analysis of unconventional aircraft, high-fidelity finite-element models (FEM) are developed for evaluating the structural performance of three double-bubble section configurations. Structural sizing and stress analysis are applied for design improvement and weight reduction. Among the three double-bubble configurations, the double-D cross-section fuselage design was found to have a relatively lower structural weight. The structural FEM weights of these three double-bubble fuselage section concepts are also compared with several cylindrical fuselage models. Since these fuselage concepts differ in size, shape and material, the fuselage structural FEM weights are normalized by the corresponding passenger floor area for a relative comparison. This structural systems analysis indicates that an advanced composite double-D section fuselage may have a relative structural weight ratio advantage over a conventional aluminum fuselage. Ten commercial and conceptual aircraft fuselage structural weight estimates, which are empirically derived from the corresponding maximum takeoff gross weight, are also presented and compared with the FEM-based estimates for possible correlation. A conceptual full-vehicle FEM model with a double-D fuselage is also developed for preliminary structural analysis and weight estimation.

  14. Emergent structures and understanding from a comparative uncertainty analysis of the FUSE rainfall-runoff modelling platform for >1,100 catchments

    NASA Astrophysics Data System (ADS)

    Freer, J. E.; Odoni, N. A.; Coxon, G.; Bloomfield, J.; Clark, M. P.; Greene, S.; Johnes, P.; Macleod, C.; Reaney, S. M.

    2013-12-01

    If we are to learn about catchments and their hydrological function, a range of analysis techniques can be proposed, from analysing observations to building complex physically based models using detailed attributes of catchment characteristics. Decisions regarding which technique is fit for a specific purpose will depend on the data available, computing resources, and the underlying reasons for the study. Here we explore defining catchment function in a relatively general sense expressed via a comparison of multiple model structures within an uncertainty analysis framework. We use the FUSE (Framework for Understanding Structural Errors - Clark et al., 2008) rainfall-runoff modelling platform and the GLUE (Generalised Likelihood Uncertainty Estimation - Beven and Freer, 2001) uncertainty analysis framework. Using these techniques we assess two main outcomes: 1) benchmarking our predictive capability using discharge performance metrics for a diverse range of catchments across the UK; and 2) evaluating emergent behaviour for each catchment and/or region, expressed as 'best performing' model structures that may be equally plausible representations of catchment behaviour. We shall show how such comparative hydrological modelling studies reveal patterns of emergent behaviour linked both to seasonal responses and to different geoclimatic regions. These results have implications for the hydrological community regarding how models can help us learn about places as hypothesis testing tools. Furthermore we explore what the limits are to such an analysis when dealing with differing data quality and information content, from 'pristine' to less well characterised and highly modified catchment domains. This research has been piloted in the UK as part of the Environmental Virtual Observatory programme (EVOp), funded by NERC to demonstrate the use of cyber-infrastructure and cloud computing resources to develop better methods of linking data and models and to support scenario analysis for research, policy and operational needs.
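
    The GLUE step can be sketched compactly in Python; the discharge series below is synthetic, and the Nash-Sutcliffe likelihood measure with a 0.5 behavioural threshold is an illustrative choice rather than the study's configuration.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic stand-ins: an observed discharge series plus an ensemble of
      # model runs (in FUSE these would be different structures/parameter sets)
      t = np.arange(200)
      obs = 5 + 3 * np.sin(t / 10.0) + rng.normal(0, 0.3, t.size)
      sims = np.array([5 + a * np.sin(t / 10.0) + rng.normal(0, 0.5, t.size)
                       for a in rng.uniform(1.5, 4.5, 500)])

      def nse(obs, sim):
          # Nash-Sutcliffe efficiency: 1 is perfect, <= 0 no better than the mean
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      scores = np.array([nse(obs, s) for s in sims])
      keep = scores > 0.5                    # GLUE behavioural threshold (user choice)
      B, w = sims[keep], scores[keep] - 0.5  # informal likelihood weights
      w /= w.sum()

      def wquantile(v, q, w):
          # Weighted quantile of values v under weights w
          i = np.argsort(v)
          return np.interp(q, np.cumsum(w[i]), v[i])

      lo = np.array([wquantile(B[:, j], 0.05, w) for j in range(t.size)])
      hi = np.array([wquantile(B[:, j], 0.95, w) for j in range(t.size)])
      print("mean width of the 90% GLUE uncertainty band:", (hi - lo).mean())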

  15. Guaranteeing robustness of structural condition monitoring to environmental variability

    NASA Astrophysics Data System (ADS)

    Van Buren, Kendra; Reilly, Jack; Neal, Kyle; Edwards, Harry; Hemez, François

    2017-01-01

    Advances in sensor deployment and computational modeling have allowed significant strides to be recently made in the field of Structural Health Monitoring (SHM). One widely used SHM strategy is to perform a vibration analysis where a model of the structure's pristine (undamaged) condition is compared with vibration response data collected from the physical structure. Discrepancies between model predictions and monitoring data can be interpreted as structural damage. Unfortunately, multiple sources of uncertainty must also be considered in the analysis, including environmental variability, unknown model functional forms, and unknown values of model parameters. Not accounting for these sources of uncertainty can lead to false-positives or false-negatives in the structural condition assessment. To manage the uncertainty, we propose a robust SHM methodology that combines three technologies. A time series algorithm is trained using "baseline" data to predict the vibration response, compare predictions to actual measurements collected on a potentially damaged structure, and calculate a user-defined damage indicator. The second technology handles the uncertainty present in the problem. An analysis of robustness is performed to propagate this uncertainty through the time series algorithm and obtain the corresponding bounds of variation of the damage indicator. The uncertainty description and robustness analysis are both inspired by the theory of info-gap decision-making. Lastly, an appropriate "size" of the uncertainty space is determined through physical experiments performed in laboratory conditions. Our hypothesis is that examining how the uncertainty space changes throughout time might lead to superior diagnostics of structural damage as compared to only monitoring the damage indicator. This methodology is applied to a portal frame structure to assess if the strategy holds promise for robust SHM. (Publication approved for unlimited, public release on October-28-2015, LA-UR-15-28442, unclassified.)
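
    The first of the three technologies, a baseline-trained time-series model yielding a damage indicator, might be sketched as follows; the autoregressive form, model order, and signals are illustrative stand-ins rather than the authors' algorithm.

      import numpy as np

      rng = np.random.default_rng(1)

      def ar_fit(x, p):
          # Least-squares AR(p) coefficients: x[t] ~ sum_k a_k * x[t-k]
          X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
          a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
          return a

      def resid_std(x, a):
          # One-step-ahead prediction error of a fitted AR model on series x
          p = len(a)
          X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
          return (x[p:] - X @ a).std()

      def damage_indicator(baseline, test, p=8):
          # Ratio of prediction error on new data to error on baseline data;
          # values well above 1 flag a change in the structure's dynamics
          a = ar_fit(baseline, p)
          return resid_std(test, a) / resid_std(baseline, a)

      # Synthetic vibration records; "damage" is a small frequency shift
      t = np.arange(0, 10, 0.01)

      def record(freq):
          return np.sin(2 * np.pi * freq * t) + 0.1 * rng.normal(size=t.size)

      baseline, healthy, damaged = record(5.0), record(5.0), record(4.7)
      print("healthy DI:", damage_indicator(baseline, healthy))
      print("damaged DI:", damage_indicator(baseline, damaged))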

  16. Scapular notching in reverse shoulder arthroplasty: validation of a computer impingement model.

    PubMed

    Roche, Christopher P; Marczuk, Yann; Wright, Thomas W; Flurin, Pierre-Henri; Grey, Sean G; Jones, Richard B; Routman, Howard D; Gilot, Gregory J; Zuckerman, Joseph D

    2013-01-01

    The purpose of this study is to validate a reverse shoulder computer impingement model and quantify the impact of implant position on scapular impingement by comparing it with a radiographic analysis of 256 patients who received the same prosthesis and were followed postoperatively for an average of 22.2 months. A geometric computer analysis quantified anterior and posterior scapular impingement as the humerus was internally and externally rotated at varying levels of abduction and adduction relative to a fixed scapula at defined glenoid implant positions. These impingement results were compared to the radiographic study of the 256 patients, who were analyzed for notching, glenoid baseplate position, and glenosphere overhang. The computer model predicted no impingement at 0° humeral abduction in the scapular plane for the 38 mm, 42 mm, and 46 mm devices when the glenoid baseplate cage peg is positioned 18.6 mm, 20.4 mm, and 22.7 mm from the inferior glenoid rim (of the reamed glenoid) or when glenosphere overhang of 4.6 mm, 4.7 mm, and 4.5 mm was obtained with each size glenosphere, respectively. When compared to the radiographic analysis, the computer model correctly predicted impingement based upon glenoid baseplate position in 18 of 26 patients with scapular notching and based upon glenosphere overhang in 15 of 26 patients with scapular notching. Reverse shoulder implant positioning plays an important role in scapular notching. The results of this study demonstrate that the computer impingement model can effectively predict impingement based upon implant positioning in a majority of patients who developed scapular notching clinically. This computer analysis provides guidance to surgeons on implant positions that reduce scapular notching, a well-documented complication of reverse shoulder arthroplasty.

  17. Structural modelling and comparative analysis of homologous, analogous and specific proteins from Trypanosoma cruzi versus Homo sapiens: putative drug targets for Chagas' disease treatment

    PubMed Central

    2010-01-01

    Background Trypanosoma cruzi is the etiological agent of Chagas' disease, an endemic infection that causes thousands of deaths every year in Latin America. Therapeutic options remain inefficient, demanding the search for new drugs and/or new molecular targets. Such efforts can focus on proteins that are specific to the parasite, but analogous enzymes and enzymes with a three-dimensional (3D) structure sufficiently different from the corresponding host proteins may represent equally interesting targets. In order to find these targets we used the workflows MHOLline and AnEnΠ obtaining 3D models from homologous, analogous and specific proteins of Trypanosoma cruzi versus Homo sapiens. Results We applied genome wide comparative modelling techniques to obtain 3D models for 3,286 predicted proteins of T. cruzi. In combination with comparative genome analysis to Homo sapiens, we were able to identify a subset of 397 enzyme sequences, of which 356 are homologous, 3 analogous and 38 specific to the parasite. Conclusions In this work, we present a set of 397 enzyme models of T. cruzi that can constitute potential structure-based drug targets to be investigated for the development of new strategies to fight Chagas' disease. The strategies presented here support the concept of structural analysis in conjunction with protein functional analysis as an interesting computational methodology to detect potential targets for structure-based rational drug design. For example, 2,4-dienoyl-CoA reductase (EC 1.3.1.34) and triacylglycerol lipase (EC 3.1.1.3), classified as analogous proteins in relation to H. sapiens enzymes, were identified as new potential molecular targets. PMID:21034488

  18. Structural modelling and comparative analysis of homologous, analogous and specific proteins from Trypanosoma cruzi versus Homo sapiens: putative drug targets for Chagas' disease treatment.

    PubMed

    Capriles, Priscila V S Z; Guimarães, Ana C R; Otto, Thomas D; Miranda, Antonio B; Dardenne, Laurent E; Degrave, Wim M

    2010-10-29

    Trypanosoma cruzi is the etiological agent of Chagas' disease, an endemic infection that causes thousands of deaths every year in Latin America. Therapeutic options remain inefficient, demanding the search for new drugs and/or new molecular targets. Such efforts can focus on proteins that are specific to the parasite, but analogous enzymes and enzymes with a three-dimensional (3D) structure sufficiently different from the corresponding host proteins may represent equally interesting targets. In order to find these targets we used the workflows MHOLline and AnEnΠ obtaining 3D models from homologous, analogous and specific proteins of Trypanosoma cruzi versus Homo sapiens. We applied genome wide comparative modelling techniques to obtain 3D models for 3,286 predicted proteins of T. cruzi. In combination with comparative genome analysis to Homo sapiens, we were able to identify a subset of 397 enzyme sequences, of which 356 are homologous, 3 analogous and 38 specific to the parasite. In this work, we present a set of 397 enzyme models of T. cruzi that can constitute potential structure-based drug targets to be investigated for the development of new strategies to fight Chagas' disease. The strategies presented here support the concept of structural analysis in conjunction with protein functional analysis as an interesting computational methodology to detect potential targets for structure-based rational drug design. For example, 2,4-dienoyl-CoA reductase (EC 1.3.1.34) and triacylglycerol lipase (EC 3.1.1.3), classified as analogous proteins in relation to H. sapiens enzymes, were identified as new potential molecular targets.

  19. Verification of Orthogrid Finite Element Modeling Techniques

    NASA Technical Reports Server (NTRS)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, shell, and mixed beam and shell element model. Results show that the shell element model performs the best, but that the simpler beam and beam and shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  20. Comparative promoter analysis allows de novo identification of specialized cell junction-associated proteins.

    PubMed

    Cohen, Clemens D; Klingenhoff, Andreas; Boucherot, Anissa; Nitsche, Almut; Henger, Anna; Brunner, Bodo; Schmid, Holger; Merkle, Monika; Saleem, Moin A; Koller, Klaus-Peter; Werner, Thomas; Gröne, Hermann-Josef; Nelson, Peter J; Kretzler, Matthias

    2006-04-11

    Shared transcription factor binding sites that are conserved in distance and orientation help control the expression of gene products that act together in the same biological context. New bioinformatics approaches allow the rapid characterization of shared promoter structures and can be used to find novel interacting molecules. Here, these principles are demonstrated by using molecules linked to the unique functional unit of the glomerular slit diaphragm. An evolutionarily conserved promoter model was generated by comparative genomics in the proximal promoter regions of the slit diaphragm-associated molecule nephrin. Phylogenetic promoter fingerprints of known elements of the slit diaphragm complex identified the nephrin model in the promoter region of zonula occludens-1 (ZO-1). Genome-wide scans using this promoter model effectively predicted a previously unrecognized slit diaphragm molecule, cadherin-5. Nephrin, ZO-1, and cadherin-5 mRNA showed stringent coexpression across a diverse set of human glomerular diseases. Comparative promoter analysis can identify regulatory pathways at work in tissue homeostasis and disease processes.

  1. Comparative study of popular objective functions for damping power system oscillations in multimachine system.

    PubMed

    Islam, Naz Niamul; Hannan, M A; Shareef, Hussain; Mohamed, Azah; Salam, M A

    2014-01-01

    Power oscillation damping controllers are designed using linearized models with heuristic optimization techniques. Selection of the objective function is crucial for damping controller design by optimization algorithms. In this research, a comparative analysis has been carried out to evaluate the effectiveness of popular objective functions used in power system oscillation damping. A two-stage lead-lag damping controller, realized by means of power system stabilizers, is optimized using the differential search algorithm for different objective functions. Linearized model simulations are performed to compare the performance of the dominant modes, and nonlinear model simulations are then used to evaluate the damping performance over power system oscillations. All simulations are conducted on a two-area four-machine power system to provide a detailed analysis. The results show that the multiobjective D-shaped function is an effective objective function in terms of moving unstable and lightly damped electromechanical modes into the stable region. Thus, the D-shaped function ultimately improves overall system damping and concurrently enhances power system reliability.
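
    To make the D-shaped criterion concrete, the sketch below penalizes closed-loop modes falling outside a D-shaped region of the complex plane (real part right of sigma0, or damping ratio below zeta0); the weighting, thresholds, and eigenvalues are illustrative, as exact formulations vary between authors.

      import numpy as np

      def d_shape_objective(eigs, sigma0=-1.0, zeta0=0.10):
          # Penalize modes with real part > sigma0 or damping ratio < zeta0;
          # minimizing this pushes modes into the D-shaped stable region
          sigma = eigs.real
          zeta = -sigma / np.abs(eigs)
          j_sigma = np.sum((sigma0 - sigma[sigma > sigma0]) ** 2)
          j_zeta = np.sum((zeta0 - zeta[zeta < zeta0]) ** 2)
          return j_sigma + j_zeta

      # Electromechanical modes of a linearized two-area system (made-up values)
      eigs = np.array([-0.2 + 6.8j, -1.5 + 7.2j, -0.05 + 3.1j])
      print(d_shape_objective(eigs))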

  2. A WRF-Chem Analysis of Flash Rates, Lightning-NOx Production and Subsequent Trace Gas Chemistry of the 29-30 May 2012 Convective Event in Oklahoma During DC3

    NASA Technical Reports Server (NTRS)

    Cummings, Kristin A.; Pickering, Kenneth; Barth, Mary; Weinheimer, A.; Bela, M.; Li, Y.; Allen, D.; Bruning, E.; MacGorman, D.; Rutledge, S.

    2015-01-01

    The Deep Convective Clouds and Chemistry (DC3) field campaign in 2012 provided a plethora of aircraft and ground-based observations (e.g., trace gases, lightning and radar) to study deep convective storms, their convective transport of trace gases, and associated lightning occurrence and production of nitrogen oxides (NOx). This is a continuation of previous work, which compared lightning observations (Oklahoma Lightning Mapping Array and National Lightning Detection Network) with flashes generated by various flash rate parameterization schemes (FRPSs) from the literature in a Weather Research and Forecasting Chemistry (WRF-Chem) model simulation of the 29-30 May 2012 Oklahoma thunderstorm. Based on the Oklahoma radar observations and Lightning Mapping Array data, new FRPSs are being generated and incorporated into the model. The focus of this analysis is on estimating the amount of lightning-generated nitrogen oxides (LNOx) produced per flash in this storm through a series of model simulations using different production per flash assumptions and comparisons with DC3 aircraft anvil observations. The result of this analysis will be compared with previously studied mid-latitude storms. Additional model simulations are conducted to investigate the upper troposphere transport, distribution, and chemistry of the LNOx plume during the 24 hours following the convective event to investigate ozone production. These model-simulated mixing ratios are compared against the aircraft observations made on 30 May over the southern Appalachians.
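
    For context, a flash rate parameterization scheme maps a simulated storm property to a flash rate; the two classic Price and Rind (1992) forms are sketched below with coefficients as commonly quoted in the literature (WRF-Chem offers several such schemes, and the study above derives new ones from the Oklahoma observations).

      def flash_rate_cloud_top(h_km, continental=True):
          # Price & Rind (1992): flashes per minute from cloud-top height (km)
          return 3.44e-5 * h_km ** 4.9 if continental else 6.4e-4 * h_km ** 1.73

      def flash_rate_updraft(w_max):
          # Companion scheme based on maximum updraft speed (m/s)
          return 5e-6 * w_max ** 4.54

      print(flash_rate_cloud_top(12.0))  # roughly 7 flashes/min for a 12 km top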

  3. Binding site exploration of CCR5 using in silico methodologies: a 3D-QSAR approach.

    PubMed

    Gadhe, Changdev G; Kothandan, Gugan; Cho, Seung Joo

    2013-01-01

    Chemokine receptor 5 (CCR5) is an important receptor used by human immunodeficiency virus type 1 (HIV-1) to gain viral entry into the host cell. In this study, we used a combined approach of comparative modeling, molecular docking, and three-dimensional quantitative structure activity relationship (3D-QSAR) analyses to elucidate the detailed interaction of CCR5 with its inhibitors. A docking study of the most potent inhibitor from a series of compounds was performed to derive the bioactive conformation. Parameters such as random selection, rational selection, different charges and grid spacing were utilized in model development to check their effect on model predictivity. The final comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) models were chosen based on the rational selection method, Gasteiger-Hückel charges and a grid spacing of 0.5 Å. Rational-selection models for CoMFA (q(2) = 0.722, r(2) = 0.884, Q(2) = 0.669) and CoMSIA (q(2) = 0.712, r(2) = 0.825, Q(2) = 0.522) were obtained with good statistics. Mapping the contour maps onto the CCR5 interface led to a better understanding of the ligand-protein interaction. Docking analysis revealed that Glu283 is crucial for interaction. Two new amino acid residues, Tyr89 and Thr167, were identified as important in the ligand-protein interaction. No site-directed mutagenesis studies on these residues have been reported.

  4. Are animal models predictive for human postmortem muscle protein degradation?

    PubMed

    Ehrenfellner, Bianca; Zissler, Angela; Steinbacher, Peter; Monticelli, Fabio C; Pittner, Stefan

    2017-11-01

    Precise determination of the postmortem interval (PMI) is a crucial aspect of forensic casework. Although diverse approaches are available to date, the high heterogeneity of cases together with the respective postmortem changes often limits the validity and sufficiency of many methods. Recently, a novel approach for time since death estimation by the analysis of postmortem changes of muscle proteins was proposed. It is, however, necessary to improve its reliability and accuracy, especially by analyzing possible factors influencing protein degradation. This is ideally investigated on standardized animal models that, however, require legitimization by a comparison of human and animal tissue, and, in this specific case, of protein degradation profiles. Only if protein degradation events occur in a comparable fashion in different species can the respective findings be transferred from the animal model to application in humans. Therefore, samples from two frequently used animal models (mouse and pig), as well as from forensic cases with representative protein profiles and highly differing PMIs, were analyzed. Despite physical and physiological differences between species, western blot analysis revealed similar patterns in most of the investigated proteins. Most degradation events even occurred in a comparable fashion. In some other aspects, however, human and animal profiles depicted distinct differences. The results of this experimental series clearly indicate the huge importance of comparative studies whenever animal models are considered. Although animal models could be shown to reflect the basic principles of protein degradation processes in humans, we also gained insight into the difficulties and limitations of the applicability of the developed methodology in different mammalian species regarding protein specificity and methodological functionality.

  5. International Space Station Model Correlation Analysis

    NASA Technical Reports Server (NTRS)

    Laible, Michael R.; Fitzpatrick, Kristin; Hodge, Jennifer; Grygier, Michael

    2018-01-01

    This paper summarizes the on-orbit structural dynamic data and the related modal analysis, model validation and correlation performed for the International Space Station (ISS) configuration ISS Stage ULF7, 2015 Dedicated Thruster Firing (DTF). The objective of this analysis is to validate and correlate the analytical models used to calculate the ISS internal dynamic loads and compare the 2015 DTF with previous tests. During the ISS configurations under consideration, on-orbit dynamic measurements were collected using the three main ISS instrumentation systems: the Internal Wireless Instrumentation System (IWIS), the External Wireless Instrumentation System (EWIS) and the Structural Dynamic Measurement System (SDMS). The measurements were recorded during several nominal on-orbit DTF tests on August 18, 2015. Experimental modal analyses were performed on the measured data to extract modal parameters including frequency, damping, and mode shape information. Correlation and comparisons between test and analytical frequencies and mode shapes were performed to assess the accuracy of the analytical models for the configurations under consideration. These mode shapes were also compared to earlier tests. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. In particular, results of the first fundamental mode are discussed, nonlinear results are shown, and accelerometer placement is assessed.
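
    The mode-shape correlation underlying such comparisons is typically the Modal Assurance Criterion (MAC); a minimal real-valued implementation on synthetic shapes (not ISS data) follows.

      import numpy as np

      def mac(phi_a, phi_b):
          # MAC matrix between two real mode-shape sets whose columns are
          # shapes at matched DOFs; values near 1 indicate correlated modes
          num = (phi_a.T @ phi_b) ** 2
          den = np.outer(np.sum(phi_a**2, axis=0), np.sum(phi_b**2, axis=0))
          return num / den

      rng = np.random.default_rng(3)
      phi_fem = np.linalg.qr(rng.normal(size=(12, 3)))[0]       # 12 DOFs, 3 modes
      phi_test = phi_fem + 0.05 * rng.normal(size=phi_fem.shape)
      print(np.round(mac(phi_fem, phi_test), 3))  # near-identity when correlated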

  6. Regression Model Optimization for the Analysis of Experimental Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2009-01-01

    A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
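
    The PRESS-based search metric is easy to reproduce with the leave-one-out shortcut; the candidate models and data below are synthetic, not balance-calibration data.

      import numpy as np

      def press_residuals(X, y):
          # Leave-one-out (PRESS) residuals via the hat-matrix shortcut:
          # e_press_i = e_i / (1 - h_ii), with H = X (X'X)^+ X'
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          e = y - X @ beta
          h = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)
          return e / (1.0 - h)

      rng = np.random.default_rng(5)
      x = rng.uniform(-1, 1, 40)
      y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.05, x.size)
      cols = {"1": np.ones_like(x), "x": x, "x^2": x**2}
      for terms in (("1", "x"), ("1", "x", "x^2")):
          X = np.column_stack([cols[t] for t in terms])
          # The recommended model minimizes the PRESS residual standard deviation
          print(terms, "PRESS std:", press_residuals(X, y).std())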

  7. Statistical Evaluation of Time Series Analysis Techniques

    NASA Technical Reports Server (NTRS)

    Benignus, V. A.

    1973-01-01

    The performance of a modified version of NASA's multivariate spectrum analysis program is discussed. A multiple regression model was used to make the revisions. Performance improvements were documented and compared to the standard fast Fourier transform by Monte Carlo techniques.

  8. Proteomic Analysis of Acetaminophen-Induced Changes in Mitochondrial Protein Expression Using Spectral Counting

    PubMed Central

    Stamper, Brendan D.; Mohar, Isaac; Kavanagh, Terrance J.; Nelson, Sidney D.

    2011-01-01

    Comparative proteomic analysis following treatment with acetaminophen (APAP) was performed on two different models of APAP-mediated hepatocellular injury in order to both identify common targets for adduct formation and track drug-induced changes in protein expression. Male C57BL/6 mice were used as a model for APAP-mediated liver injury in vivo and TAMH cells were used as a model for APAP-mediated cytotoxicity in vitro. SEQUEST was unable to identify the precise location of sites of adduction following treatment with APAP in either system. However, semiquantitative analysis of the proteomic datasets using spectral counting revealed a downregulation of P450 isoforms associated with APAP bioactivation, and an upregulation of proteins related to the electron transport chain by APAP compared to control. Both mechanisms are likely compensatory in nature as decreased P450 expression is likely to attenuate toxicity associated with N-acetyl-p-quinoneimine (NAPQI) formation, whereas APAP-induced electron transport chain component upregulation may be an attempt to promote cellular bioenergetics. PMID:21329376

  9. Nonlinear solid finite element analysis of mitral valves with heterogeneous leaflet layers

    NASA Astrophysics Data System (ADS)

    Prot, V.; Skallerud, B.

    2009-02-01

    An incompressible transversely isotropic hyperelastic material model for solid finite element analysis of porcine mitral valve response is described. The material model implementation is checked in single element tests and compared with a membrane implementation in an out-of-plane loading test to study how the layered structures modify the stress response for a simple geometry. Three different collagen layer arrangements are used in finite element analysis of the mitral valve. When the leaflets are arranged in two layers with the collagen on the ventricular side, the stress in the fibre direction through the thickness in the central part of the anterior leaflet is homogenized and the peak stress is reduced. A simulation using membrane elements is also carried out for comparison with the solid finite element results. Compared to echocardiographic measurements, the finite element models bulge too much into the left atrium. This may be due to evidence of active muscle fibres in some parts of the anterior leaflet, whereas our constitutive modelling is based on passive material.
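
    A representative strain-energy function for such incompressible transversely isotropic (fibre-reinforced) tissue models, shown here only as a typical example and not necessarily the authors' exact law, is

      W = \frac{c}{2}\,(I_1 - 3) + \frac{k_1}{2 k_2}\Bigl(e^{k_2 (I_4 - 1)^2} - 1\Bigr),
      \qquad I_4 = \mathbf{a}_0 \cdot \mathbf{C}\,\mathbf{a}_0,

    where I_1 is the first invariant of the right Cauchy-Green tensor C, a_0 is the fibre (collagen) direction in the reference configuration, and incompressibility (J = 1) is enforced by the element formulation.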

  10. Numerical prediction of Pelton turbine efficiency

    NASA Astrophysics Data System (ADS)

    Jošt, D.; Mežnar, P.; Lipej, A.

    2010-08-01

    This paper presents a numerical analysis of flow in a 2-jet Pelton turbine with horizontal axis. The analysis was done for the model at several operating points in different operating regimes. The results were compared to the results of a test of the model. The analysis was performed using the ANSYS CFX-12.1 computer code. A k-ω SST turbulence model was used. Free surface flow was modelled by a two-phase homogeneous model. At first, a steady state analysis of flow in the distributor with two injectors was performed for several needle strokes. This provided data on flow energy losses in the distributor and the shape and velocity of the jets. The second step was an unsteady analysis of the runner with jets. Torque on the shaft was then calculated from pressure distribution data. Averaged torque values are smaller than measured ones. Consequently, the calculated turbine efficiency is also smaller than the measured values; the difference is about 4%. The shape of the efficiency diagram conforms well to the measurements.
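
    The efficiency comparison rests on the usual hydraulic definition, shaft power over available hydraulic power; the sketch below uses invented model-scale numbers, not the tested turbine's data.

      def pelton_efficiency(torque, omega, rho, g, Q, H):
          # Shaft power T*omega divided by hydraulic power rho*g*Q*H
          return torque * omega / (rho * g * Q * H)

      # Illustrative model-scale values only
      print(pelton_efficiency(torque=850.0, omega=78.5, rho=998.0,
                              g=9.81, Q=0.12, H=60.0))   # about 0.95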

  11. [Public health conceptual models and paradigms].

    PubMed

    Hernández-Girón, Carlos; Orozco-Núñez, Emanuel; Arredondo-López, Armando

    2012-01-01

    The epidemiological transition model proposed by Omran at the beginning of the 1970s (decreased fecundity rate and increased life expectancy), together with modifications in lifestyles and diet, showed increased mortality due to chronic degenerative causes. This essay thus discusses and makes a comparative analysis of some currents of thought, taking as its common thread an analysis of epidemiological change identified in different eras or stages and its relationships with some public health models or conceptual frameworks. Discussing public health paradigms leads to a historical recapitulation of conceptual models ranging from magical-religious conceptions to ecological and socio-medical models. M. Susser proposed 3 eras in this discipline's evolution in his address on the future of epidemiology. The epidemiological changes analysed through different approaches constitute elements of analysis that all the models discussed in this essay include in order to delimit their contributions and determining variables.

  12. A comparative study of the characterization of miR-155 in knockout mice

    PubMed Central

    Zhang, Dong; Cui, Yongchun; Li, Bin; Luo, Xiaokang; Li, Bo; Tang, Yue

    2017-01-01

    miR-155 is one of the most important miRNAs and plays a very important role in numerous biological processes. However, few studies have characterized this miRNA in mice under normal physiological conditions. We aimed to characterize miR-155 in vivo by using a comparative analysis. In our study, we compared miR-155 knockout (KO) mice with C57BL/6 wild type (WT) mice in order to characterize miR-155 in mice under normal physiological conditions using many evaluation methods, including a reproductive performance analysis, growth curve, ultrasonic estimation, haematological examination, and histopathological analysis. These analyses showed no significant differences between groups in the main evaluation indices. The growth and development were nearly normal for all mice and did not differ between the control and model groups. Using a comparative analysis and a summary of related studies published in recent years, we found that miR-155 was not essential for normal physiological processes in 8-week-old mice. miR-155 deficiency did not affect the development and growth of naturally ageing mice during the 42 days after birth. Thus, studying the complex biological functions of miR-155 requires the further use of KO mouse models. PMID:28278287

  13. A stock market forecasting model combining two-directional two-dimensional principal component analysis and radial basis function neural network.

    PubMed

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fit. The proposed model is then compared with models that use the traditional dimension reduction methods of principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.
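
    The pipeline is straightforward to sketch end to end; all data below are synthetic, the retained dimensions are arbitrary, and the RBF network is a minimal least-squares readout rather than the paper's trained network.

      import numpy as np

      rng = np.random.default_rng(7)

      # Sliding windows of m days x n technical indicators, plus targets
      m, n, N = 20, 36, 300
      windows = rng.normal(size=(N, m, n))
      targets = rng.normal(size=N)          # next-day return (stand-in)

      # (2D)2PCA: project each window A as Z' A X, where X and Z hold leading
      # eigenvectors of the column- and row-direction scatter matrices
      A_bar = windows.mean(axis=0)
      G_col = sum((A - A_bar).T @ (A - A_bar) for A in windows) / N   # n x n
      G_row = sum((A - A_bar) @ (A - A_bar).T for A in windows) / N   # m x m
      X = np.linalg.eigh(G_col)[1][:, -4:]  # keep 4 column directions
      Z = np.linalg.eigh(G_row)[1][:, -4:]  # keep 4 row directions
      feats = np.array([(Z.T @ A @ X).ravel() for A in windows])  # 16-D features

      # Minimal RBF network: Gaussian features around k centres, linear readout
      k = 30
      centres = feats[rng.choice(N, k, replace=False)]
      d = np.linalg.norm(feats[:, None] - centres[None], axis=2)
      Phi = np.exp(-d**2 / (2 * np.median(d) ** 2))
      wts, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
      print("in-sample RMSE:", np.sqrt(np.mean((Phi @ wts - targets) ** 2)))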

  14. A Stock Market Forecasting Model Combining Two-Directional Two-Dimensional Principal Component Analysis and Radial Basis Function Neural Network

    PubMed Central

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J.

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fit. The proposed model is then compared with models that use the traditional dimension reduction methods of principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron. PMID:25849483

  15. Comparative assessment of analytical approaches to quantify the risk for introduction of rare animal diseases: the example of avian influenza in Spain.

    PubMed

    Sánchez-Vizcaíno, Fernando; Perez, Andrés; Martínez-López, Beatriz; Sánchez-Vizcaíno, José Manuel

    2012-08-01

    Trade of animals and animal products imposes an uncertain and variable risk for the introduction of exotic animal diseases into importing countries. Risk analysis provides importing countries with an objective, transparent, and internationally accepted method for assessing that risk. Over the last decades, European Union countries have conducted probabilistic risk assessments quite frequently to quantify the risk of rare animal disease introduction into their territories. Most probabilistic animal health risk assessments have been typically classified into one-level and multilevel binomial models. One-level models are simpler than multilevel models because they assume that animals or products originate from one single population. However, it is unknown whether such simplification may result in substantially different results compared to those obtained through the use of multilevel models. Here, data used in a probabilistic multilevel binomial model formulated to assess the risk of highly pathogenic avian influenza introduction into Spain were reanalyzed using a one-level binomial model, and their outcomes were compared. An alternative ordinal model is also proposed here, which makes use of simpler assumptions and less information compared to those required by traditional one-level and multilevel approaches. Results suggest that, at least under certain circumstances, results of the one-level and ordinal approaches are similar to those obtained using multilevel models. Consequently, we argue that, when data are insufficient to run traditional probabilistic models, the ordinal approach presented here may be a suitable alternative to rank exporting countries in terms of the risk that they impose for the spread of rare animal diseases into disease-free countries. © 2012 Society for Risk Analysis.
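
    The difference between the model classes is easy to make concrete; all prevalences and consignment sizes below are invented, and whether the two answers agree depends on the parameter regime, as the abstract notes.

      def one_level_risk(n, p):
          # One-level binomial: all animals come from a single population with
          # prevalence p; probability that at least one import is infected
          return 1.0 - (1.0 - p) ** n

      def two_level_risk(n, p_flock, p_within):
          # Multilevel variant: the source flock is infected with probability
          # p_flock, and only then is each animal positive with prob p_within
          return p_flock * (1.0 - (1.0 - p_within) ** n)

      n = 500                                        # animals per consignment
      p_flock, p_within = 0.01, 0.02                 # invented prevalences
      print(one_level_risk(n, p_flock * p_within))   # same marginal prevalence
      print(two_level_risk(n, p_flock, p_within))    # clustering changes the answer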

  16. Continuity Clinic Model and Diabetic Outcomes in Internal Medicine Residencies: Findings of the Educational Innovations Project Ambulatory Collaborative

    PubMed Central

    Francis, Maureen D.; Julian, Katherine A.; Wininger, David A.; Drake, Sean; Bollman, KeriLyn; Nabors, Christopher; Pereira, Anne; Rosenblum, Michael; Zelenski, Amy B.; Sweet, David; Thomas, Kris; Varney, Andrew; Warm, Eric; Francis, Mark L.

    2016-01-01

    Background Efforts to improve diabetes care in residency programs are ongoing and in the midst of continuity clinic redesign at many institutions. While there appears to be a link between resident continuity and improvement in glycemic control for diabetic patients, it is uncertain whether clinic structure affects quality measures and patient outcomes. Methods This multi-institutional, cross-sectional study included 12 internal medicine programs. Three outcomes (glycemic control, blood pressure control, and achievement of target low-density lipoprotein [LDL]) and 2 process measures (A1C and LDL measurement) were reported for diabetic patients. Traditional, block, and combination clinic models were compared using analysis of covariance (ANCOVA). Analysis was adjusted for continuity, utilization, workload, and panel size. Results No significant differences were found in glycemic control across clinic models (P = .06). The percentage of diabetic patients with LDL < 100 mg/dL was 60% in block, compared to 54.9% and 55% in traditional and combination models (P = .006). The percentage of diabetic patients with blood pressure < 130/80 mmHg was 48.4% in block, compared to 36.7% and 36.9% in other models (P < .001). The percentage of diabetic patients with HbA1C measured was 92.1% in block compared to 75.2% and 82.1% in other models (P < .001). Also, the percentage of diabetic patients with LDL measured was significantly different across all groups, with 91.2% in traditional, 70.4% in combination, and 83.3% in block model programs (P < .001). Conclusions While high scores on diabetic quality measures are achievable in any clinic model, the block model design was associated with better performance. PMID:26913099

  17. Round Robin Analyses of the Steel Containment Vessel Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costello, J.F.; Hashimote, T.; Klamerus, E.W.

    A high pressure test of the steel containment vessel (SCV) model was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. Several organizations from the US, Europe, and Asia were invited to participate in a Round Robin analysis to perform independent pretest predictions and posttest evaluations of the behavior of the SCV model during the high pressure test. Both pretest and posttest analysis results from all Round Robin participants were compared to the high pressure test data. This paper summarizes the Round Robin analysis activities and discusses the lessons learned from the collective effort.

  18. A critical examination of the validity of simplified models for radiant heat transfer analysis.

    NASA Technical Reports Server (NTRS)

    Toor, J. S.; Viskanta, R.

    1972-01-01

    The directional effects of the simplified models are examined by comparing experimental data with predictions based on simple and more detailed models for the radiation characteristics of surfaces. Analytical results indicate that the constant property diffuse and specular models do not yield the upper and lower bounds on local radiant heat flux. In general, the constant property specular analysis yields higher values of irradiation than the constant property diffuse analysis. A diffuse surface in the enclosure appears to destroy the effect of specularity of the other surfaces. Semigray and gray analyses predict the irradiation reasonably well provided that the directional properties and the specularity of the surfaces are taken into account. The uniform and nonuniform radiosity diffuse models are in satisfactory agreement with each other.

  19. Comparative analysis through probability distributions of a data set

    NASA Astrophysics Data System (ADS)

    Cristea, Gabriel; Constantinescu, Dan Mihai

    2018-02-01

    In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, and demography. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available which can help us to select the best fitting model. Some of the graphs display both input data and fitted distributions at the same time, as probability density and cumulative distribution. The goodness of fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution, and compare that distance to some threshold values. Calculating the goodness of fit statistics also enables us to order the fitted distributions according to how well they fit the data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness of fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness of fit tests.
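
    A minimal version of this workflow, fitting several candidate distributions and ranking them by the Kolmogorov-Smirnov statistic, can be written with scipy; the data set below is synthetic.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(13)
      data = rng.gamma(shape=2.0, scale=3.0, size=1000)   # synthetic data set

      for name in ("gamma", "lognorm", "norm"):
          dist = getattr(stats, name)
          params = dist.fit(data)                # maximum-likelihood fit
          ks, p = stats.kstest(data, name, args=params)
          print(f"{name:8s} KS statistic = {ks:.4f}, p = {p:.3f}")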

  20. Multi-modal management of acromegaly: a value perspective.

    PubMed

    Kimmell, Kristopher T; Weil, Robert J; Marko, Nicholas F

    2015-10-01

    The Acromegaly Consensus Group recently released updated guidelines for the medical management of acromegaly patients. We subjected these guidelines to a cost analysis based on published efficacy rates as well as publicly available cost data. The results were compared to findings from a previously reported comparative effectiveness analysis of acromegaly treatments. Using decision tree software, two models were created based on the Acromegaly Consensus Group's recommendations and the comparative effectiveness analysis. The decision tree for the Consensus Group's recommendations was subjected to multi-way tornado analysis to identify the variables that most impacted the value analysis of the decision tree. The value analysis confirmed the Consensus Group's recommendation of somatostatin analogs as first-line therapy for medical management. Our model also demonstrated significant value in using dopamine agonist agents as upfront therapy. Sensitivity analysis identified the cost of somatostatin analogs and growth hormone receptor antagonists as having the most significant impact on the cost effectiveness of medical therapies. Our analysis confirmed the value of surgery as first-line therapy for patients with surgically accessible lesions. Surgery provides the greatest value for management of patients with acromegaly. However, in accordance with the Acromegaly Consensus Group's recent recommendations, somatostatin analogs provide the greatest value among medical options and should be used as first-line therapy for patients who cannot be managed surgically.

  1. The “Dry-Run” Analysis: A Method for Evaluating Risk Scores for Confounding Control

    PubMed Central

    Wyss, Richard; Hansen, Ben B.; Ellis, Alan R.; Gagne, Joshua J.; Desai, Rishi J.; Glynn, Robert J.; Stürmer, Til

    2017-01-01

    Abstract A propensity score (PS) model's ability to control confounding can be assessed by evaluating covariate balance across exposure groups after PS adjustment. The optimal strategy for evaluating a disease risk score (DRS) model's ability to control confounding is less clear. DRS models cannot be evaluated through balance checks within the full population, and they are usually assessed through prediction diagnostics and goodness-of-fit tests. A proposed alternative is the “dry-run” analysis, which divides the unexposed population into “pseudo-exposed” and “pseudo-unexposed” groups so that differences on observed covariates resemble differences between the actual exposed and unexposed populations. With no exposure effect separating the pseudo-exposed and pseudo-unexposed groups, a DRS model is evaluated by its ability to retrieve an unconfounded null estimate after adjustment in this pseudo-population. We used simulations and an empirical example to compare traditional DRS performance metrics with the dry-run validation. In simulations, the dry run often improved assessment of confounding control, compared with the C statistic and goodness-of-fit tests. In the empirical example, PS and DRS matching gave similar results and showed good performance in terms of covariate balance (PS matching) and controlling confounding in the dry-run analysis (DRS matching). The dry-run analysis may prove useful in evaluating confounding control through DRS models. PMID:28338910

  2. An assessment of gains and losses from international trade in the forest sector

    Treesearch

    Joseph Buongiorno; Craig Johnston; Shushuai Zhu

    2017-01-01

    The importance of international trade for the welfare of actors in the forest sector was estimated by comparing the current state of the world with a world in pure autarky with zero imports and exports of roundwood and manufactured wood products. The analysis was done with a comparative statics application of the Global Forest Products Model. The model was first...

  3. Developing a Drosophila Model of Schwannomatosis

    DTIC Science & Technology

    2013-02-01

    Drosophila melanogaster has become an important model system for cancer studies, in part because of the reduced redundancy of the Drosophila genome compared with that of vertebrates. … A microarray analysis of the entire Drosophila melanogaster genome was performed, and gene expression profiles of wild type, dCap-D3 and rbf1 mutants were compared.

  4. A 3D moisture-stress FEM analysis for time dependent problems in timber structures

    NASA Astrophysics Data System (ADS)

    Fortino, Stefania; Mirianon, Florian; Toratti, Tomi

    2009-11-01

    This paper presents a 3D moisture-stress numerical analysis for timber structures under variable humidity and load conditions. An orthotropic viscoelastic-mechanosorptive material model is specialized on the basis of previous models. Both the constitutive model and the equations needed to describe the moisture flow across the structure are implemented into user subroutines of the Abaqus finite element code and a coupled moisture-stress analysis is performed for several types of mechanical loads and moisture changes. The presented computational approach is validated by analyzing some wood tests described in the literature and comparing the computational results with the reported experimental data.
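
    The moisture-transport part of such analyses is typically a Fickian diffusion equation with a surface-emission boundary condition; a representative form (not necessarily the exact equations implemented in the subroutines) is

      \frac{\partial c}{\partial t} = \nabla \cdot \bigl(\mathbf{D}(c)\, \nabla c\bigr),
      \qquad
      -\mathbf{D}(c)\, \nabla c \cdot \mathbf{n} = S \bigl(c_{\mathrm{surf}} - c_{\mathrm{eq}}(\mathrm{RH})\bigr),

    where c is the moisture content, D(c) is the orthotropic moisture-dependent diffusivity tensor, S is a surface emission coefficient, and c_eq is the equilibrium moisture content at the ambient relative humidity.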

  5. Internet infrastructures and health care systems: a qualitative comparative analysis on networks and markets in the British National Health Service and Kaiser Permanente.

    PubMed

    Séror, Ann C

    2002-12-01

    The Internet and emergent telecommunications infrastructures are transforming the future of health care management. The costs of health care delivery systems, products, and services continue to rise everywhere, but performance of health care delivery is associated with institutional and ideological considerations as well as availability of financial and technological resources. The objective was to identify the effects of ideological differences on health care market infrastructures including the Internet and telecommunications technologies by a comparative case analysis of two large health care organizations: the British National Health Service and the California-based Kaiser Permanente health maintenance organization. A qualitative comparative analysis was conducted, focusing on the British National Health Service and the Kaiser Permanente health maintenance organization, to show how system infrastructures vary according to market dynamics dominated by health care institutions ("push") or by consumer demand ("pull"). System control mechanisms may be technologically embedded, institutional, or behavioral. The analysis suggests that telecommunications technologies and the Internet may contribute significantly to health care system performance in a context of ideological diversity. The study offers evidence to validate alternative models of health care governance: the national constitution model, and the enterprise business contract model. This evidence also suggests important questions for health care policy makers as well as researchers in telecommunications, organizational theory, and health care management.

  6. Internet Infrastructures and Health Care Systems: a Qualitative Comparative Analysis on Networks and Markets in the British National Health Service and Kaiser Permanente

    PubMed Central

    2002-01-01

    Background The Internet and emergent telecommunications infrastructures are transforming the future of health care management. The costs of health care delivery systems, products, and services continue to rise everywhere, but performance of health care delivery is associated with institutional and ideological considerations as well as availability of financial and technological resources. Objective To identify the effects of ideological differences on health care market infrastructures including the Internet and telecommunications technologies by a comparative case analysis of two large health care organizations: the British National Health Service and the California-based Kaiser Permanente health maintenance organization. Methods A qualitative comparative analysis focusing on the British National Health Service and the Kaiser Permanente health maintenance organization to show how system infrastructures vary according to market dynamics dominated by health care institutions ("push") or by consumer demand ("pull"). System control mechanisms may be technologically embedded, institutional, or behavioral. Results The analysis suggests that telecommunications technologies and the Internet may contribute significantly to health care system performance in a context of ideological diversity. Conclusions The study offers evidence to validate alternative models of health care governance: the national constitution model, and the enterprise business contract model. This evidence also suggests important questions for health care policy makers as well as researchers in telecommunications, organizational theory, and health care management. PMID:12554552

  7. Resolving the double tension: Toward a new approach to measurement modeling in cross-national research

    NASA Astrophysics Data System (ADS)

    Medina, Tait Runnfeldt

    The increasing global reach of survey research provides sociologists with new opportunities to pursue theory building and refinement through comparative analysis. However, comparison across a broad array of diverse contexts introduces methodological complexities related to the development of constructs (i.e., measurement modeling) that if not adequately recognized and properly addressed undermine the quality of research findings and cast doubt on the validity of substantive conclusions. The motivation for this dissertation arises from a concern that the availability of cross-national survey data has outpaced sociologists' ability to appropriately analyze and draw meaningful conclusions from such data. I examine the implicit assumptions and detail the limitations of three commonly used measurement models in cross-national analysis---summative scale, pooled factor model, and multiple-group factor model with measurement invariance. Using the orienting lens of the double tension I argue that a new approach to measurement modeling that incorporates important cross-national differences into the measurement process is needed. Two such measurement models---multiple-group factor model with partial measurement invariance (Byrne, Shavelson and Muthen 1989) and the alignment method (Asparouhov and Muthen 2014; Muthen and Asparouhov 2014)---are discussed in detail and illustrated using a sociologically relevant substantive example. I demonstrate that the former approach is vulnerable to an identification problem that arbitrarily impacts substantive conclusions. I conclude that the alignment method is built on model assumptions that are consistent with theoretical understandings of cross-national comparability and provides an approach to measurement modeling and construct development that is uniquely suited for cross-national research. The dissertation makes three major contributions: First, it provides theoretical justification for a new cross-national measurement model and explicates a link between theoretical conceptions of cross-national comparability and a statistical method. Second, it provides a clear and detailed discussion of model identification in multiple-group confirmatory factor analysis that is missing from the literature. This discussion sets the stage for the introduction of the identification problem within multiple-group confirmatory factor analysis with partial measurement invariance and the alternative approach to model identification employed by the alignment method. Third, it offers the first pedagogical presentation of the alignment method using a sociologically relevant example.

  8. Elastic-plastic models for multi-site damage

    NASA Technical Reports Server (NTRS)

    Actis, Ricardo L.; Szabo, Barna A.

    1994-01-01

    This paper presents recent developments in advanced analysis methods for the computation of stresses in structures with multi-site damage. The method of solution is based on the p-version of the finite element method. Its implementation was designed to permit extraction of linear stress intensity factors using a superconvergent extraction method (known as the contour integral method) and evaluation of the J-integral following an elastic-plastic analysis. Coarse meshes are adequate for obtaining accurate results supported by p-convergence data. The elastic-plastic analysis is based on the deformation theory of plasticity and the von Mises yield criterion. The model problem consists of an aluminum plate with six equally spaced holes and a crack emanating from each hole. The cracks are of different sizes. The panel is subjected to a remote tensile load. Experimental results are available for the panel. The plasticity analysis provided the same limit load as the experimentally determined load. The results of elastic-plastic analysis were compared with the results of linear elastic analysis in an effort to evaluate how plastic zone sizes influence the crack growth rates. The onset of net-section yielding was determined also. The results show that crack growth rate is accelerated by the presence of adjacent damage, and the critical crack size is shorter when the effects of plasticity are taken into consideration. This work also addresses the effects of alternative stress-strain laws: the elastic-ideally-plastic material model is compared against the Ramberg-Osgood model.
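
    For reference, the Ramberg-Osgood law mentioned above takes the standard one-dimensional form

      \varepsilon = \frac{\sigma}{E} + \alpha\, \frac{\sigma}{E} \left(\frac{\sigma}{\sigma_0}\right)^{n-1},

    where \sigma_0 is a reference (yield) stress and \alpha and n are fitted hardening constants; the elastic-ideally-plastic model is the limiting case in which stress is capped at yield with no hardening.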

  9. Studying the Representation Accuracy of the Earth's Gravity Field in the Polar Regions Based on the Global Geopotential Models

    NASA Astrophysics Data System (ADS)

    Koneshov, V. N.; Nepoklonov, V. B.

    2018-05-01

    The development of studies on estimating the accuracy of the Earth's modern global gravity models, expressed in terms of the spherical harmonics of the geopotential, in the problematic regions of the world is discussed. A comparative analysis of the results of reconstructing quasi-geoid heights and gravity anomalies from the different models is carried out for two polar regions selected within a radius of 1000 km from the North and South poles. The analysis covers nine recently developed models, including six high-resolution models and three lower order models, among them the Russian GAOP2012 model. It is shown that the modern models determine the quasi-geoid heights and gravity anomalies in the polar regions with errors ranging from 5-10 cm to a few tens of centimeters and from 3-5 mGal to a few tens of milligals, respectively, depending on the resolution. The accuracy of the models in the Arctic is several times higher than in the Antarctic. This is associated with the peculiarities of gravity anomalies in each particular region and with the fact that the polar part of the Antarctic has been comparatively less explored by gravity methods than the polar Arctic.
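
    For orientation, quasi-geoid heights are evaluated from a model's spherical harmonic coefficients by the standard expansion (notation as commonly used; the paper's conventions may differ slightly):

      N(\varphi, \lambda) = \frac{GM}{r\,\gamma} \sum_{n=2}^{n_{\max}} \left(\frac{a}{r}\right)^{n}
      \sum_{m=0}^{n} \bigl(\Delta\bar{C}_{nm} \cos m\lambda + \bar{S}_{nm} \sin m\lambda\bigr)\, \bar{P}_{nm}(\sin\varphi),

    where \Delta\bar{C}_{nm} and \bar{S}_{nm} are the fully normalized coefficients relative to a reference ellipsoid, \bar{P}_{nm} are the normalized associated Legendre functions, \gamma is normal gravity, and n_max is the model's maximum degree, i.e. its resolution.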

  10. Theoretical study of the accuracy of the pulse method, frontal analysis, and frontal analysis by characteristic points for the determination of single component adsorption isotherms.

    PubMed

    Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges

    2009-02-13

    The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated: frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM); their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N = 500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; and (2) the errors made in selecting an incorrect isotherm model and fitting the experimental data to it. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
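
    The isotherm models referred to above have standard closed forms; the Langmuir and bi-Langmuir models are

      q(C) = \frac{q_s\, b C}{1 + b C},
      \qquad
      q(C) = \frac{q_{s,1}\, b_1 C}{1 + b_1 C} + \frac{q_{s,2}\, b_2 C}{1 + b_2 C},

    and the Moreau model adds an adsorbate-adsorbate interaction term. In FA, each breakthrough plateau at concentration C yields one data point through a mass balance of the form q* = C (V_eq - V_0) / V_a, with V_eq the retention volume of the breakthrough front, V_0 the column hold-up volume, and V_a the volume of adsorbent.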

  11. Advanced grid-stiffened composite shells for applications in heavy-lift helicopter rotor blade spars

    NASA Astrophysics Data System (ADS)

    Narayanan Nampy, Sreenivas

    Modern rotor blades are constructed using composite materials to exploit their superior structural performance compared to metals. Helicopter rotor blade spars are conventionally designed as monocoque structures. Blades of the proposed Heavy Lift Helicopter are envisioned to be as heavy as 800 lbs when designed with a monocoque spar. A new and innovative design is proposed to replace the conventional spar designs with a lightweight grid-stiffened composite shell. Composite stiffened shells are known to provide an excellent strength-to-weight ratio and damage tolerance, with excellent potential to reduce weight. Conventional stringer-rib stiffened construction is not suitable for rotor blade spars, since it is limited in generating the high torsion stiffness required for aeroelastic stability of the rotor. As a result, off-axis (helical) stiffeners must be provided. This is a new design space where innovative modeling techniques are needed. The structural behavior of grid-stiffened structures under the axial, bending, and torsion loads typically experienced by rotor blades needs to be accurately predicted. The overall objective of the present research is to develop and integrate the necessary design analysis tools to conduct a feasibility study of employing grid-stiffened shells for heavy-lift rotor blade spars. After evaluating the limitations of state-of-the-art analytical models in predicting the axial, bending, and torsion stiffness coefficients of grid and grid-stiffened structures, a new analytical model was developed. The new analytical model, based on the smeared stiffness approach, was formulated using the stiffness matrices of the constituent members of the grid structure, such as an arch, helical, or straight beam representing circumferential, helical, and longitudinal stiffeners. This analysis can model various stiffening configurations such as angle-grid, ortho-grid, and general-grid. Analyses were performed using an existing state-of-the-art model and the newly developed model to predict the torsion, bending, and axial stiffness of grid and grid-stiffened structures with various stiffening configurations. These predictions were compared to results generated using finite element analysis (FEA), showing excellent correlation (within 6%) over a range of parameters such as grid density, stiffener angle, and aspect ratio of the stiffener cross-section. Experimental results from cylindrical grid specimen testing were also compared with predictions from the new analysis, which predicted stiffness coefficients with nearly 7% error relative to FEA results. From the parametric studies conducted, it was observed that the previous state-of-the-art analysis, on the other hand, exhibited errors on the order of 39% for certain designs. Stability evaluations were also conducted by integrating the new analysis with established stability formulations. A design study was conducted to evaluate the potential weight savings of a simple grid-stiffened rotor blade spar structure compared to a baseline monocoque design. Various design constraints such as stiffness, strength, and stability were imposed. A manual search was conducted over design parameters such as stiffener density, stiffener angle, shell laminate, and stiffener aspect ratio to find lightweight grid-stiffened designs compared to the baseline.
It was found that a weight saving of 9.1% compared to the baseline is possible without violating any of the design constraints.
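
    To make the smeared stiffness idea concrete, the sketch below computes textbook smeared membrane-stiffness contributions for a lattice of helical ribs at ±φ. This is the generic smearing approach used in the field, not the beam-stiffness-matrix model developed in this work; all inputs are illustrative.

```python
# Sketch: a generic smeared-stiffness estimate for a lattice of helical
# stiffeners at +/-phi (textbook approach, not the new beam-based model
# developed in this work). Membrane stiffness contributions per unit width
# follow a rule-of-mixtures style projection of the rib stiffness.
import numpy as np

def smeared_membrane_stiffness(E, A, d, phi_deg):
    """Return A11, A22, A12, A66 contributions of +/-phi helical ribs.

    E : rib modulus, A : rib cross-section area,
    d : normal spacing between parallel ribs, phi : angle from the shell axis.
    """
    phi = np.radians(phi_deg)
    k = 2.0 * E * A / d          # two rib families (+phi and -phi)
    c, s = np.cos(phi), np.sin(phi)
    return k * c**4, k * s**4, k * s**2 * c**2, k * s**2 * c**2

A11, A22, A12, A66 = smeared_membrane_stiffness(E=140e9, A=6e-6, d=0.03, phi_deg=30.0)
print(f"A11={A11:.3e} N/m, A22={A22:.3e} N/m, A12={A12:.3e} N/m, A66={A66:.3e} N/m")
```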

  12. Statistics of SU(5) D-brane models on a type II orientifold

    NASA Astrophysics Data System (ADS)

    Gmeiner, Florian; Stein, Maren

    2006-06-01

    We perform a statistical analysis of models with SU(5) and flipped SU(5) gauge group in a type II orientifold setup. We investigate the distribution and correlation of properties of these models, including the number of generations and the hidden sector gauge group. Compared to the recent analysis of models with a standard-model-like gauge group [F. Gmeiner, R. Blumenhagen, G. Honecker, D. Lüst, and T. Weigand, J. High Energy Phys. 01 (2006) 004, doi:10.1088/1126-6708/2006/01/004; F. Gmeiner, Fortschr. Phys. 54, 391 (2006)], we find very similar results.

  13. Comparison of CTT and Rasch-based approaches for the analysis of longitudinal Patient Reported Outcomes.

    PubMed

    Blanchin, Myriam; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Blanchard, Claire; Mirallié, Eric; Sébille, Véronique

    2011-04-15

    Health sciences frequently deal with Patient Reported Outcomes (PRO) data for the evaluation of concepts, in particular health-related quality of life, which cannot be directly measured and are often called latent variables. Two approaches are commonly used for the analysis of such data: Classical Test Theory (CTT) and Item Response Theory (IRT). Longitudinal data are often collected to analyze the evolution of an outcome over time. The most adequate strategy to analyze longitudinal latent variables, which can be based on either CTT or IRT models, remains to be identified. This strategy must take into account the latent character of what PROs are intended to measure as well as the specificity of longitudinal designs. A simple and widely used IRT model is the Rasch model. The purpose of our study was to compare CTT and Rasch-based approaches to analyze longitudinal PRO data regarding type I error, power, and time effect estimation bias. Four methods were compared: the Score and Mixed models (SM) method based on the CTT approach, and the Rasch and Mixed models (RM), Plausible Values (PV), and Longitudinal Rasch model (LRM) methods, all based on the Rasch model. All methods showed comparable type I error rates, all close to 5%. The LRM and SM methods presented comparable power and unbiased time effect estimations, whereas the RM and PV methods showed low power and biased time effect estimations. This suggests that the RM and PV methods should be avoided when analyzing longitudinal latent variables. Copyright © 2010 John Wiley & Sons, Ltd.

  14. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    PubMed

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time domain analysis of electroencephalogram (EEG) signals. In general, a peak model in time domain analysis consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one with the most reliable peak detection performance in a particular application. A fair measure of the performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four peak models using the extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72% accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than the Acir and Liu models, which were in the range of 37-52%, while showing no significant difference from the Dumpala model.
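
    The evaluation platform named above is easy to sketch: an ELM fixes random hidden-layer weights and solves only the output weights by least squares. The toy data below stand in for peak/non-peak feature vectors; this is a minimal illustration, not the study's implementation.

```python
# Sketch: the core of an extreme learning machine classifier. Hidden-layer
# weights are random and fixed; only the output weights are solved by
# least squares (pseudo-inverse).
import numpy as np

rng = np.random.default_rng(2)

def elm_train(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy example: two noisy point clouds standing in for peak / non-peak
# feature vectors (amplitude, width, slope, ...).
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(2, 1, (100, 3))])
y = np.hstack([np.zeros(100), np.ones(100)])
W, b, beta = elm_train(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```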

  15. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

    Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We refer to models subject to this uncertainty as sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and enable evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing data mechanism model assumption by comparing simulated datasets from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of the method is to plug in a plausibility evaluation system for each sensitivity parameter, selecting plausible values and rejecting unlikely ones, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
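
    The distance comparison at the core of this approach can be sketched in a few lines: simulate data under a candidate MNAR model and score it by K-nearest-neighbour distances to the observed data. The example below is a bare-bones stand-in (shifted Gaussians in place of real MNAR simulations), not the paper's full evaluation system.

```python
# Sketch of the distance idea: compare a simulated dataset from a candidate
# model against the observed data with K-nearest-neighbour distances;
# smaller typical distances suggest a more plausible sensitivity parameter.
import numpy as np
from scipy.spatial import cKDTree

def knn_discrepancy(observed, simulated, k=3):
    """Mean distance from each observed point to its k-th nearest simulated point."""
    tree = cKDTree(simulated)
    dist, _ = tree.query(observed, k=k)
    return dist[:, -1].mean()

rng = np.random.default_rng(3)
observed = rng.normal(0.0, 1.0, size=(200, 2))
# Two candidate models, here just shifted distributions:
sim_plausible = rng.normal(0.1, 1.0, size=(200, 2))
sim_unlikely = rng.normal(1.5, 1.0, size=(200, 2))
print(f"plausible model discrepancy: {knn_discrepancy(observed, sim_plausible):.3f}")
print(f"unlikely model discrepancy:  {knn_discrepancy(observed, sim_unlikely):.3f}")
```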

  16. Miniaturization of Micro-Solder Bumps and Effect of IMC on Stress Distribution

    NASA Astrophysics Data System (ADS)

    Choudhury, Soud Farhan; Ladani, Leila

    2016-07-01

    As the joints become smaller in more advanced packages and devices, the intermetallic compound (IMC) volume ratio increases, which significantly impacts the overall mechanical behavior of the joints. The existence of only a few grains of Sn (tin) and IMC materials results in anisotropic elastic and plastic behavior that is not captured by conventional finite element (FE) simulation with averaged polycrystalline properties. In this study, crystal plasticity finite element (CPFE) simulation is used to model the whole joint, including the copper, Sn solder, and Cu6Sn5 IMC material. Experimental lap-shear test results for solder joints from the literature were used to validate the models. A comparative analysis between traditional FE, CPFE, and experiments was conducted. The CPFE model matched the experiments more closely than traditional FE analysis because of its ability to capture micro-mechanical anisotropic behavior. Further analysis evaluated the effect of IMC thickness on the stress distribution in micro-bumps through a systematic numerical experiment with IMC thickness ranging from 0% to 80%, conducted on micro-bumps with single-crystal Sn and bicrystal Sn. The overall stress distribution and shear deformation change as the IMC thickness increases: models with greater IMC thickness show a stiffer shear response and a higher shear yield strength.

  17. Multiple receptor conformation docking, dock pose clustering and 3D QSAR studies on human poly(ADP-ribose) polymerase-1 (PARP-1) inhibitors.

    PubMed

    Fatima, Sabiha; Jatavath, Mohan Babu; Bathini, Raju; Sivan, Sree Kanth; Manga, Vijjulatha

    2014-10-01

    Poly(ADP-ribose) polymerase-1 (PARP-1) functions as a DNA damage sensor and signaling molecule. It plays a vital role in the repair of DNA strand breaks induced by radiation and chemotherapeutic drugs; inhibitors of this enzyme have the potential to improve cancer chemotherapy or radiotherapy. Three-dimensional quantitative structure-activity relationship (3D QSAR) models were developed using comparative molecular field analysis, comparative molecular similarity indices analysis, and docking studies. A set of 88 molecules was docked into the active site of six X-ray crystal structures of PARP-1 by a procedure called multiple receptor conformation docking (MRCD), in order to improve the 3D QSAR models through the analysis of binding conformations. The docked poses were clustered to obtain the best receptor binding conformation, and the poses from clustering were used for the 3D QSAR analysis. Based on the MRCD and QSAR information, some key features were identified that explain the observed variance in the activity. Two receptor-based QSAR models were generated; these models showed good internal and external statistical reliability, evident from their cross-validated and predictive correlation statistics. The identified key features enabled us to design new PARP-1 inhibitors.

  18. [The trial of business data analysis at the Department of Radiology by constructing the auto-regressive integrated moving-average (ARIMA) model].

    PubMed

    Tani, Yuji; Ogasawara, Katsuhiko

    2012-01-01

    This study aimed to contribute to the management of a healthcare organization by providing management information through time-series analysis of business data accumulated in the hospital information system, which had not been utilized thus far. We examined the performance of a prediction method using the auto-regressive integrated moving-average (ARIMA) model on business data obtained at the Radiology Department. We built the model using the number of radiological examinations over the preceding nine years and predicted the number of examinations in the final year, then compared the actual values with the forecast values. We established that the prediction method was simple and cost-effective to implement with free software, and that a simple model could be built by removing trend components from the data in pre-processing. The difference between predicted and actual values was 10%; however, understanding the chronological change was more important than the individual time-series values. Furthermore, the method is highly versatile and adaptable to general time-series data, so different healthcare organizations can use it for the analysis and forecasting of their own business data.
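
    A minimal version of this workflow is sketched below on synthetic monthly examination counts with the free statsmodels library: fit an ARIMA model on the first eight years, forecast the final year, and report the percentage difference. The model order and data are illustrative, not the study's.

```python
# Sketch: ARIMA forecasting of monthly examination counts with free software.
# Synthetic data: a slow workload trend plus seasonality plus noise.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
months = 9 * 12
trend = np.linspace(800, 1100, months)
counts = (trend
          + 50 * np.sin(np.arange(months) * 2 * np.pi / 12)
          + rng.normal(scale=30, size=months))

train, test = counts[:-12], counts[-12:]       # first 8 years / final year
fit = ARIMA(train, order=(1, 1, 1)).fit()
forecast = fit.forecast(steps=12)
mape = np.mean(np.abs(forecast - test) / test) * 100
print(f"mean absolute percentage difference: {mape:.1f}%")
```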

  19. FGWAS: Functional genome wide association analysis.

    PubMed

    Huang, Chao; Thompson, Paul; Wang, Yalin; Yu, Yang; Zhang, Jingwen; Kong, Dehan; Colen, Rivka R; Knickmeyer, Rebecca C; Zhu, Hongtu

    2017-10-01

    Functional phenotypes (e.g., subcortical surface representation), which commonly arise in imaging genetic studies, have been used to detect putative genes for complexly inherited neuropsychiatric and neurodegenerative disorders. However, existing statistical methods largely ignore the functional features (e.g., functional smoothness and correlation). The aim of this paper is to develop a functional genome-wide association analysis (FGWAS) framework to efficiently carry out whole-genome analyses of functional phenotypes. FGWAS consists of three components: a multivariate varying coefficient model, a global sure independence screening procedure, and a test procedure. Compared with the standard multivariate regression model, the multivariate varying coefficient model explicitly models the functional features of functional phenotypes through the integration of smooth coefficient functions and functional principal component analysis. Statistically, compared with existing methods for genome-wide association studies (GWAS), FGWAS can substantially boost the detection power for discovering important genetic variants influencing brain structure and function. Simulation studies show that FGWAS outperforms existing GWAS methods for searching sparse signals in an extremely large search space, while controlling for the family-wise error rate. We have successfully applied FGWAS to large-scale analysis of data from the Alzheimer's Disease Neuroimaging Initiative for 708 subjects, 30,000 vertices on the left and right hippocampal surfaces, and 501,584 SNPs. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Comparison of speckle-tracking echocardiography with invasive hemodynamics for the detection of characteristic cardiac dysfunction in type-1 and type-2 diabetic rat models.

    PubMed

    Mátyás, Csaba; Kovács, Attila; Németh, Balázs Tamás; Oláh, Attila; Braun, Szilveszter; Tokodi, Márton; Barta, Bálint András; Benke, Kálmán; Ruppert, Mihály; Lakatos, Bálint Károly; Merkely, Béla; Radovits, Tamás

    2018-01-16

    Measurement of systolic and diastolic function in animal models is challenging by conventional non-invasive methods. Therefore, we aimed at comparing speckle-tracking echocardiography (STE)-derived parameters to the indices of left ventricular (LV) pressure-volume (PV) analysis to detect cardiac dysfunction in rat models of type-1 (T1DM) and type-2 (T2DM) diabetes mellitus. Rat models of T1DM (induced by 60 mg/kg streptozotocin, n = 8) and T2DM (32-week-old Zucker Diabetic Fatty rats, n = 7) and corresponding control animals (n = 5 and n = 8, respectively) were compared. Echocardiography and LV PV analysis were performed. LV short-axis recordings were used for STE analysis. Global circumferential strain, peak strain rate values in systole (SrS), isovolumic relaxation (SrIVR) and early diastole (SrE) were measured. LV contractility, active relaxation and stiffness were measured by PV analysis. In T1DM, contractility and active relaxation were deteriorated to a greater extent compared to T2DM. In contrast, diastolic stiffness was impaired in T2DM. Correspondingly, STE described more severe systolic dysfunction in T1DM. Among diastolic STE parameters, SrIVR was more decreased in T1DM, however, SrE was more reduced in T2DM. In T1DM, SrS correlated with contractility, SrIVR with active relaxation, while in T2DM SrE was related to cardiac stiffness, cardiomyocyte diameter and fibrosis. Strain and strain rate parameters can be valuable and feasible measures to describe the dynamic changes in contractility, active relaxation and LV stiffness in animal models of T1DM and T2DM. STE corresponds to PV analysis and also correlates with markers of histological myocardial remodeling.

  1. Sonographically guided intrasheath percutaneous release of the first annular pulley for trigger digits, part 2: randomized comparative study of the economic impact of 3 surgical models.

    PubMed

    Rojo-Manaute, Jose Manuel; Capa-Grasa, Alberto; Del Cerro-Gutiérrez, Miguel; Martínez, Manuel Villanueva; Chana-Rodríguez, Francisco; Martín, Javier Vaquero

    2012-03-01

    Trigger digit surgery can be performed by an open approach using classic open surgery, by a wide-awake approach, or by sonographically guided first annular pulley release in day surgery and office-based ambulatory settings. Our goal was to perform a turnover and economic analysis of 3 surgical models. Two studies were conducted. The first was a turnover analysis of 57 patients allocated 4:4:1 into the surgical models: sonographically guided-office-based, classic open-day surgery, and wide-awake-office-based. Regression analysis of the turnover time was monitored for assessing stability (R² < .26). Second, on the basis of turnover times and hospital tariff revenues, we calculated the total costs, income-to-cost ratio, opportunity cost, true cost, true net income (primary variable), and break-even points for sonographically guided fixed costs, with 1-way analysis for identifying thresholds among alternatives. Thirteen sonographically guided-office-based patients were withdrawn because of a learning curve influence. The wide-awake (n = 6) and classic (n = 26) models were compared to the last 25% of the sonographically guided group (n = 12), which showed significantly shorter mean turnover times, income-to-cost ratios 2.52 and 10.9 times larger, and true costs 75.48 and 20.92 times lower, respectively. A true net income break-even point occurred after 19.78 sonographically guided-office-based procedures. Sensitivity analysis showed a threshold between wide-awake and last-25% sonographically guided true costs if the last-25% sonographically guided turnover times reached 65.23 and 27.81 minutes, respectively. This trial comparing surgical models was underpowered and is inconclusive on turnover times; however, the sonographically guided-office-based approach showed shorter turnover times and better economic results, with a quick recoup of the costs of sonographically assisted surgery.

  2. Application of a High-Fidelity Icing Analysis Method to a Model-Scale Rotor in Forward Flight

    NASA Technical Reports Server (NTRS)

    Narducci, Robert; Orr, Stanley; Kreeger, Richard E.

    2012-01-01

    An icing analysis process involving the loose coupling of OVERFLOW and RCAS for rotor performance prediction with LEWICE3D for thermal analysis and ice accretion is applied to a model-scale rotor for validation. The process offers high-fidelity rotor analysis for non-iced and iced rotor performance evaluation that accounts for the interaction of nonlinear aerodynamics with blade elastic deformations. Ice accumulation prediction also involves loosely coupled data exchanges between OVERFLOW and LEWICE3D to produce accurate ice shapes. Validation of the process uses data collected in the 1993 icing test involving Sikorsky's Powered Force Model. Non-iced and iced rotor performance predictions are compared to experimental measurements, as are predicted ice shapes.

  3. Stability Analysis of the Slowed-Rotor Compound Helicopter Configuration

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne; Floros, Matthew W.

    2004-01-01

    The stability and control of rotors at high advance ratio are considered. Teetering, articulated, gimbaled, and rigid hub types are considered for a compound helicopter (rotor and fixed wing). Stability predictions obtained using an analytical rigid flapping blade analysis, a rigid blade CAMRAD II model, and an elastic blade CAMRAD II model are compared. For the flapping blade analysis, the teetering rotor is the most stable, showing no instabilities up to an advance ratio of 3 and a Lock number of 18. With an elastic blade model, the teetering rotor is unstable at an advance ratio of 1.5. Analysis of the trim controls and blade flapping shows that for small positive collective pitch, trim can be maintained without excessive control input or flapping angles.

  4. Parameterization of the InVEST Crop Pollination Model to spatially predict abundance of wild blueberry (Vaccinium angustifolium Aiton) native bee pollinators in Maine, USA

    USGS Publications Warehouse

    Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.

    2016-01-01

    Non-native honeybees historically have been managed for crop pollination; however, recent population declines draw attention to pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not yield the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; the InVEST model, however, provides an efficient tool to estimate bee abundance beyond the field perimeter.

  5. Modeling of human operator dynamics in simple manual control utilizing time series analysis. [tracking (position)

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Osafo-Charles, F.; Oneill, W. D.; Gottlieb, G. L.

    1982-01-01

    Time series analysis is applied to model human operator dynamics in pursuit and compensatory tracking modes. The normalized residual criterion is used as a one-step analytical tool to encompass the processes of identification, estimation, and diagnostic checking. A parameter constraining technique is introduced to develop more reliable models of human operator dynamics. The human operator is adequately modeled by a second order dynamic system both in pursuit and compensatory tracking modes. In comparing the data sampling rates, 100 msec between samples is adequate and is shown to provide better results than 200 msec sampling. The residual power spectrum and eigenvalue analysis show that the human operator is not a generator of periodic characteristics.

  6. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

    Because the traditional entropy value method still has low accuracy when evaluating the performance of mining projects, a performance evaluation model for mining projects founded on an improved entropy value method is proposed. First, a new weight-assignment model is established, combining compatibility matrix analysis from the analytic hierarchy process (AHP) with the entropy value method: when the compatibility matrix analysis achieves the consistency requirement, any differences between the subjective and objective weights are reconciled by moderately adjusting their proportions; on this basis, a fuzzy evaluation matrix is constructed for the performance evaluation. Simulation experiments show that, compared with the traditional entropy value and compatibility matrix analysis methods, the proposed performance evaluation model based on the improved entropy value method has higher assessment accuracy.
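
    The objective-weighting half of such a scheme is compact enough to sketch. Below, the classic entropy value method derives indicator weights from a small decision matrix, and a simple convex combination stands in for the "moderate adjustment" between subjective (AHP) and objective weights; all numbers are illustrative.

```python
# Sketch: the classic entropy value method applied to a decision matrix
# (alternatives x indicators), plus a convex blend with subjective weights.
import numpy as np

def entropy_weights(X):
    """Entropy value method: objective weights from an m x n decision matrix."""
    P = X / X.sum(axis=0)                      # column-normalised proportions
    k = 1.0 / np.log(X.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -k * plogp.sum(axis=0)                 # entropy per indicator
    d = 1.0 - e                                # degree of diversification
    return d / d.sum()

X = np.array([[0.8, 120.0, 3.2],
              [0.6, 150.0, 2.9],
              [0.9,  90.0, 3.5]])              # 3 mining projects, 3 indicators
w_objective = entropy_weights(X)
w_subjective = np.array([0.5, 0.3, 0.2])       # e.g. from an AHP judgment matrix
w_combined = 0.5 * w_objective + 0.5 * w_subjective  # moderate adjustment of proportions
print(w_objective.round(3), w_combined.round(3))
```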

  7. Analysis of terahertz dielectric properties of pork tissue

    NASA Astrophysics Data System (ADS)

    Huang, Yuqing; Xie, Qiaoling; Sun, Ping

    2017-10-01

    Since about 70% of fresh biological tissue is water, many scientists try to use water models to describe the dielectric properties of biological tissues. The classical water dielectric models are the Debye model, the double Debye model, and the Cole-Cole model. This work aims to determine a suitable model by comparing the three models above with experimental data from fresh pork tissue. The parameters of the different models are fitted to the experimental data by the least squares method. Comparing the models against the measured dielectric function, the Cole-Cole model is verified as the best description of the pork tissue experiments. The correction factor α of the Cole-Cole model is an important modification for biological tissues, so the Cole-Cole model should be the first choice for describing the dielectric properties of biological tissues in the terahertz range.
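
    For reference, the three relaxation models compared above differ only in the form of the denominator; the sketch below evaluates them at terahertz frequencies with water-like placeholder parameters (not the fitted pork-tissue values).

```python
# Sketch: the three water-based dielectric models compared in the paper,
# evaluated at terahertz frequencies. Parameter values are illustrative
# water-like placeholders, not the fitted pork-tissue values.
import numpy as np

def debye(omega, eps_inf, d_eps, tau):
    return eps_inf + d_eps / (1 + 1j * omega * tau)

def double_debye(omega, eps_inf, d1, tau1, d2, tau2):
    return eps_inf + d1 / (1 + 1j * omega * tau1) + d2 / (1 + 1j * omega * tau2)

def cole_cole(omega, eps_inf, d_eps, tau, alpha):
    # alpha = 0 recovers Debye; 0 < alpha < 1 broadens the relaxation.
    return eps_inf + d_eps / (1 + (1j * omega * tau) ** (1 - alpha))

f = np.linspace(0.2e12, 1.5e12, 4)   # 0.2-1.5 THz
omega = 2 * np.pi * f
print(np.round(debye(omega, 3.0, 75.0, 8.3e-12), 2))
print(np.round(double_debye(omega, 3.0, 72.0, 8.3e-12, 2.0, 0.2e-12), 2))
print(np.round(cole_cole(omega, 3.0, 75.0, 8.3e-12, 0.1), 2))
```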

  8. Incorporating twitter-based human activity information in spatial analysis of crashes in urban areas.

    PubMed

    Bao, Jie; Liu, Pan; Yu, Hao; Xu, Chengcheng

    2017-09-01

    The primary objective of this study was to investigate how to incorporate human activity information into spatial analysis of crashes in urban areas using Twitter check-in data. This study used data collected from the City of Los Angeles in the United States to illustrate the procedure. Five types of data were collected: crash data, human activity data, traditional traffic exposure variables, road network attributes, and socio-demographic data. A web crawler written in Python was developed to collect the venue type information from the Twitter check-in data automatically. The human activities were classified into seven categories by the obtained venue types. The collected data were aggregated into 896 Traffic Analysis Zones (TAZ). Geographically weighted regression (GWR) models were developed to establish a relationship between the crash counts reported in a TAZ and various contributing factors. Comparative analyses were conducted on GWR models that considered traditional traffic exposure variables only, Twitter-based human activity variables only, and both together. The model specification results suggested that human activity variables significantly affected the crash counts in a TAZ. The comparative analyses suggested that the models considering both traditional traffic exposure and human activity variables had the best goodness-of-fit, with the highest R² and lowest AICc values. This finding confirms the benefits of incorporating human activity information into spatial analysis of crashes using Twitter check-in data. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. "A Bayesian sensitivity analysis to evaluate the impact of unmeasured confounding with external data: a real world comparative effectiveness study in osteoporosis".

    PubMed

    Zhang, Xiang; Faries, Douglas E; Boytsov, Natalie; Stamey, James D; Seaman, John W

    2016-09-01

    Observational studies are frequently used to assess the effectiveness of medical interventions in routine clinical practice. However, the use of observational data for comparative effectiveness is challenged by selection bias and the potential of unmeasured confounding. This is especially problematic for analyses using a health care administrative database, in which key clinical measures are often not available. This paper provides an approach to conducting a sensitivity analysis to investigate the impact of unmeasured confounding in observational studies. In a real-world osteoporosis comparative effectiveness study, the bone mineral density (BMD) score, an important predictor of fracture risk and a factor in the selection of osteoporosis treatments, is unavailable in the database, and the lack of baseline BMD could potentially lead to significant selection bias. We implemented Bayesian twin-regression models, which simultaneously model both the observed outcome and the unobserved unmeasured confounder, using information from external sources. A sensitivity analysis was also conducted to assess the robustness of our conclusions to changes in such external data. The use of Bayesian modeling in this study suggests that the lack of baseline BMD did have a strong impact on the analysis, reversing the direction of the estimated effect (odds ratio of fracture incidence at 24 months: 0.40 vs. 1.36, with/without adjusting for unmeasured baseline BMD). The Bayesian twin-regression models provide a flexible sensitivity analysis tool to quantitatively assess the impact of unmeasured confounding in observational studies. Copyright © 2016 John Wiley & Sons, Ltd.

  10. CFD Analysis of different types of single basin solar stills

    NASA Astrophysics Data System (ADS)

    Maheswari, C. Uma; Meenakshi Reddy, R.

    2018-03-01

    The current work deals with the numerical and experimental analysis of a single-basin solar still with improved single-slope configurations: stepped, finned, and with PCM (phase change material). The work is additionally extended to a double-slope single-basin solar still, and the performances are compared with one another. Single-slope basin inclinations of 15° and 20° were compared. The investigations showed that the single-slope still at 20° with PCM gave higher productivity than the other configurations.

  11. Waveform model for an eccentric binary black hole based on the effective-one-body-numerical-relativity formalism

    NASA Astrophysics Data System (ADS)

    Cao, Zhoujian; Han, Wen-Biao

    2017-08-01

    Binary black hole systems are among the most important sources for gravitational wave detection. They are also good objects for theoretical research in general relativity. A gravitational waveform template is important to data analysis. The effective-one-body-numerical-relativity (EOBNR) model has played an essential role in the LIGO data analysis. For future space-based gravitational wave detection, many binary systems will retain some orbital eccentricity. At the same time, eccentric binaries are also an interesting topic for theoretical study in general relativity. In this paper, we construct the first eccentric binary waveform model based on the effective-one-body-numerical-relativity framework. Our basic assumption in the model construction is that the eccentricity involved is small. We have compared our eccentric EOBNR model to the circular one used in the LIGO data analysis. We have also tested our eccentric EOBNR model against another recently proposed eccentric binary waveform model, against numerical relativity simulation results, and against perturbation approximation results for extreme-mass-ratio binary systems. Compared to numerical relativity simulations with eccentricity as large as about 0.2, the overlap factor for our eccentric EOBNR model is better than 0.98 for all tested cases, including spinless and spinning, equal-mass and unequal-mass binaries. Hopefully, our eccentric model can be the starting point for developing a faithful template for future space-based gravitational wave detectors.
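
    The overlap factor quoted above is a normalised inner product between two waveforms. The sketch below computes it for two toy chirps with a flat (white) noise weighting; a detector-grade match would weight the inner product by the noise power spectral density and maximise over time and phase shifts.

```python
# Sketch: overlap (match) statistic between two waveform models with a flat
# noise weighting, on toy chirp signals.
import numpy as np

def overlap(h1, h2):
    inner = lambda a, b: np.sum(a * np.conj(b)).real
    return inner(h1, h2) / np.sqrt(inner(h1, h1) * inner(h2, h2))

t = np.linspace(0.0, 1.0, 4096)
chirp = np.sin(2 * np.pi * (30 * t + 40 * t**2))   # toy "signal"
model = np.sin(2 * np.pi * (30 * t + 41 * t**2))   # slightly mismatched model
print(f"overlap: {overlap(chirp, model):.4f}")
```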

  12. Research on the spatial analysis method of seismic hazard for island

    NASA Astrophysics Data System (ADS)

    Jia, Jing; Jiang, Jitong; Zheng, Qiuhong; Gao, Huiying

    2017-05-01

    Seismic hazard analysis (SHA) is a key component of earthquake disaster prevention for island engineering: microscopically, its results provide parameters for seismic design, and macroscopically it is requisite work for the earthquake and comprehensive disaster prevention portions of island conservation planning, in the exploitation and construction of both inhabited and uninhabited islands. The existing seismic hazard analysis methods are compared in their applications, and their applicability and limitations for islands are analysed. A specialized spatial analysis method of seismic hazard for islands (SAMSHI) is then given to support further work on earthquake disaster prevention planning, based on spatial analysis tools in GIS and a fuzzy comprehensive evaluation model. The basic spatial database of SAMSHI includes fault data, historical earthquake records, geological data, and Bouguer gravity anomaly data, which are the data sources for the 11 indices of the fuzzy comprehensive evaluation model; these indices are calculated by the spatial analysis model constructed on ArcGIS's ModelBuilder platform.

  13. Prediction of unwanted pregnancies using logistic regression, probit regression and discriminant analysis

    PubMed Central

    Ebrahimzadeh, Farzad; Hajizadeh, Ebrahim; Vahabi, Nasim; Almasian, Mohammad; Bakhteyar, Katayoon

    2015-01-01

    Background: Unwanted pregnancy not intended by at least one of the parents has undesirable consequences for the family and the society. In the present study, three classification models were used and compared to predict unwanted pregnancies in an urban population. Methods: In this cross-sectional study, 887 pregnant mothers referring to health centers in Khorramabad, Iran, in 2012 were selected by the stratified and cluster sampling; relevant variables were measured and for prediction of unwanted pregnancy, logistic regression, discriminant analysis, and probit regression models and SPSS software version 21 were used. To compare these models, indicators such as sensitivity, specificity, the area under the ROC curve, and the percentage of correct predictions were used. Results: The prevalence of unwanted pregnancies was 25.3%. The logistic and probit regression models indicated that parity and pregnancy spacing, contraceptive methods, household income and number of living male children were related to unwanted pregnancy. The performance of the models based on the area under the ROC curve was 0.735, 0.733, and 0.680 for logistic regression, probit regression, and linear discriminant analysis, respectively. Conclusion: Given the relatively high prevalence of unwanted pregnancies in Khorramabad, it seems necessary to revise family planning programs. Despite the similar accuracy of the models, if the researcher is interested in the interpretability of the results, the use of the logistic regression model is recommended. PMID:26793655
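
    The model comparison reported here is easy to reproduce in outline. The sketch below fits logistic regression, probit regression, and linear discriminant analysis to a synthetic binary outcome and compares areas under the ROC curve; the covariates are stand-ins, since the study's variables (parity, pregnancy spacing, income, etc.) are not reproduced here.

```python
# Sketch: comparing three classifiers by ROC AUC on synthetic data,
# mirroring the study's comparison protocol.
import numpy as np
import statsmodels.api as sm
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 887
X = rng.normal(size=(n, 4))                     # stand-in covariates
coef = np.array([0.8, -0.5, 0.3, 0.0])
p = 1 / (1 + np.exp(-(X @ coef - 1.1)))
y = rng.binomial(1, p)                          # stand-in outcome (unwanted pregnancy)

auc_logit = roc_auc_score(y, LogisticRegression().fit(X, y).predict_proba(X)[:, 1])
auc_lda = roc_auc_score(y, LinearDiscriminantAnalysis().fit(X, y).predict_proba(X)[:, 1])
probit = sm.Probit(y, sm.add_constant(X)).fit(disp=0)
auc_probit = roc_auc_score(y, probit.predict(sm.add_constant(X)))
print(f"AUC logistic {auc_logit:.3f}, probit {auc_probit:.3f}, LDA {auc_lda:.3f}")
```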

  14. 3D-QSAR Studies on Barbituric Acid Derivatives as Urease Inhibitors and the Effect of Charges on the Quality of a Model.

    PubMed

    Ul-Haq, Zaheer; Ashraf, Sajda; Al-Majid, Abdullah Mohammed; Barakat, Assem

    2016-04-30

    The urease enzyme (EC 3.5.1.5) has been identified as a virulence factor in pathogenic microorganisms that are accountable for the development of different diseases in humans and animals. In continuation of our earlier study on Helicobacter pylori urease inhibition by barbituric acid derivatives, advanced 3D-QSAR (three-dimensional quantitative structure-activity relationship) studies were performed with the Comparative Molecular Field Analysis (CoMFA) and Comparative Molecular Similarity Indices Analysis (CoMSIA) methods. Different partial charges were calculated to examine their consequences for the predictive ability of the developed models. The best CoMFA and CoMSIA models were achieved using MMFF94 charges. The developed CoMFA model gives significant results, with a cross-validation (q²) value of 0.597 and a correlation coefficient (r²) of 0.897. Moreover, five fields, i.e., steric, electrostatic, hydrophobic, H-bond acceptor, and H-bond donor, were used to produce a CoMSIA model, with q² and r² of 0.602 and 0.98, respectively. The generated models were further validated using an external test set; both display good predictive power with r²pred ≥ 0.8. The analysis of the obtained CoMFA and CoMSIA contour maps provided detailed insight for the promising modification of barbituric acid derivatives with enhanced biological activity.

  15. Prediction of unwanted pregnancies using logistic regression, probit regression and discriminant analysis.

    PubMed

    Ebrahimzadeh, Farzad; Hajizadeh, Ebrahim; Vahabi, Nasim; Almasian, Mohammad; Bakhteyar, Katayoon

    2015-01-01

    Unwanted pregnancy not intended by at least one of the parents has undesirable consequences for the family and the society. In the present study, three classification models were used and compared to predict unwanted pregnancies in an urban population. In this cross-sectional study, 887 pregnant mothers referring to health centers in Khorramabad, Iran, in 2012 were selected by the stratified and cluster sampling; relevant variables were measured and for prediction of unwanted pregnancy, logistic regression, discriminant analysis, and probit regression models and SPSS software version 21 were used. To compare these models, indicators such as sensitivity, specificity, the area under the ROC curve, and the percentage of correct predictions were used. The prevalence of unwanted pregnancies was 25.3%. The logistic and probit regression models indicated that parity and pregnancy spacing, contraceptive methods, household income and number of living male children were related to unwanted pregnancy. The performance of the models based on the area under the ROC curve was 0.735, 0.733, and 0.680 for logistic regression, probit regression, and linear discriminant analysis, respectively. Given the relatively high prevalence of unwanted pregnancies in Khorramabad, it seems necessary to revise family planning programs. Despite the similar accuracy of the models, if the researcher is interested in the interpretability of the results, the use of the logistic regression model is recommended.

  16. Computer simulation of Cerebral Arteriovenous Malformation-validation analysis of hemodynamics parameters.

    PubMed

    Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath

    2017-01-01

    The purpose of this work is to provide validation methods for evaluating the hemodynamic assessment of Cerebral Arteriovenous Malformation (CAVM). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, which are designed using lumped models for the complex vessel structure. The validation of the hemodynamic assessment is based on invasive clinical measurements and cross-validation techniques with Philips' proprietary validated software packages Qflow and 2D Perfusion. The modeling results are validated for 30 CAVM patients across 150 vessel locations. Mean flow, diameter, and pressure were compared between the modeling results and the clinical/cross-validation measurements using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationships among blood flow, vessel diameter, and pressure. Univariate analyses of the relationships among vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow were performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations of cerebral regions, and the model was cross-validated with the Philips software. Our results show that the modeling results closely match the clinical measurements, with only small deviations. In this article, we have validated our modeling results against clinical measurements; the new approach of cross-validation is demonstrated by comparing our results with a validated product in a clinical environment.

  17. Development of a Conservative Model Validation Approach for Reliable Analysis

    DTIC Science & Technology

    2015-01-01

    DETC2015-46982 (draft), ASME IDETC/CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA: "Development of a Conservative Model Validation Approach for Reliable Analysis." ...obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account...

  18. Combat Simulation Using Breach Computer Language

    DTIC Science & Technology

    1979-09-01

    ...simulation and weapon system analysis computer language. Two types of models were constructed: a stochastic duel and a dynamic engagement model. The duel model validates the BREACH approach by comparing results with mathematical solutions. The dynamic model shows the capability of BREACH... Contents: BREACH; Background; The Language; Static Duel; Background and Methodology; Validation; Results; Tank Duel Simulation; Dynamic Assault Model.

  19. Comprehensive analysis of a Metabolic Model for lipid production in Rhodosporidium toruloides.

    PubMed

    Castañeda, María Teresita; Nuñez, Sebastián; Garelli, Fabricio; Voget, Claudio; Battista, Hernán De

    2018-05-19

    The yeast Rhodosporidium toruloides has been extensively studied for its application in biolipid production. Knowledge of its metabolic capabilities and the application of constraint-based flux analysis methodology provide useful information for process prediction and optimization. The accuracy of the resulting predictions is highly dependent on the metabolic models. A metabolic reconstruction for R. toruloides has recently been published. On the basis of this model, we developed a curated version that unblocks the central nitrogen metabolism and, in addition, completes charge and mass balances in some reactions neglected in the former model. A comprehensive analysis of network capability was then performed with the curated model and compared with the published metabolic reconstruction. The flux distribution obtained by lipid optimization with Flux Balance Analysis was able to replicate the internal biochemical changes that lead to lipogenesis in oleaginous microorganisms. These results motivate the development of a genome-scale model for complete elucidation of R. toruloides metabolism. Copyright © 2018 Elsevier B.V. All rights reserved.
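
    A minimal sketch of the constraint-based workflow described above, using the COBRApy library. The SBML file name and the lipid objective reaction ID are hypothetical placeholders, not identifiers from the published reconstruction.

```python
# Sketch: Flux Balance Analysis with COBRApy. File name and reaction ID are
# hypothetical placeholders for the curated R. toruloides reconstruction.
import cobra

model = cobra.io.read_sbml_model("rhodosporidium_toruloides_curated.xml")  # hypothetical path
model.objective = "LIPID_SYNTHESIS"   # hypothetical lipid-production reaction ID

solution = model.optimize()           # solve the FBA linear program
print(f"maximum lipid production flux: {solution.objective_value:.4f}")

# Inspect the largest internal fluxes that support lipogenesis
print(solution.fluxes.abs().sort_values(ascending=False).head(10))
```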

  20. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the tests show that correlation has a very important impact on the results of sensitivity analysis; the influence of the correlation strength among input variables on the sensitivity analysis is also assessed.

  1. Computational Analysis of Human Blood Flow

    NASA Astrophysics Data System (ADS)

    Panta, Yogendra; Marie, Hazel; Harvey, Mark

    2009-11-01

    Fluid flow modeling with commercially available computational fluid dynamics (CFD) software is widely used to visualize and predict physical phenomena related to various biological systems. In this presentation, a typical human aorta model was analyzed, treating the blood flow as laminar with compliant cardiac muscle wall boundaries. FLUENT, a commercially available finite volume code, coupled with Solidworks, a modeling package, was employed for the preprocessing, simulation, and postprocessing of all the models. The analysis consists mainly of a fluid-dynamics calculation of the velocity field and pressure distribution in the blood, and a mechanical analysis of the deformation of the tissue and artery in terms of wall shear stress. A number of other geometries, e.g., T-branch and angled models, were previously analyzed and their results compared for consistency under similar boundary conditions. The velocities, pressures, and wall shear stress distributions achieved in all models were as expected given the similar boundary conditions. A three-dimensional, time-dependent analysis of blood flow accounting for the effect of body forces, with a compliant boundary, was also performed.

  2. The Design and the Formative Evaluation of a Web-Based Course for Simulation Analysis Experiences

    ERIC Educational Resources Information Center

    Tao, Yu-Hui; Guo, Shin-Ming; Lu, Ya-Hui

    2006-01-01

    Simulation output analysis has received little attention compared to modeling and programming in real-world simulation applications. This is further evidenced by our observation that students and beginners acquire neither adequately detailed knowledge nor relevant experience of simulation output analysis in traditional classroom learning. With…

  3. Strategic and Market Analysis | Bioenergy | NREL

    Science.gov Websites

    …recent efforts in comparative techno-economic analysis. Our analysis considers a wide range of conversion intermediates… NREL has developed first-of-its-kind process models and economic assessments of co-processing… The work strives to understand the economic incentives, technical risks, and key data gaps that need to be…

  4. Testing a Wheeled Landing Gear System for the TH-57 Helicopter

    DTIC Science & Technology

    1992-12-01

    An initial comparison was done using a structural analysis program, GIFTS, to simultaneously analyze and compare the gear systems. Experimental data was used… GIFTS program results… Using the …Element Total System (GIFTS) structural analysis program, which is resident on the Aeronautical Engineering Department computer system, an analysis…

  5. Medical Terminology: A Phonological Analysis for Teaching English Pronunciation.

    ERIC Educational Resources Information Center

    Jabbour-Lagocki, Judith

    1992-01-01

    A phonological analysis of medical terminology was developed as an answer to pleas from students in medical English courses in Austria. The analysis can serve as a model for other sciences in which a comparable predicament exists: Graeco-Latinate terms are readily understood when written, but not easily recognized when spoken. (JL)

  6. Scan-To Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    NASA Astrophysics Data System (ADS)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable by future users. First, the available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM for existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to depend on (1) the mathematical model used, (2) the density of the point clouds, and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis at both macro (building) and micro (BIM object) scale is necessary. On the macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects, respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On the micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM through object parameters.
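
    The deviation analysis itself reduces to nearest-neighbour distances between a scan and points sampled from the model surfaces. The sketch below illustrates one order of comparison (scan to model) and bins the deviations into accuracy bands; both clouds and the band thresholds are synthetic stand-ins, not USIBD LOA values.

```python
# Sketch: cloud-to-model deviation analysis -- distance from each scan point
# to the nearest model surface sample, binned into accuracy bands.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)
model_points = rng.uniform(0, 10, size=(5000, 3))    # sampled from the BIM surfaces
scan_points = model_points[:2000] + rng.normal(scale=0.01, size=(2000, 3))

dist, _ = cKDTree(model_points).query(scan_points)   # order of comparison: scan -> model
bands = [0.005, 0.015, 0.05]                         # metres, illustrative thresholds
counts = np.histogram(dist, bins=[0] + bands + [np.inf])[0]
for (lo, hi), c in zip(zip([0] + bands, bands + [np.inf]), counts):
    print(f"{lo:.3f}-{hi:.3f} m: {c} points")
```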

  7. Subsonic Analysis of 0.04-Scale F-16XL Models Using an Unstructured Euler Code

    NASA Technical Reports Server (NTRS)

    Lessard, Wendy B.

    1996-01-01

    The subsonic flow field about an F-16XL airplane model configuration was investigated with an inviscid unstructured grid technique. The computed surface pressures were compared to wind-tunnel test results at Mach 0.148 for a range of angles of attack from 0 deg to 20 deg. To evaluate the effect of grid dependency on the solution, a grid study was performed in which fine, medium, and coarse grid meshes were generated. The off-surface vortical flow field was locally adapted and showed improved correlation with the wind-tunnel data when compared to the nonadapted flow field. Computational results are also compared to experimental five-hole pressure probe data. A detailed analysis of the off-body computed pressure contours, velocity vectors, and particle traces is presented and discussed.

  8. A general framework for the use of logistic regression models in meta-analysis.

    PubMed

    Simmonds, Mark C; Higgins, Julian Pt

    2016-12-01

    Where individual participant data are available for every randomised trial in a meta-analysis of dichotomous event outcomes, "one-stage" random-effects logistic regression models have been proposed as a way to analyse these data. Such models can also be used even when individual participant data are not available and we have only summary contingency table data. One benefit of this one-stage regression model over conventional meta-analysis methods is that it maximises the correct binomial likelihood for the data and so does not require the common assumption that effect estimates are normally distributed. A second benefit of using this model is that it may be applied, with only minor modification, in a range of meta-analytic scenarios, including meta-regression, network meta-analyses and meta-analyses of diagnostic test accuracy. This single model can potentially replace the variety of often complex methods used in these areas. This paper considers, with a range of meta-analysis examples, how random-effects logistic regression models may be used in a number of different types of meta-analyses. This one-stage approach is compared with widely used meta-analysis methods including Bayesian network meta-analysis and the bivariate and hierarchical summary receiver operating characteristic (ROC) models for meta-analyses of diagnostic test accuracy. © The Author(s) 2014.
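
    As a concrete statement of the one-stage model discussed above, for trial j with treatment indicator x_ij (0 = control, 1 = treatment) and r_ij events out of n_ij participants, a standard random-effects logistic regression consistent with the paper's description can be written as the following math block, with τ² the between-trial heterogeneity:

```latex
% One-stage random-effects logistic regression for a binary-outcome
% meta-analysis (standard formulation; notation assumed, not the paper's).
\begin{align*}
  r_{ij} &\sim \operatorname{Binomial}(n_{ij},\, p_{ij}) \\
  \operatorname{logit}(p_{ij}) &= \alpha_j + (\beta + b_j)\, x_{ij},
  \qquad b_j \sim \mathcal{N}(0,\, \tau^2)
\end{align*}
```

    Here the trial-specific intercepts α_j absorb baseline risk, β is the pooled log odds ratio, and maximising the binomial likelihood directly avoids the normality assumption on effect estimates noted above.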

  9. Conceptual Models of Depression in Primary Care Patients: A Comparative Study

    PubMed Central

    Karasz, Alison; Garcia, Nerina; Ferri, Lucia

    2009-01-01

    Conventional psychiatric treatment models are based on a biopsychiatric model of depression. A plausible explanation for low rates of depression treatment utilization among ethnic minorities and the poor is that members of these communities do not share the cultural assumptions underlying the biopsychiatric model. The study examined conceptual models of depression among depressed patients from various ethnic groups, focusing on the degree to which patients’ conceptual models ‘matched’ a biopsychiatric model of depression. The sample included 74 primary care patients from three ethnic groups screening positive for depression. We administered qualitative interviews assessing patients’ conceptual representations of depression. The analysis proceeded in two phases. The first phase involved a strategy called ‘quantitizing’ the qualitative data. A rating scheme was developed and applied to the data by a rater blind to study hypotheses. The data was subjected to statistical analyses. The second phase of the analysis involved the analysis of thematic data using standard qualitative techniques. Study hypotheses were largely supported. The qualitative analysis provided a detailed picture of primary care patients’ conceptual models of depression and suggested interesting directions for future research. PMID:20182550

  10. Oseltamivir Treatment for Children with Influenza-Like Illness in China: A Cost-Effectiveness Analysis.

    PubMed

    Shen, Kunling; Xiong, Tengbin; Tan, Seng Chuen; Wu, Jiuhong

    2016-01-01

    Influenza is a common viral respiratory infection that causes epidemics and pandemics in the human population. Oseltamivir is a neuraminidase inhibitor, a newer class of antiviral therapy for influenza. Although its efficacy and safety have been established, there is uncertainty regarding whether influenza-like illness (ILI) in children is best managed by oseltamivir at the onset of illness, and its cost-effectiveness in children has not been studied in China. Our objective was to evaluate the cost-effectiveness of post-rapid influenza diagnostic test (RIDT) treatment with oseltamivir and of empiric treatment with oseltamivir, compared with no antiviral therapy, for children with ILI. We developed a decision-analytic model based on previously published evidence to simulate and evaluate the 1-year potential clinical and economic outcomes associated with three management strategies for children presenting with symptoms of influenza. Model inputs were derived from the literature and from expert opinion on clinical practice and research in China. Outcome measures included costs and quality-adjusted life years (QALYs). The interventions were compared with incremental cost-effectiveness ratios (ICERs). In the base case analysis, empiric treatment with oseltamivir consistently produced the greatest gains in QALYs. When compared with no antiviral therapy, the empiric oseltamivir strategy is very cost-effective, with an ICER of RMB 4,438; when compared with post-RIDT treatment with oseltamivir, the empiric strategy is dominant. Probabilistic sensitivity analysis projected a 100% probability that empiric oseltamivir treatment would be considered a very cost-effective strategy compared to no antiviral therapy, according to the WHO recommendations for cost-effectiveness thresholds, and a 99% probability compared to post-RIDT treatment with oseltamivir. In the current Chinese health system setting, our model-based simulation analysis suggests that empiric treatment with oseltamivir is a cost-saving and very cost-effective strategy for managing children with ILI.
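
    The ICER logic underlying these comparisons is a one-line calculation once the expected costs and QALYs per strategy are known. The sketch below shows it with made-up numbers; they are not the study's inputs or results.

```python
# Sketch: the incremental cost-effectiveness ratio (ICER) comparison at the
# heart of a decision-analytic model. All numbers are made up.
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost per QALY gained of 'new' versus 'ref'."""
    d_cost, d_qaly = cost_new - cost_ref, qaly_new - qaly_ref
    if d_qaly <= 0:
        return float("inf") if d_cost > 0 else float("-inf")  # dominated / dominant
    return d_cost / d_qaly

# Hypothetical expected values per child over one season (RMB, QALYs):
no_antiviral = (900.0, 0.9800)
empiric = (1020.0, 0.9950)
print(f"ICER empiric vs no antiviral: RMB {icer(*empiric, *no_antiviral):,.0f} per QALY")
```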

  11. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability of implant restorations. The aim of this study is to compare, by three-dimensional analysis, the accuracy of gypsum models acquired from conventional implant impressions to digitally milled models created by direct digitalization. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner, and 30 STL datasets from each group were imported into inspection software. The datasets were aligned to the reference dataset by a repeated best-fit algorithm, and 10 specified contact locations of interest were measured as mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, and the horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model, with mean volumetric deviations representing accuracy and standard deviations representing precision. Milled models from digital impressions had accuracy comparable to gypsum models from conventional impressions. However, the differences in fossae and in vertical displacement of the implant position between the gypsum and digitally milled models, compared to the reference model, were statistically significant (p<0.001 and p=0.020, respectively). PMID:24720423

  12. Developing a new solar radiation estimation model based on Buckingham theorem

    NASA Astrophysics Data System (ADS)

    Ekici, Can; Teke, Ismail

    2018-06-01

    While solar radiation can be expressed physically on cloudless days, this becomes difficult under cloudy and complicated weather conditions. In addition, solar radiation measurements are often not taken in developing countries. In such cases, solar radiation estimation models are used; these models estimate solar radiation from other meteorological parameters that are routinely measured at stations. In this study, a solar radiation estimation model was derived using the Buckingham theorem, which expresses solar radiation through the derivation of dimensionless pi parameters. The derived model is compared with temperature-based models from the literature (the Allen, Hargreaves, Chen, and Bristow-Campbell models) using the MPE, RMSE, MBE, and NSE error analysis methods. The comparisons were made using data obtained from North Dakota's agricultural climate network. In these comparisons, the model obtained in this study gives better results: it shows satisfactory short-term performance, better accuracy than the other models in the RMSE analysis, and good long-term performance and percentage errors. The Buckingham theorem was therefore found to be useful for estimating solar radiation.
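
    The error metrics named in this record have standard textbook forms; the sketch below implements common definitions of MBE, RMSE, MPE, and NSE on hypothetical radiation values (exact conventions vary slightly between papers).

    ```python
    import numpy as np

    def error_metrics(obs, pred):
        """MBE, RMSE, MPE, and Nash-Sutcliffe efficiency for model evaluation."""
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        err = pred - obs
        mbe = err.mean()                                   # mean bias error
        rmse = np.sqrt((err ** 2).mean())                  # root mean square error
        mpe = (err / obs).mean() * 100.0                   # mean percentage error
        nse = 1.0 - (err ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
        return mbe, rmse, mpe, nse

    # Hypothetical daily solar radiation (MJ m^-2 day^-1): observed vs modeled
    obs = [18.2, 21.5, 14.8, 25.1, 19.9]
    pred = [17.6, 22.3, 15.5, 24.2, 20.4]
    print("MBE=%.3f RMSE=%.3f MPE=%.2f%% NSE=%.3f" % error_metrics(obs, pred))
    ```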

  13. Dynamic analysis of rotor flex-structure based on nonlinear anisotropic shell models

    NASA Astrophysics Data System (ADS)

    Bauchau, Olivier A.; Chiang, Wuying

    1991-05-01

    In this paper an anisotropic shallow shell model is developed that accommodates transverse shearing deformations and arbitrarily large displacements and rotations, while strains are assumed to remain small. Two kinematic models are developed: the first uses two degrees of freedom to locate the direction of the normal to the shell's midplane, the second uses three. The latter model allows for automatic compatibility of the shell model with beam models. The shell model is validated by comparing its predictions with several benchmark problems. In actual helicopter rotor blade problems, the shell model of the flex structure is shown to give very different results compared to beam models. The lead-lag and torsion modes in particular are strongly affected, whereas flapping modes seem to be less affected.

  14. Forecasting of primary energy consumption data in the United States: A comparison between ARIMA and Holt-Winters models

    NASA Astrophysics Data System (ADS)

    Rahman, A.; Ahmar, A. S.

    2017-09-01

    This research compares the ARIMA model and the Holt-Winters model, based on MAE, RSS, MSE, and RMS criteria, in predicting total primary energy consumption in the US. The data range from January 1973 to December 2016 and were processed using R software. Based on the analysis, the additive Holt-Winters model (MSE: 258,350.1) is the most appropriate model for predicting total primary energy consumption in the US. It outperforms both the multiplicative Holt-Winters model (MSE: 262,260.4) and the seasonal ARIMA model (MSE: 723,502.2).
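
    A minimal sketch of this kind of comparison, assuming the statsmodels package and a synthetic monthly series in place of the US consumption data; the model orders are placeholders, not those selected by the authors.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    idx = pd.date_range("1973-01-01", "2016-12-01", freq="MS")
    rng = np.random.default_rng(0)
    t = np.arange(len(idx))
    # Synthetic trend + seasonality + noise standing in for the real series
    y = pd.Series(100 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 12)
                  + rng.normal(0, 2, len(idx)), index=idx)

    fits = {
        "Holt-Winters additive": ExponentialSmoothing(
            y, trend="add", seasonal="add", seasonal_periods=12).fit(),
        "Holt-Winters multiplicative": ExponentialSmoothing(
            y, trend="add", seasonal="mul", seasonal_periods=12).fit(),
        "seasonal ARIMA": ARIMA(
            y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(),
    }
    for name, res in fits.items():
        mse = float(np.mean((y - res.fittedvalues) ** 2))
        print(f"{name}: in-sample MSE = {mse:.1f}")
    ```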

  15. Comparative analysis of the modified enclosed energy metric for self-focusing holograms from digital lensless holographic microscopy.

    PubMed

    Trujillo, Carlos; Garcia-Sucerquia, Jorge

    2015-06-01

    A comparative analysis of the performance of the modified enclosed energy (MEE) method for self-focusing holograms recorded with digital lensless holographic microscopy is presented. Although the MEE method has been published previously, no extended analysis of its performance has been reported. We have tested the MEE in terms of the minimum axial distance allowed between the set of reconstructed holograms used to search for the focal plane and the elapsed time to obtain the focused image. These parameters have been compared with those of some of the methods already reported in the literature. The MEE achieves better results in terms of self-focusing quality, but at a higher computational cost. Despite its longer processing time, the method remains fast enough to be technologically attractive. Modeled and experimental holograms have been utilized in this work to perform the comparative study.

  16. DigOut: viewing differential expression genes as outliers.

    PubMed

    Yu, Hui; Tu, Kang; Xie, Lu; Li, Yuan-Yuan

    2010-12-01

    For well-replicated two-condition microarray datasets, the selection of differentially expressed (DE) genes is a well-studied computational topic, but for multi-condition microarray datasets with limited or no replication, the same task has not been properly addressed by previous studies. This paper adopts multivariate outlier analysis to analyze replication-lacking multi-condition microarray datasets, finding that it performs significantly better than the widely used limit fold change (LFC) model in a simulated comparative experiment. Compared with the LFC model, the multivariate outlier analysis also demonstrates improved stability against sample variations in a series of manipulated real expression datasets. The reanalysis of a real non-replicated multi-condition expression dataset series leads to satisfactory results. In conclusion, a multivariate outlier analysis algorithm such as DigOut is particularly useful for selecting DE genes from non-replicated multi-condition gene expression datasets.
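
    DigOut's exact algorithm is not described in this record; the sketch below illustrates the general idea of multivariate outlier analysis, using Mahalanobis distances with a chi-squared cutoff on synthetic expression data.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def outlier_genes(X, alpha=0.01):
        """X: genes x conditions expression matrix (no replicates).
        Flags genes whose profile is a multivariate outlier."""
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        inv = np.linalg.pinv(cov)                    # pseudo-inverse for stability
        d2 = np.einsum("ij,jk,ik->i", X - mu, inv, X - mu)
        cutoff = chi2.ppf(1 - alpha, df=X.shape[1])  # chi-squared approximation
        return np.where(d2 > cutoff)[0], d2

    rng = np.random.default_rng(1)
    X = rng.normal(0, 1, (5000, 6))                  # 5000 genes, 6 conditions
    X[:10] += 4.0                                    # spike in 10 "DE" genes
    idx, d2 = outlier_genes(X)
    print(len(idx), "genes flagged; first few:", idx[:10])
    ```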

  17. Mental Models about Seismic Effects: Students' Profile Based Comparative Analysis

    ERIC Educational Resources Information Center

    Moutinho, Sara; Moura, Rui; Vasconcelos, Clara

    2016-01-01

    Nowadays, meaningful learning takes a central role in science education and is based on mental models that allow individuals to represent the real world. Thus, it is essential to analyse students' mental models, promoting an easier reconstruction of scientific knowledge by allowing them to become consistent with the curricular…

  18. Predictor-Based Model Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Lavretsky, Eugene; Gadient, Ross; Gregory, Irene M.

    2010-01-01

    This paper is devoted to the design and analysis of a predictor-based model reference adaptive control architecture. Stable adaptive laws are derived using a Lyapunov framework. The proposed architecture is compared with the now-classical model reference adaptive control. A simulation example is presented in which numerical evidence indicates that the proposed controller yields improved transient characteristics.
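
    The predictor-based architecture itself is not specified in this record; as a point of reference, the following sketch simulates the classical scalar model reference adaptive controller with Lyapunov-motivated adaptive laws, using invented plant and gain values.

    ```python
    import numpy as np

    a, b = 1.0, 3.0              # "unknown" plant:   xdot  = a*x + b*u
    am, bm = -4.0, 4.0           # reference model:   xmdot = am*xm + bm*r
    gamma, dt, T = 10.0, 1e-3, 10.0

    x, xm, kx, kr = 0.0, 0.0, 0.0, 0.0      # states and adaptive gains
    for k in range(int(T / dt)):
        r = 1.0 if (k * dt) % 10 < 5 else -1.0   # square-wave reference
        u = kx * x + kr * r                      # adaptive control law
        e = x - xm                               # tracking error
        kx -= gamma * x * e * dt                 # Lyapunov-motivated updates
        kr -= gamma * r * e * dt
        x += (a * x + b * u) * dt                # Euler integration
        xm += (am * xm + bm * r) * dt

    print(f"final tracking error {x - xm:+.5f}")
    print(f"kx = {kx:.3f} (ideal {(am - a) / b:.3f}), "
          f"kr = {kr:.3f} (ideal {bm / b:.3f})")
    ```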

  19. Modeling Conditional Probabilities in Complex Educational Assessments. CSE Technical Report.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.; Almond, Russell; Dibello, Lou; Jenkins, Frank; Steinberg, Linda; Yan, Duanli; Senturk, Deniz

    An active area in psychometric research is coordinated task design and statistical analysis built around cognitive models. Compared with classical test theory and item response theory, there is often less information from observed data about the measurement-model parameters. On the other hand, there is more information from the grounding…

  20. Application of finite element substructuring to composite micromechanics. M.S. Thesis - Akron Univ., May 1984

    NASA Technical Reports Server (NTRS)

    Caruso, J. J.

    1984-01-01

    Finite element substructuring is used to predict unidirectional fiber composite hygral (moisture), thermal, and mechanical properties. COSMIC NASTRAN and MSC/NASTRAN are used to perform the finite element analysis. The results obtained from the finite element model are compared with those obtained from simplified composite micromechanics equations. Unidirectional composite structures made of boron/HM-epoxy, S-glass/IMHS-epoxy, and AS/IMHS-epoxy are studied. The finite element analysis is performed using three-dimensional isoparametric brick elements and two distinct models. The first model consists of a single cell (one fiber surrounded by matrix) forming a square. The second model uses the single cell and substructuring to form a nine-cell square array. To compare computer time and results with the nine-cell superelement model, another nine-cell model is constructed using conventional mesh generation techniques. An independent computer program implementing the simplified micromechanics equations is developed to predict the hygral, thermal, and mechanical properties for this comparison. The results indicate that advanced techniques can be used advantageously for fiber composite micromechanics.

  1. Quantifying the Strength of General Factors in Psychopathology: A Comparison of CFA with Maximum Likelihood Estimation, BSEM, and ESEM/EFA Bifactor Approaches.

    PubMed

    Murray, Aja Louise; Booth, Tom; Eisner, Manuel; Obsuth, Ingrid; Ribeaud, Denis

    2018-05-22

    Whether or not importance should be placed on an all-encompassing general factor of psychopathology (or p factor) in classifying, researching, diagnosing, and treating psychiatric disorders depends (among other issues) on the extent to which comorbidity is symptom-general rather than staying largely within the confines of narrower transdiagnostic factors such as internalizing and externalizing. In this study, we compared three methods of estimating p factor strength. We compared omega hierarchical and explained common variance calculated from confirmatory factor analysis (CFA) bifactor models with maximum likelihood (ML) estimation, from exploratory structural equation modeling/exploratory factor analysis models with a bifactor rotation, and from Bayesian structural equation modeling (BSEM) bifactor models. Our simulation results suggested that BSEM with small variance priors on secondary loadings might be the preferred option. However, CFA with ML also performed well provided secondary loadings were modeled. We provide two empirical examples of applying the three methodologies using a normative sample of youth (z-proso, n = 1,286) and a university counseling sample (n = 359).
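
    Omega hierarchical and explained common variance (ECV) have standard closed forms given bifactor loadings; the sketch below applies those formulas to hypothetical loadings, not the z-proso or counseling-sample estimates.

    ```python
    import numpy as np

    # Hypothetical standardized bifactor loadings for 8 items
    lam_g = np.array([.60, .55, .50, .45, .62, .58, .40, .52])   # general factor
    lam_s = np.array([[.35, .30, .40, .25, 0.0, 0.0, 0.0, 0.0],  # specific 1
                      [0.0, 0.0, 0.0, 0.0, .30, .35, .28, .32]]) # specific 2
    theta = 1 - lam_g**2 - (lam_s**2).sum(axis=0)                # uniquenesses

    # omega_h = (sum lam_g)^2 / [(sum lam_g)^2 + sum_s (sum_i lam_si)^2 + sum theta]
    num = lam_g.sum() ** 2
    omega_h = num / (num + (lam_s.sum(axis=1) ** 2).sum() + theta.sum())

    # ECV = general-factor common variance / total common variance
    ecv = (lam_g**2).sum() / ((lam_g**2).sum() + (lam_s**2).sum())
    print(f"omega_h = {omega_h:.3f}, ECV = {ecv:.3f}")
    ```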

  2. Particle Simulation of Coulomb Collisions: Comparing the Methods of Takizuka & Abe and Nanbu

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, C; Lin, T; Caflisch, R

    2007-05-22

    The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions: one developed by Takizuka and Abe in 1977, the other by Nanbu in 1997. We perform deterministic and stochastic error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time-step errors. Error comparisons between the two methods are presented.

  3. The impact of structural uncertainty on cost-effectiveness models for adjuvant endocrine breast cancer treatments: the need for disease-specific model standardization and improved guidance.

    PubMed

    Frederix, Gerardus W J; van Hasselt, Johan G C; Schellens, Jan H M; Hövels, Anke M; Raaijmakers, Jan A M; Huitema, Alwin D R; Severens, Johan L

    2014-01-01

    Structural uncertainty relates to differences in model structure and parameterization. For many published health economic analyses in oncology, substantial differences in model structure exist, leading to differences in analysis outcomes and potentially impacting decision-making processes. The objectives of this analysis were (1) to identify differences in model structure and parameterization for cost-effectiveness analyses (CEAs) comparing tamoxifen and anastrozole for adjuvant breast cancer (ABC) treatment; and (2) to quantify the impact of these differences on analysis outcome metrics. The analysis consisted of four steps: (1) review of the literature for identification of eligible CEAs; (2) definition and implementation of a base model structure, which included the core structural components of all identified CEAs; (3) definition and implementation of changes or additions to the base model structure or parameterization; and (4) quantification of the impact of changes in model structure or parameterization on the analysis outcome metrics life-years gained (LYG), incremental costs (IC) and the incremental cost-effectiveness ratio (ICER). Eleven CEAs comparing anastrozole and tamoxifen as ABC treatment were identified. The base model consisted of the following health states: (1) on treatment; (2) off treatment; (3) local recurrence; (4) metastatic disease; (5) death due to breast cancer; and (6) death due to other causes. The base model estimates of anastrozole versus tamoxifen for the LYG, IC and ICER were 0.263 years, €3,647 and €13,868/LYG, respectively. In the published models that were evaluated, differences in model structure included the addition of different recurrence health states and their associated transition rates. Differences in parameterization related to the incidences of recurrence, local recurrence to metastatic disease, and metastatic disease to death. The separate impact of these model components on the LYG ranged from 0.207 to 0.356 years, while incremental costs ranged from €3,490 to €3,714 and ICERs ranged from €9,804/LYG to €17,966/LYG. When we re-analyzed the published CEAs in our framework by including their respective model properties, the LYG ranged from 0.207 to 0.383 years, IC ranged from €3,556 to €3,731 and ICERs ranged from €9,683/LYG to €17,570/LYG. Differences in model structure and parameterization lead to substantial differences in analysis outcome metrics. This analysis supports the need for more guidance regarding structural uncertainty and the use of standardized disease-specific models for health economic analyses of adjuvant endocrine breast cancer therapies. The developed approach could potentially serve as a template for further evaluations of structural uncertainty and the development of disease-specific models.
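
    A minimal cohort-model sketch of the kind of analysis described, using the six base-model health states; every transition probability, cost, and the cycle count is invented for illustration and does not reproduce the published base case.

    ```python
    import numpy as np

    # States: on_tx, off_tx, local_rec, metastatic, death_bc, death_oc

    def run(P, state_cost, cycles=25):
        """Trace a cohort through annual cycles; return (life-years, total cost)."""
        x = np.zeros(6); x[0] = 1.0        # everyone starts on treatment
        ly = cost = 0.0
        for _ in range(cycles):
            x = x @ P
            ly += x[:4].sum()              # first four states are alive
            cost += float(x @ state_cost)
        return ly, cost

    # Hypothetical row-stochastic annual transition matrices
    P_tam = np.array([
        [0.75, 0.15, 0.04, 0.030, 0.010, 0.02],
        [0.00, 0.90, 0.04, 0.030, 0.010, 0.02],
        [0.00, 0.00, 0.80, 0.120, 0.050, 0.03],
        [0.00, 0.00, 0.00, 0.700, 0.270, 0.03],
        [0.00, 0.00, 0.00, 0.000, 1.000, 0.00],
        [0.00, 0.00, 0.00, 0.000, 0.000, 1.00],
    ])
    P_ana = P_tam.copy()
    P_ana[0] = [0.77, 0.15, 0.03, 0.025, 0.005, 0.02]
    P_ana[1] = [0.00, 0.91, 0.03, 0.025, 0.015, 0.02]

    ly_t, c_t = run(P_tam, np.array([500, 100, 8000, 20000, 0, 0]))
    ly_a, c_a = run(P_ana, np.array([2000, 100, 8000, 20000, 0, 0]))
    print(f"LYG = {ly_a - ly_t:.3f}, IC = EUR {c_a - c_t:,.0f}, "
          f"ICER = EUR {(c_a - c_t) / (ly_a - ly_t):,.0f}/LYG")
    ```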

  4. Cost effectiveness analysis comparing repetitive transcranial magnetic stimulation to antidepressant medications after a first treatment failure for major depressive disorder in newly diagnosed patients - A lifetime analysis.

    PubMed

    Voigt, Jeffrey; Carpenter, Linda; Leuchter, Andrew

    2017-01-01

    Repetitive Transcranial Magnetic Stimulation (rTMS) is commonly used for the treatment of Major Depressive Disorder (MDD) after patients have failed to benefit from trials of multiple antidepressant medications. No analysis to date has examined the cost-effectiveness of rTMS used earlier in the course of treatment and over a patient's lifetime. We used lifetime Markov simulation modeling to compare the direct costs and quality adjusted life years (QALYs) of rTMS and medication therapy in patients with newly diagnosed MDD (ages 20-59) who had failed to benefit from one pharmacotherapy trial. Patients' life expectancies, rates of response and remission, and quality of life outcomes were derived from the literature, and treatment costs were based upon published Medicare reimbursement data. Baseline costs, aggregate per year quality of life assessments (QALYs), Monte Carlo simulation, tornado analysis, assessment of dominance, and one-way sensitivity analysis were also performed. The discount rate applied was 3%. Lifetime direct treatment costs and QALYs identified rTMS as the dominant therapy compared to antidepressant medications (i.e., lower costs with better outcomes) in all age ranges, with costs/improved QALYs ranging from $2,952/0.32 (older patients) to $11,140/0.43 (younger patients). One-way sensitivity analysis demonstrated that the model was most sensitive to the input variables of cost per rTMS session, monthly prescription drug cost, and the number of rTMS sessions per year. rTMS was identified as the dominant therapy compared to antidepressant medication trials across the lifespan of adults with MDD, given current costs of treatment. These models support the use of rTMS after a single failed antidepressant medication trial versus further attempts at medication treatment in adults with MDD.

  5. Cost effectiveness analysis comparing repetitive transcranial magnetic stimulation to antidepressant medications after a first treatment failure for major depressive disorder in newly diagnosed patients – A lifetime analysis

    PubMed Central

    2017-01-01

    Objective Repetitive Transcranial Magnetic Stimulation (rTMS) is commonly used for the treatment of Major Depressive Disorder (MDD) after patients have failed to benefit from trials of multiple antidepressant medications. No analysis to date has examined the cost-effectiveness of rTMS used earlier in the course of treatment and over a patient's lifetime. Methods We used lifetime Markov simulation modeling to compare the direct costs and quality adjusted life years (QALYs) of rTMS and medication therapy in patients with newly diagnosed MDD (ages 20–59) who had failed to benefit from one pharmacotherapy trial. Patients' life expectancies, rates of response and remission, and quality of life outcomes were derived from the literature, and treatment costs were based upon published Medicare reimbursement data. Baseline costs, aggregate per year quality of life assessments (QALYs), Monte Carlo simulation, tornado analysis, assessment of dominance, and one-way sensitivity analysis were also performed. The discount rate applied was 3%. Results Lifetime direct treatment costs and QALYs identified rTMS as the dominant therapy compared to antidepressant medications (i.e., lower costs with better outcomes) in all age ranges, with costs/improved QALYs ranging from $2,952/0.32 (older patients) to $11,140/0.43 (younger patients). One-way sensitivity analysis demonstrated that the model was most sensitive to the input variables of cost per rTMS session, monthly prescription drug cost, and the number of rTMS sessions per year. Conclusion rTMS was identified as the dominant therapy compared to antidepressant medication trials across the lifespan of adults with MDD, given current costs of treatment. These models support the use of rTMS after a single failed antidepressant medication trial versus further attempts at medication treatment in adults with MDD. PMID:29073256

  6. Comparison of physical and semi-empirical hydraulic models for flood inundation mapping

    NASA Astrophysics Data System (ADS)

    Tavakoly, A. A.; Afshari, S.; Omranian, E.; Feng, D.; Rajib, A.; Snow, A.; Cohen, S.; Merwade, V.; Fekete, B. M.; Sharif, H. O.; Beighley, E.

    2016-12-01

    Various hydraulic/GIS-based tools can be used to illustrate the spatial extent of flooding for first responders, policy makers, and the general public. The objective of this study is to compare four flood inundation modeling tools: HEC-RAS-2D, Gridded Surface Subsurface Hydrologic Analysis (GSSHA), AutoRoute, and Height Above the Nearest Drainage (HAND). There is a trade-off among accuracy, workability, and computational demand between detailed, physics-based flood inundation models (e.g. HEC-RAS-2D and GSSHA) and semi-empirical, topography-based, computationally less expensive approaches (e.g. AutoRoute and HAND). The motivation for this study is to evaluate this trade-off and offer guidance for potential large-scale application in an operational prediction system. The models were assessed and contrasted via comparability analysis (e.g. overlap statistics) using three case studies in the states of Alabama, Texas, and West Virginia. The sensitivity and accuracy of the physical and semi-empirical models in producing inundation extent were evaluated for the following attributes: geophysical characteristics (e.g. high topographic variability vs. flat natural terrain, urbanized vs. rural zones, effect of the surface roughness parameter value), influence of hydraulic structures such as dams and levees compared to unobstructed flow conditions, accuracy in large vs. small study domains, and effect of the spatial resolution of topographic data (e.g. 10 m National Elevation Dataset vs. 0.3 m LiDAR). Preliminary results suggest that, in flat, urbanized areas with controlled/managed river channels, semi-empirical models tend to underestimate the inundation extent by around 40% compared to the physical models, regardless of topographic resolution. However, where there are topographic undulations, semi-empirical models attain a relatively higher level of accuracy than they do in flat, non-urbanized terrain.
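
    One common comparability measure for inundation extents is the fit statistic F = |A∩B| / |A∪B|; the sketch below applies it to synthetic binary rasters standing in for a physical and a semi-empirical model output.

    ```python
    import numpy as np

    def fit_statistic(a, b):
        """a, b: boolean rasters of inundated cells on the same grid."""
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else np.nan

    rng = np.random.default_rng(0)
    physical = rng.random((500, 500)) < 0.30              # stand-in physical extent
    semi_emp = physical & (rng.random((500, 500)) < 0.8)  # under-predicting model
    print(f"F = {fit_statistic(physical, semi_emp):.2f}")
    ```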

  7. Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.

    2009-01-01

    A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension, making both the forward model and the inversion algorithm more efficient.
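
    A hedged sketch of the principal-component idea: fit PCA on a set of spectra, keep a small number of scores, and verify that the reconstruction error stays small. Synthetic spectra stand in for channel radiances; this is not the authors' model.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    base = np.sin(np.linspace(0, 20, 2000))          # smooth "spectral" structure
    train = base + 0.01 * rng.standard_normal((5000, 2000))

    pca = PCA(n_components=50).fit(train)            # 2000 channels -> 50 scores
    scores = pca.transform(train)                    # compressed representation
    recon = pca.inverse_transform(scores)
    rms = np.sqrt(((recon - train) ** 2).mean())
    print(f"kept {scores.shape[1]} of {train.shape[1]} dims, RMS error = {rms:.4f}")
    ```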

  8. Psychiatry's next top model: cause for a re-think on drug models of psychosis and other psychiatric disorders.

    PubMed

    Carhart-Harris, R L; Brugger, S; Nutt, D J; Stone, J M

    2013-09-01

    Despite the widespread application of drug modelling in psychiatric research, the relative value of different models has never been formally compared in the same analysis. Here we compared the effects of five drugs (cannabis, psilocybin, amphetamine, ketamine and alcohol) in relation to psychiatric symptoms in a two-part subjective analysis. In the first part, mental health professionals associated statements referring to specific experiences, for example 'I don't bother to get out of bed', to one or more psychiatric symptom clusters, for example depression and negative psychotic symptoms. This measured the specificity of an experience for a particular disorder. In the second part, individuals with personal experience with each of the above-listed drugs were asked how reliably each drug produced the experiences listed in part 1, both acutely and sub-acutely. Part 1 failed to find any experiences that were specific for negative or cognitive psychotic symptoms over depression. The best model of positive symptoms was psilocybin and the best models overall were the acute alcohol and amphetamine models of mania. These results challenge current assumptions about drug models and motivate further research on this understudied area.

  9. An Objective Verification of the North American Mesoscale Model for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The 45th Weather Squadron (45 WS) Launch Weather Officers (LWOs) use the 12-km resolution North American Mesoscale (NAM) model (MesoNAM) text and graphical product forecasts extensively to support launch weather operations. However, the actual performance of the model at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) has not been measured objectively. In order to have tangible evidence of model performance, the 45 WS tasked the Applied Meteorology Unit (AMU; Bauman et al., 2004) to conduct a detailed statistical analysis of model output compared to observed values. The model products are provided to the 45 WS by ACTA, Inc. and include hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The objective analysis compared the MesoNAM forecast winds, temperature (T) and dew point (Td), as well as the changes in these parameters over time, to the observed values from the sensors in the KSC/CCAFS wind tower network shown in Table 1. These objective statistics give the forecasters knowledge of the model's strengths and weaknesses, which will result in improved forecasts for operations.

  10. Visualization of RNA structure models within the Integrative Genomics Viewer.

    PubMed

    Busan, Steven; Weeks, Kevin M

    2017-07-01

    Analyses of the interrelationships between RNA structure and function are increasingly important components of genomic studies. The SHAPE-MaP strategy enables accurate RNA structure probing and realistic structure modeling of kilobase-length noncoding RNAs and mRNAs. Existing tools for visualizing RNA structure models are not suitable for efficient analysis of long, structurally heterogeneous RNAs. In addition, structure models are often advantageously interpreted in the context of other experimental data and gene annotation information, for which few tools currently exist. We have developed a module within the widely used and well supported open-source Integrative Genomics Viewer (IGV) that allows visualization of SHAPE and other chemical probing data, including raw reactivities, data-driven structural entropies, and data-constrained base-pair secondary structure models, in context with linear genomic data tracks. We illustrate the usefulness of visualizing RNA structure in the IGV by exploring structure models for a large viral RNA genome, comparing bacterial mRNA structure in cells with its structure under cell- and protein-free conditions, and comparing a noncoding RNA structure modeled using SHAPE data with a base-pairing model inferred through sequence covariation analysis. © 2017 Busan and Weeks; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  11. Comparative assessment of turbulence model in predicting airflow over a NACA 0010 airfoil

    NASA Astrophysics Data System (ADS)

    Panday, Shoyon; Khan, Nafiz Ahmed; Rasel, Md; Faisal, Kh. Md.; Salam, Md. Abdus

    2017-06-01

    Nowadays the role of computational fluid dynamics in predicting the flow behavior over an airfoil is quite prominent. Most often a 2-D subsonic flow simulation is carried out over an airfoil at a certain Reynolds number and various angles of attack using different turbulence models based on the governing equations. Commonly used turbulence models include k-epsilon, k-omega, and Spalart-Allmaras. The choice of turbulence model effectively influences the result of the analysis. Here a comparative study is presented to show the effect of different turbulence models on a 2-D flow analysis over a National Advisory Committee for Aeronautics (NACA) 0010 airfoil. The airfoil was analysed at a Reynolds number of 200,000 at 10 different angles of attack at a constant speed of 21.6 m/s. A series of two-dimensional flow simulations was run, changing the turbulence model for each angle of attack, to determine which model's results come closest to experimental outcomes from a low-subsonic AF100 wind tunnel. This paper also documents the effect of high and low angles of attack on the flow behaviour over the airfoil.

  12. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.

    PubMed

    Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten

    2017-10-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
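
    For reference, the von Mises-Fisher log-density has a closed form involving a modified Bessel function; the sketch below evaluates it for standardized time series projected onto the unit hypersphere. This is a generic illustration, not the authors' collapsed Markov chain Monte Carlo sampler.

    ```python
    import numpy as np
    from scipy.special import ive

    def vmf_logpdf(X, mu, kappa):
        """log f(x) = log C_p(kappa) + kappa * mu.x for unit vectors X (rows)."""
        p = X.shape[1]
        v = p / 2.0 - 1.0
        # log I_v(kappa) = log(ive(v, kappa)) + kappa  (ive is exp-scaled)
        log_norm = (v * np.log(kappa) - (p / 2.0) * np.log(2 * np.pi)
                    - (np.log(ive(v, kappa)) + kappa))
        return log_norm + kappa * (X @ mu)

    rng = np.random.default_rng(2)
    ts = rng.standard_normal((100, 64))              # 100 voxels, 64 time points
    X = ts - ts.mean(axis=1, keepdims=True)          # standardize each series...
    X /= np.linalg.norm(X, axis=1, keepdims=True)    # ...onto the unit hypersphere
    mu = X.mean(axis=0); mu /= np.linalg.norm(mu)    # crude mean direction
    print(vmf_logpdf(X, mu, kappa=10.0)[:3])
    ```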

  13. Lack of species-specific difference in pulmonary function when using mouse versus human plasma in a mouse model of hemorrhagic shock.

    PubMed

    Peng, Zhanglong; Pati, Shibani; Fontaine, Magali J; Hall, Kelly; Herrera, Anthony V; Kozar, Rosemary A

    2016-11-01

    Clinical studies have demonstrated that the early and empiric use of plasma improves survival after hemorrhagic shock. We have demonstrated in rodent models of hemorrhagic shock that resuscitation with plasma is protective to the lungs compared with lactated Ringer's solution. As our long-term objective is to determine the molecular mechanisms that modulate plasma's protective effects in injured bleeding patients, we have used human plasma in a mouse model of hemorrhagic shock. The goal of the current experiments is to determine if there are significant adverse effects on lung injury when using human versus mouse plasma in an established murine model of hemorrhagic shock and laparotomy. Mice underwent laparotomy and 90 minutes of hemorrhagic shock to a mean arterial pressure (MAP) of 35 ± 5 mm Hg followed by resuscitation at 1× shed blood using either mouse fresh frozen plasma (FFP), human FFP, or human lyophilized plasma. Mean arterial pressure was recorded during shock and for the first 30 minutes of resuscitation. After 3 hours, animals were killed, and lungs collected for analysis. There was a significant increase in early MAP when mouse FFP was used to resuscitate animals compared with human FFP or human lyophilized plasma. However, despite these differences, analysis of the mouse lungs revealed no significant differences in pulmonary histopathology, lung permeability, or lung edema between all three plasma groups. Analysis of neutrophil infiltration in the lungs revealed that mouse FFP decreased neutrophil influx as measured by neutrophil staining; however, myeloperoxidase immunostaining revealed no significant differences in between groups. The study of human plasma in a mouse model of hemorrhagic shock is feasible but does reveal some differences compared with mouse plasma-based resuscitation in physiologic measures such as MAP postresuscitation. Measures of end organ function such as lung injury appear to be comparable in this acute model of hemorrhagic shock and resuscitation.

  14. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    PubMed

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
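
    Both recommended schemes have simple closed forms; the sketch below implements effective-sample-size weighting of Z-scores and inverse-variance weighting of log-odds-scale effects on hypothetical per-study summary statistics.

    ```python
    import numpy as np

    # Per-study inputs (hypothetical): Z-scores, case/control counts, and
    # allelic effects already converted onto the log-odds scale.
    z = np.array([2.1, 1.4, 2.8])
    n_cases = np.array([900.0, 400.0, 1500.0])
    n_controls = np.array([1100.0, 2600.0, 1500.0])
    beta = np.array([0.11, 0.09, 0.13])       # log-odds per allele
    se = np.array([0.050, 0.070, 0.045])

    # (i) effective-sample-size weighted Z-score meta-analysis
    n_eff = 4.0 / (1.0 / n_cases + 1.0 / n_controls)
    w = np.sqrt(n_eff)
    z_meta = (w * z).sum() / np.sqrt((w ** 2).sum())

    # (ii) fixed-effects inverse-variance weighting of allelic effects
    iv = 1.0 / se ** 2
    beta_meta = (iv * beta).sum() / iv.sum()
    se_meta = np.sqrt(1.0 / iv.sum())
    print(f"Z_meta = {z_meta:.2f}; beta_meta = {beta_meta:.3f} (SE {se_meta:.3f})")
    ```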

  15. A Comparative Analysis of a Generalized Lanchester Equation Model and a Stochastic Computer Simulation Model.

    DTIC Science & Technology

    1987-03-01

    model is one in which words or numerical descriptions are used to represent an entity or process. An example of a symbolic model is a mathematical ...are the third type of model used in modeling combat attrition. Analytical models are symbolic models which use mathematical symbols and equations to...simplicity and the ease of tracing through the mathematical computations. In this section I will discuss some of the shortcomings which have been

  16. Preliminary Work for Modeling the Propellers of an Aircraft as a Noise Source in an Acoustic Boundary Element Analysis

    NASA Technical Reports Server (NTRS)

    Vlahopoulos, Nickolas; Lyle, Karen H.; Burley, Casey L.

    1998-01-01

    An algorithm for generating appropriate velocity boundary conditions for an acoustic boundary element analysis from the kinematics of an operating propeller is presented. It constitutes the initial phase of integrating sophisticated rotorcraft models into a conventional boundary element analysis. Currently, the pressure field is computed by a linear approximation. An initial validation of the developed process was performed by comparing numerical results to test data for the external acoustic pressure on the surface of a tilt-rotor aircraft for one flight condition.

  17. HCIT Contrast Performance Sensitivity Studies: Simulation Versus Experiment

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Shaklan, Stuart; Krist, John; Cady, Eric J.; Kern, Brian; Balasubramanian, Kunjithapatham

    2013-01-01

    Using NASA's High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory, we have experimentally investigated the sensitivity of dark hole contrast in a Lyot coronagraph for the following factors: 1) Lateral and longitudinal translation of an occulting mask; 2) An opaque spot on the occulting mask; 3) Sizes of the controlled dark hole area. Also, we compared the measured results with simulations obtained using both MACOS (Modeling and Analysis for Controlled Optical Systems) and PROPER optical analysis programs with full three-dimensional near-field diffraction analysis to model HCIT's optical train and coronagraph.

  18. Factor analysis and multiple regression between topography and precipitation on Jeju Island, Korea

    NASA Astrophysics Data System (ADS)

    Um, Myoung-Jin; Yun, Hyeseon; Jeong, Chang-Sam; Heo, Jun-Haeng

    2011-11-01

    In this study, new factors that influence precipitation were extracted from geographic variables using factor analysis, allowing for an accurate estimation of orographic precipitation. Correlation analysis was also used to examine the relationship between nine topographic variables from digital elevation models (DEMs) and precipitation on Jeju Island. In addition, a spatial analysis was performed in order to verify the validity of the regression model. From the results of the correlation analysis, it was found that all of the topographic variables had a positive correlation with precipitation. The relations between the variables also changed in accordance with a change in the precipitation duration. However, upon examining the correlation matrix, no significant relationship between the latitude and the aspect was found. According to the factor analysis, eight topographic variables (latitude being the exception) were found to have a direct influence on precipitation. Three factors were then extracted from the eight topographic variables. By directly comparing the multiple regression model with the factors (model 1) to the multiple regression model with the topographic variables (model 3), it was found that model 1 did not violate the limits of statistical significance and multicollinearity. As such, model 1 was considered appropriate for estimating precipitation when taking the topography into account. Overall, the multiple regression model using factor analysis (model 1) was found to be the best method for estimating orographic precipitation on Jeju Island.
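
    A sketch of the two-step procedure on synthetic data, assuming scikit-learn: extract factors from correlated topographic variables, then regress precipitation on the factor scores (model 1 in this record's terminology).

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    n = 200
    latent = rng.standard_normal((n, 3))                 # 3 underlying factors
    topo = (latent @ rng.standard_normal((3, 8))
            + 0.3 * rng.standard_normal((n, 8)))         # 8 topographic variables
    precip = latent @ np.array([5.0, 2.0, 1.0]) + rng.standard_normal(n)

    fa = FactorAnalysis(n_components=3).fit(topo)
    scores = fa.transform(topo)                          # factor scores (model 1)
    reg = LinearRegression().fit(scores, precip)
    print(f"R^2 on factor scores: {reg.score(scores, precip):.3f}")
    ```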

  19. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    PubMed Central

    Bossier, Han; Seurinck, Ruth; Kühn, Simone; Banaschewski, Tobias; Barker, Gareth J.; Bokde, Arun L. W.; Martinot, Jean-Luc; Lemaitre, Herve; Paus, Tomáš; Millenet, Sabina; Moerkerke, Beatrijs

    2018-01-01

    Given the increasing number of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly together with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More specifically, we consider the influence of the chosen group level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which only uses peak locations, and fixed effects and random effects meta-analysis, which take into account both peak location and height] and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results. PMID:29403344

  20. Impact of temporal upscaling and chemical transport model horizontal resolution on reducing ozone exposure misclassification

    NASA Astrophysics Data System (ADS)

    Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William

    2017-10-01

    We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved at any spatial and temporal resolution. The flexibility of the framework allows for input data at any choice of time scale and CTM predictions at any spatial resolution, with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact of these choices on exposure estimation error, first by comparing estimation errors when BME relied on ozone concentration data as an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more than the use of hourly data. This was primarily due to the poorer CTM performance for hourly average predicted ozone. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km² versus a coarser resolution of 36 × 36 km². Our analysis found that integrating the finer grid resolution CTM predictions not only reduced estimation error, but also increased the spatial variability in daily ozone estimates by a factor of five. This improvement was due to the improved spatial gradients and model performance found in the more finely resolved CTM simulation. The integration of observations and model predictions permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications for exposure error.

  1. Comparing Supply-Side Specifications in Models of Global Agriculture and the Food System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Sherman; van Meijl, Hans; Willenbockel, Dirk

    This paper compares the theoretical specification of production and technical change across the partial equilibrium (PE) and computable general equilibrium (CGE) models of the global agricultural and food system included in the AgMIP model comparison study. The two modeling approaches have different theoretical underpinnings concerning the scope of economic activity they capture and how they represent technology and the behavior of supply and demand in markets. This paper focuses on their different specifications of technology and supply behavior, comparing their theoretical and empirical treatments. While the models differ widely in their specifications of technology, both within and between the PE and CGE classes of models, we find that the theoretical responsiveness of supply to changes in prices can be similar, depending on parameter choices that define the behavior of supply functions over the domain of applicability defined by the common scenarios used in the AgMIP comparisons. In particular, we compare the theoretical specification of supply in CGE models with neoclassical production functions and PE models that focus on land and crop yields in agriculture. In practice, however, comparability of results given parameter choices is an empirical question, and the models differ in their sensitivity to variations in specification. To illustrate the issues, a sensitivity analysis is done with one global CGE model, MAGNET, to indicate how the results vary with different specifications of technical change, and how they compare with the results from PE models.

  2. Numerical Analysis of Thermo Hydraulic Conditions in Car Fog Lamp

    NASA Astrophysics Data System (ADS)

    Ramšak, M.; Žunič, Z.; Škerget, L.; Jurejevčič, T.

    2009-08-01

    In this article, coupled heat transfer in the solid and fluid regions of a car fog lamp is presented using the CFD software CFX [1]. All three basic modes of heat transfer are dealt with: conduction, convection, and radiation. Two different approaches to radiation modeling are compared. Laminar and turbulent flow modeling are also compared, since the computed Rayleigh number indicates a transitional flow regime. Results are in good agreement with the measurements.

  3. Testing and Analysis of Sensor Ports

    NASA Technical Reports Server (NTRS)

    Zhang, M.; Frendi, A.; Thompson, W.; Casiano, M. J.

    2016-01-01

    This Technical Publication summarizes the work focused on the testing and analysis of sensor ports. The tasks under this contract were divided into three areas: (1) Development of an Analytical Model, (2) Conducting a Set of Experiments, and (3) Obtaining Computational Solutions. Results from the experiment using both short and long sensor ports were obtained using harmonic, random, and frequency sweep plane acoustic waves. An amplification factor of the pressure signal between the port inlet and the back of the port is obtained and compared to models. Comparisons of model and experimental results showed very good agreement.

  4. Electrical Systems Analysis at NASA Glenn Research Center: Status and Prospects

    NASA Technical Reports Server (NTRS)

    Freeh, Joshua E.; Liang, Anita D.; Berton, Jeffrey J.; Wickenheiser, Timothy J.

    2003-01-01

    An analysis of an electrical power and propulsion system for a 2-place general aviation aircraft is presented to provide a status of such modeling at NASA Glenn Research Center. The thermodynamic/ electrical model and mass prediction tools are described and the resulting system power and mass are shown. Three technology levels are used to predict the effect of advancements in component technology. Methods of fuel storage are compared by mass and volume. Prospects for future model development and validation at NASA as well as possible applications are also summarized.

  5. Comparing of Cox model and parametric models in analysis of effective factors on event time of neuropathy in patients with type 2 diabetes.

    PubMed

    Kargarian-Marvasti, Sadegh; Rimaz, Shahnaz; Abolghasemi, Jamileh; Heydari, Iraj

    2017-01-01

    The Cox proportional hazards model is the most common method for analyzing the effects of several variables on survival time. However, under certain circumstances, parametric models give more precise estimates for analyzing survival data than Cox. The purpose of this study was to investigate the comparative performance of Cox and parametric models in a survival analysis of factors affecting the event time of neuropathy in patients with type 2 diabetes. This study included 371 patients with type 2 diabetes without neuropathy who were registered at the Fereydunshahr diabetes clinic. Subjects were followed up for the development of neuropathy from 2006 to March 2016. To investigate the factors influencing the event time of neuropathy, variables significant in the univariate model (P < 0.20) were entered into the multivariate Cox and parametric models (P < 0.05). In addition, the Akaike information criterion (AIC) and areas under ROC curves were used to evaluate the relative goodness of fit of the models and the efficiency of each procedure, respectively. Statistical computing was performed using R software version 3.2.3 (UNIX platforms, Windows and MacOS). Using Kaplan-Meier, the survival time of neuropathy was computed as 76.6 ± 5 months after the initial diagnosis of diabetes. After multivariate analysis with the Cox and parametric models, ethnicity, high-density lipoprotein, and family history of diabetes were identified as predictors of the event time of neuropathy (P < 0.05). According to the AIC, the log-normal model, with the lowest value, was the best-fitted model among the Cox and parametric models. According to the comparison of survival receiver operating characteristic curves, the log-normal model was considered the most efficient and best-fitted model.
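
    A sketch of an AIC-based comparison of Cox and parametric survival fits, assuming the Python lifelines package and its bundled Rossi dataset rather than the diabetes cohort; note the caveat in the comments about comparing a partial-likelihood AIC with parametric AICs.

    ```python
    from lifelines import CoxPHFitter, LogNormalAFTFitter, WeibullAFTFitter
    from lifelines.datasets import load_rossi

    df = load_rossi()                      # duration 'week', event 'arrest'
    cox = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")
    wei = WeibullAFTFitter().fit(df, duration_col="week", event_col="arrest")
    logn = LogNormalAFTFitter().fit(df, duration_col="week", event_col="arrest")

    # Caveat: the Cox AIC is based on the partial likelihood, so it is not
    # strictly comparable to the parametric AICs; shown only for illustration.
    print("Cox partial AIC:", round(cox.AIC_partial_, 1))
    print("Weibull AIC:    ", round(wei.AIC_, 1))
    print("Log-normal AIC: ", round(logn.AIC_, 1))
    ```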

  6. Application of Interface Technology in Nonlinear Analysis of a Stitched/RFI Composite Wing Stub Box

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Ransom, Jonathan B.

    1997-01-01

    A recently developed interface technology was successfully employed in the geometrically nonlinear analysis of a full-scale stitched/RFI composite wing box loaded in bending. The technology allows mismatched finite element models to be joined in a variationally consistent manner and reduces the modeling complexity by eliminating transition meshing. In the analysis, local finite element models of nonlinearly deformed wide bays of the wing box are refined without the need for transition meshing to the surrounding coarse mesh. The COMET-AR finite element code, which has the interface technology capability, was used to perform the analyses. The COMET-AR analysis is compared to both a NASTRAN analysis and to experimental data. The interface technology solution is shown to be in good agreement with both. The viability of interface technology for coupled global/local analysis of large scale aircraft structures is demonstrated.

  7. Leasing Into the Sun: A Mixed Method Analysis of Transactions of Homes with Third Party Owned Solar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoen, Ben; Rand, Joseph; Adomatis, Sandra

    This analysis is the first to examine whether homes with third-party owned (TPO) PV systems are unique in the marketplace as compared to non-PV or non-TPO PV homes. This is of growing importance, as the number of homes with TPO systems in the US is currently nearly half a million and growing. A hedonic pricing model analysis of 20,106 homes that sold in California between 2011 and 2013 is conducted, as well as a paired sales analysis of 18 pairs of TPO PV and non-PV homes in San Diego spanning 2012 and 2013. The hedonic model examined 2,914 non-TPO PV home sales and 113 TPO PV sales and fails to uncover statistically significant premiums for TPO PV homes or for those with pre-paid leases as compared to non-PV homes. Similarly, the paired sales analysis does not find evidence of an impact on value for the TPO homes when compared to non-PV homes. Analyses of non-TPO PV sales, both here and previously, have found larger and statistically significant premiums. Collection of a larger dataset covering the present period is recommended for future analyses so that smaller, more nuanced, and more recent effects can be discovered.

  8. Numerical and experimental analysis for solidification and residual stress in the GMAW process for AISI 304 stainless steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, J.; Mazumder, J.

    1996-12-31

    Networking three fields of welding--thermal, microstructure, and stress--was attempted and produced a reliable model using a numerical method with the finite element analysis technique. Model predictions were compared with experimental data in order to validate the model. The effects of welding process parameters on these welding fields were analyzed and reported. The effort to correlate residual stress and solidification was initiated, with some valuable results. The solidification process was simulated using a formulation based on the Hunt-Trivedi model. Based on the temperature history, solidification speed and primary dendrite arm spacing were predicted at given nodes of interest. Results show that the variation during solidification is usually within an order of magnitude. The temperature gradient was generally in the range of 10^4-10^5 K/m for the given welding conditions (welding power = 6 kW and welding speed = 3.3867 to 7.62 mm/sec), while the solidification speed appeared to slow from an order of 10^-1 to 10^-2 m/sec during solidification. SEM images revealed that the primary dendrite arm spacing (PDAS) fell in the range of 10^1-10^2 µm. For grain growth at the heat-affected zone (HAZ), Ashby's model was employed. The prediction was in agreement with experimental results. For the residual stress calculation, the same mesh generation used in the heat transfer analysis was applied to make the simulation consistent. The analysis consisted of a transient heat analysis followed by a thermal stress analysis. An experimentally measured strain history was compared with the simulated result. The relationship between the microstructure and the stress/strain field of welding was also obtained. 64 refs., 18 figs., 9 tabs.

  9. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    PubMed

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).

  10. Site term from single-station sigma analysis of S-waves in western Turkey

    NASA Astrophysics Data System (ADS)

    Akyol, Nihal

    2018-05-01

    The main aim of this study is to obtain site terms from single-station sigma analysis and to compare them with the site functions resulting from different techniques. The dataset consists of 1764 records from 322 micro- and moderate-size local earthquakes recorded by 29 stations in western Turkey. Median models were derived from S-wave Fourier amplitude spectra for selected 22 frequencies, by utilizing the MLR procedure which performs the maximum likelihood (ML) estimation of mixed models where the fixed effects are treated as random (R) effects with infinite variance. At this stage, b (geometrical spreading coefficient) and Q (quality factor) values were decomposed, simultaneously. The residuals of the median models were examined by utilizing the single-station sigma analysis to obtain the site terms of 29 stations. Sigma for the median models is about 0.422 log10 units and decreases to about 0.308, when the site terms from the single-station sigma analysis were considered (27% reduction). The event-corrected within-event standard deviations for each frequency are rather stable, in the range 0.19-0.23 log10 units with an average value of 0.20 (± 0.01). The site terms from single-station sigma analysis were compared with the site function estimates from the horizontal-to-vertical-spectral-ratio (HVSR) and generalized inversion (INV) techniques by Akyol et al. (2013) and Kurtulmuş and Akyol (2015), respectively. Consistency was observed between the single-station sigma site terms and the INV site transfer functions. The results imply that the single-station sigma analysis could separate the site terms with respect to the median models.

  11. A network-based analysis of CMIP5 "historical" experiments

    NASA Astrophysics Data System (ADS)

    Bracco, A.; Foudalis, I.; Dovrolis, C.

    2012-12-01

    In computer science, "complex network analysis" refers to a set of metrics, modeling tools and algorithms commonly used in the study of complex nonlinear dynamical systems. Its main premise is that the underlying topology or network structure of a system has a strong impact on its dynamics and evolution. By allowing to investigate local and non-local statistical interaction, network analysis provides a powerful, but only marginally explored, framework to validate climate models and investigate teleconnections, assessing their strength, range, and impacts on the climate system. In this work we propose a new, fast, robust and scalable methodology to examine, quantify, and visualize climate sensitivity, while constraining general circulation models (GCMs) outputs with observations. The goal of our novel approach is to uncover relations in the climate system that are not (or not fully) captured by more traditional methodologies used in climate science and often adopted from nonlinear dynamical systems analysis, and to explain known climate phenomena in terms of the network structure or its metrics. Our methodology is based on a solid theoretical framework and employs mathematical and statistical tools, exploited only tentatively in climate research so far. Suitably adapted to the climate problem, these tools can assist in visualizing the trade-offs in representing global links and teleconnections among different data sets. Here we present the methodology, and compare network properties for different reanalysis data sets and a suite of CMIP5 coupled GCM outputs. With an extensive model intercomparison in terms of the climate network that each model leads to, we quantify how each model reproduces major teleconnections, rank model performances, and identify common or specific errors in comparing model outputs and observations.

  12. Comparison of modeling approaches for carbon partitioning: Impact on estimates of global net primary production and equilibrium biomass of woody vegetation from MODIS GPP

    NASA Astrophysics Data System (ADS)

    Ise, Takeshi; Litton, Creighton M.; Giardina, Christian P.; Ito, Akihiko

    2010-12-01

    Partitioning of gross primary production (GPP) to aboveground versus belowground, to growth versus respiration, and to short- versus long-lived tissues exerts a strong influence on ecosystem structure and function, with potentially large implications for the global carbon budget. A recent meta-analysis of forest ecosystems suggests that carbon partitioning to leaves, stems, and roots varies consistently with GPP and that the ratio of net primary production (NPP) to GPP is conservative across environmental gradients. To examine the influence of the carbon partitioning schemes employed by global ecosystem models, we used this meta-analysis-based model and a satellite-based (MODIS) terrestrial GPP data set to estimate global woody NPP and equilibrium biomass, and then compared the estimates to those of two process-based ecosystem models (Biome-BGC and VISIT) using the same GPP data set. We hypothesized that different carbon partitioning schemes would result in large differences in global estimates of woody NPP and equilibrium biomass. Woody NPP estimated by Biome-BGC and VISIT was 25% and 29% higher, respectively, than the meta-analysis-based model for boreal forests, with smaller differences in temperate and tropical forests. Global equilibrium woody biomass, calculated from model-specific NPP estimates and a single set of tissue turnover rates, was 48 and 226 Pg C higher for Biome-BGC and VISIT, respectively, compared to the meta-analysis-based model, reflecting differences in carbon partitioning to structural versus metabolically active tissues. In summary, we found that different carbon partitioning schemes resulted in large variations in estimates of global woody carbon flux and storage, indicating that stand-level controls on carbon partitioning are not yet accurately represented in ecosystem models.
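    The equilibrium-biomass step can be illustrated with a toy calculation (all numbers invented): at steady state, each tissue pool equals its NPP allocation divided by its turnover rate.

        # Toy steady-state biomass: B_i = NPP_i / k_i for each tissue pool i,
        # where k_i is the fractional turnover rate (1/yr). Values are invented.
        npp_partition = {"leaves": 0.30, "stems": 0.45, "roots": 0.25}  # NPP fractions
        turnover = {"leaves": 1.0, "stems": 0.03, "roots": 0.10}        # 1/yr
        npp_total = 25.0  # hypothetical global woody NPP, Pg C/yr

        biomass_eq = {t: npp_partition[t] * npp_total / turnover[t]
                      for t in npp_partition}
        print(biomass_eq, "-> total: %.0f Pg C" % sum(biomass_eq.values()))

    Because the long-lived stem pool dominates the equilibrium stock, modest differences in NPP partitioning can translate into the large biomass differences reported here.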

  13. STRUCTURAL ESTIMATES OF TREATMENT EFFECTS ON OUTCOMES USING RETROSPECTIVE DATA: AN APPLICATION TO DUCTAL CARCINOMA IN SITU

    PubMed Central

    Gold, Heather Taffet; Sorbero, Melony E. S.; Griggs, Jennifer J.; Do, Huong T.; Dick, Andrew W.

    2013-01-01

    Analysis of observational cohort data is subject to bias from unobservable risk selection. We compared econometric models and treatment effectiveness estimates using the linked Surveillance, Epidemiology, and End Results (SEER)-Medicare claims data for women diagnosed with ductal carcinoma in situ. Treatment effectiveness estimates for mastectomy and breast conserving surgery (BCS) with or without radiotherapy were compared using three different models: a simultaneous-equations model, a discrete-time survival model with unobserved heterogeneity (frailty), and a proportional hazards model. Overall trends in disease-free survival (DFS), or time to first subsequent breast event, by treatment are similar regardless of the model, with mastectomy yielding the highest DFS over 8 years of follow-up, followed by BCS with radiotherapy, and then BCS alone. Absolute rates and the direction of bias varied substantially by treatment strategy. DFS was underestimated by the single-equation and frailty models, compared to the simultaneous-equations model and randomized controlled trial results, for BCS with radiotherapy, and overestimated for BCS alone. PMID:21602195

  14. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
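    A minimal sketch of the univariate normal-approximation approach that the abstract contrasts with binomial-likelihood models (DerSimonian-Laird pooling of logit sensitivities; the 2 × 2 counts are invented):

        import numpy as np

        # Invented per-study (true positive, false negative) counts.
        tp = np.array([45, 30, 80, 12, 60])
        fn = np.array([5, 10, 15, 8, 6])

        # Normal approximation on the logit scale, with 0.5 continuity correction.
        p = (tp + 0.5) / (tp + fn + 1.0)
        y = np.log(p / (1 - p))                  # logit sensitivity per study
        v = 1 / (tp + 0.5) + 1 / (fn + 0.5)      # approximate within-study variance

        # DerSimonian-Laird between-study variance, then random-effects pooling.
        w = 1 / v
        mu_fe = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - mu_fe) ** 2)
        tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_re = 1 / (v + tau2)
        mu = np.sum(w_re * y) / np.sum(w_re)
        print("pooled sensitivity: %.3f" % (1 / (1 + np.exp(-mu))))

    A binomial-likelihood alternative would instead model tp ~ Binomial(tp + fn, sens_i) directly, avoiding both the continuity correction and the shrinkage toward 50% noted in the abstract.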

  15. Adaptation of video game UVW mapping to 3D visualization of gene expression patterns

    NASA Astrophysics Data System (ADS)

    Vize, Peter D.; Gerth, Victor E.

    2007-01-01

    Analysis of gene expression patterns within an organism plays a critical role in associating genes with biological processes in both health and disease. During embryonic development, the analysis and comparison of different gene expression patterns allows biologists to identify candidate genes that may regulate the formation of normal tissues and organs and to search for genes associated with congenital diseases. No two individual embryos, or organs, are exactly the same shape or size, so comparing spatial gene expression in one embryo to that in another is difficult. We will present our efforts in comparing gene expression data collected using both volumetric and projection approaches. Volumetric data is highly accurate but difficult to process and compare. Projection methods use UV mapping to align texture maps to standardized spatial frameworks. This approach is less accurate but is very rapid and requires very little processing. We have built a database of over 180 3D models depicting gene expression patterns mapped onto the surface of spline-based embryo models. Gene expression data in different models can easily be compared to determine common regions of activity. Visualization software, in both Java and OpenGL, optimized for viewing 3D gene expression data will also be demonstrated.

  16. IMPACT OF TRMM PRECIPITATION ON CPTEC’S RPSAS ANALYSIS

    NASA Astrophysics Data System (ADS)

    Herdies, D. L.; Bastarz, C. F.; Fernandez, J. P.

    2009-12-01

    In this work, a data assimilation study was performed to assess the impact of estimated precipitation from TRMM (Tropical Rainfall Measuring Mission) on the CPTEC (Centro de Previsão de Tempo e Estudos Climáticos, Brazil) RPSAS (Regional Physical-space Statistical Analysis System) analyses and the Eta model forecasts over the La Plata Basin region, during a Mesoscale Convective Complex (MCC) case that occurred between 22 and 23 January 2003. The RPSAS data assimilation system and the mesoscale regional Eta model (both at 20-km spatial resolution) were run together with and without the TRMM precipitation. In this study the assimilation of precipitation is basically a nudging process, performed during the first-guess stage by the Eta model, as in the NCEP (National Centers for Environmental Prediction) EDAS (Eta Data Assimilation System) precipitation assimilation. During this process the model adjusts the precipitation by comparing, at each grid point and time step, the model precipitation against the TRMM precipitation. In doing so, adjustments are made to the vertical latent-heating profile, the water vapor mixing ratio, and the relative humidity, consistent with the Betts-Miller-Janjic convective parameterization. In the next step, RPSAS produces an analysis covering most of South America and the adjacent oceans, from which the Eta model produces 6-h, 12-h, 18-h, and 24-h forecasts. Data collected during SALLJEX (South America Low Level Jet EXperiment) were used to verify the model forecasts, and the CPTEC 40-km Regional Reanalysis was used for comparison with the RPSAS analyses. Preliminary results show that the precipitation assimilation improves the first hours of the forecast (typically 6 h). The variables verified were the zonal and meridional wind, geopotential height, and precipitation. The convective precipitation fields were improved, mainly over the 6-h forecast; this is an important improvement because the first-guess field serves as the starting point for the next analysis window. It was also noted that the mean error for these variables was reduced (principally for the zonal wind). With an improved first-guess field, the model was able to detect the MCC that occurred in northern Argentina, owing to the improved representation of the wind fields (direction and intensity), pressure, and surface variables.
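    The nudging step can be written generically as a relaxation term added to the model tendency; this is a schematic textbook form, not the specific EDAS implementation:

        \frac{\partial \phi}{\partial t} = F(\phi) + \frac{\phi_{\mathrm{obs}} - \phi}{\tau}

    Here \phi stands for the adjusted fields (the latent-heating profile and the moisture variables), \phi_obs is the value implied by the TRMM precipitation at a given grid point and time step, F is the model tendency, and \tau is a relaxation time scale that controls how strongly the first guess is pulled toward the observations.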

  17. Cancer cells growing on perfused 3D collagen model produced higher reactive oxygen species level and were more resistant to cisplatin compared to the 2D model.

    PubMed

    Liu, Qingxi; Zhang, Zijiang; Liu, Yupeng; Cui, Zhanfeng; Zhang, Tongcun; Li, Zhaohui; Ma, Wenjian

    2018-03-01

    Three-dimensional (3D) collagen scaffold models, due to their ability to mimic tissue and organ structure in vivo, have received increasing interest in drug discovery and toxicity evaluation. In this study, we developed a perfused 3D model and studied cellular response to cytotoxic drugs in comparison with traditional 2D cell cultures, as evaluated with the cancer drug cisplatin. Cancer cells grown in the perfused 3D environment showed increased levels of reactive oxygen species (ROS) production compared to the 2D culture. As determined by growth analysis, cells in the 3D culture, after forming a spheroid, were more resistant to cisplatin than cells in the 2D culture. In addition, cells in 3D culture showed an elevated level of ROS, indicating a physiological change or the formation of a microenvironment resembling that of tumor cells in vivo. These data revealed that the cellular response to drugs of cells growing in 3D environments is dramatically different from that of 2D-cultured cells. Thus, the perfused 3D collagen scaffold model we report here may be a very useful tool for drug analysis.

  18. Visible lesion thresholds and model predictions for Q-switched 1318-nm and 1540-nm laser exposures to porcine skin

    NASA Astrophysics Data System (ADS)

    Zohner, Justin J.; Schuster, Kurt J.; Chavey, Lucas J.; Stolarski, David J.; Kumru, Semih S.; Rockwell, Benjamin A.; Thomas, Robert J.; Cain, Clarence P.

    2006-02-01

    Skin damage thresholds were measured and compared with theoretical predictions using a skin thermal model for near-IR laser pulses at 1318 nm and 1540 nm. For the 1318-nm data, a Q-switched, 50-ns pulse with a spot size of 5 mm was applied to porcine skin and the damage thresholds were determined at 1 hour and 24 hours postexposure using Probit analysis. The same analysis was conducted for a Q-switched, 30-ns pulse at 1540 nm with a spot size of 5 mm. The Yucatan mini-pig was used as the skin model for human skin due to its similarity to pigmented human skin. The ED50 for these skin exposures at 24 hours postexposure was 10.5 J/cm2 for the 1318-nm exposures, and 6.1 J/cm2 for the 1540-nm exposures. These results were compared to thermal model predictions. We show that the thermal model fails to account for the ED50 values observed. A brief discussion of the possible causes of this discrepancy is presented. These thresholds are also compared with previously published skin minimum visible lesion (MVL) thresholds and with the ANSI Standard's MPE for 1318-nm lasers at 50 ns and 1540-nm lasers at 30 ns.
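    Probit-based ED50 estimation of the kind used here can be sketched as follows (exposure data invented; the probit fit uses statsmodels):

        import numpy as np
        import statsmodels.api as sm

        # Invented binary lesion outcomes vs. radiant exposure (J/cm2).
        dose = np.array([3, 4, 5, 6, 8, 10, 12, 15, 18, 22], dtype=float)
        lesion = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

        X = sm.add_constant(np.log10(dose))      # probit on log10 dose
        fit = sm.Probit(lesion, X).fit(disp=0)
        b0, b1 = fit.params

        # ED50: the dose where the predicted lesion probability is 0.5,
        # i.e. where b0 + b1 * log10(dose) = 0.
        ed50 = 10 ** (-b0 / b1)
        print("ED50 = %.1f J/cm2" % ed50)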

  19. Thermodynamic stability of nanosized multicomponent bubbles/droplets: the square gradient theory and the capillary approach.

    PubMed

    Wilhelmsen, Øivind; Bedeaux, Dick; Kjelstrup, Signe; Reguera, David

    2014-01-14

    Formation of nanosized droplets/bubbles from a metastable bulk phase is connected to many unresolved scientific questions. We analyze the properties and stability of multicomponent droplets and bubbles in the canonical ensemble, and compare with single-component systems. The bubbles/droplets are described on the mesoscopic level by square gradient theory. Furthermore, we compare the results to a capillary model which gives a macroscopic description. Remarkably, the solutions of the square gradient model, representing bubbles and droplets, are accurately reproduced by the capillary model except in the vicinity of the spinodals. The solutions of the square gradient model form closed loops, which shows the inherent symmetry and connected nature of bubbles and droplets. A thermodynamic stability analysis is carried out, where the second variation of the square gradient description is compared to the eigenvalues of the Hessian matrix in the capillary description. The analysis shows that it is impossible to stabilize arbitrarily small bubbles or droplets in closed systems and gives insight into metastable regions close to the minimum bubble/droplet radii. Despite the large difference in complexity, the square gradient and the capillary model predict the same finite threshold sizes and very similar stability limits for bubbles and droplets, both for single-component and two-component systems.
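    For orientation, the capillary (sharp-interface) description rests on the Young-Laplace condition and the corresponding work of formation; these are the standard textbook expressions for a single-component bubble or droplet with surface tension \gamma, quoted here for reference rather than taken from the paper:

        \Delta p = p_{\mathrm{in}} - p_{\mathrm{out}} = \frac{2\gamma}{r^{*}},
        \qquad
        \Delta\Omega^{*} = \frac{16\pi\gamma^{3}}{3\,(\Delta p)^{2}},

    with the critical radius r* separating shrinking from growing clusters. Square gradient theory replaces this sharp interface with a continuous density profile, which is why the two descriptions can be compared as in the abstract.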

  20. Thermodynamic stability of nanosized multicomponent bubbles/droplets: The square gradient theory and the capillary approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilhelmsen, Øivind, E-mail: oivind.wilhelmsen@ntnu.no; Bedeaux, Dick; Kjelstrup, Signe

    Formation of nanosized droplets/bubbles from a metastable bulk phase is connected to many unresolved scientific questions. We analyze the properties and stability of multicomponent droplets and bubbles in the canonical ensemble, and compare with single-component systems. The bubbles/droplets are described on the mesoscopic level by square gradient theory. Furthermore, we compare the results to a capillary model which gives a macroscopic description. Remarkably, the solutions of the square gradient model, representing bubbles and droplets, are accurately reproduced by the capillary model except in the vicinity of the spinodals. The solutions of the square gradient model form closed loops, which shows the inherent symmetry and connected nature of bubbles and droplets. A thermodynamic stability analysis is carried out, where the second variation of the square gradient description is compared to the eigenvalues of the Hessian matrix in the capillary description. The analysis shows that it is impossible to stabilize arbitrarily small bubbles or droplets in closed systems and gives insight into metastable regions close to the minimum bubble/droplet radii. Despite the large difference in complexity, the square gradient and the capillary model predict the same finite threshold sizes and very similar stability limits for bubbles and droplets, both for single-component and two-component systems.

  1. Benzodiazepines and antipsychotic medications for treatment of acute cocaine toxicity in animal models--a systematic review and meta-analysis.

    PubMed

    Heard, Kennon; Cleveland, Nathan R; Krier, Shay

    2011-11-01

    There are no controlled human studies to determine the efficacy of benzodiazepines or antipsychotic medications for prevention or treatment of acute cocaine toxicity. The only available controlled data are from animal models and these studies have reported inconsistent benefits. The objective of this study was to quantify the reported efficacy of benzodiazepines and antipsychotic medication for the prevention of mortality due to cocaine poisoning. We conducted a systematic review to identify English language articles describing experiments that compared a benzodiazepine or antipsychotic medication to placebo for the prevention of acute cocaine toxicity in an animal model. We then used these articles in a meta-analysis with a random-effects model to quantify the absolute risk reduction observed in these experiments. We found 10 articles evaluating antipsychotic medications and 15 articles evaluating benzodiazepines. Antipsychotic medications reduced the risk of death by 27% (95% CI, 15.2%-38.7%) compared to placebo and benzodiazepines reduced the risk of death by 52% (42.8%-60.7%) compared to placebo. Both treatments showed evidence of a dose-response effect, and no experiment found a statistically significant increase in risk of death. We conclude that both benzodiazepines and antipsychotic medications are effective for the prevention of lethality from cocaine toxicity in animal models.

  2. Science-Technology-Society literacy in college non-majors biology: Comparing problem/case studies based learning and traditional expository methods of instruction

    NASA Astrophysics Data System (ADS)

    Peters, John S.

    This study used a multiple response model (MRM) on selected items from the Views on Science-Technology-Society (VOSTS) survey to examine science-technology-society (STS) literacy among college non-science majors taught using Problem/Case Studies Based Learning (PBL/CSBL) and traditional expository methods of instruction. An initial pilot investigation of 15 VOSTS items produced a valid and reliable scoring model which can be used to quantitatively assess student literacy on a variety of STS topics deemed important for informed civic engagement in science-related social and environmental issues. The new scoring model allows for the use of parametric inferential statistics to test hypotheses about factors influencing STS literacy. The follow-up cross-institutional study comparing teaching methods employed Hierarchical Linear Modeling (HLM) to model the efficiency and equitability of instructional methods on STS literacy. A cluster analysis was also used to compare pre- and post-course patterns of student views on the set of positions expressed within VOSTS items. HLM analysis revealed significantly higher instructional efficiency in the PBL/CSBL study group for 4 of the 35 STS attitude indices (characterization of media vs. school science; tentativeness of scientific models; cultural influences on scientific research), and more equitable effects of traditional instruction on one attitude index (interdependence of science and technology). Cluster analysis revealed generally stable patterns of pre- to post-course views across study groups, but also revealed possible teaching method effects on the relationship between the views expressed within VOSTS items with respect to (1) interdependency of science and technology; (2) anti-technology views; (3) socioscientific decision-making; (4) scientific/technological solutions to environmental problems; (5) usefulness of school vs. media characterizations of science; (6) social constructivist vs. objectivist views of theories; (7) impact of cultural religious/ethical views on science; (8) tentativeness of scientific models, evidence and predictions; (9) civic control of technological developments. This analysis also revealed common relationships between student views which would not have been revealed under the original unique response model (URM) of VOSTS, as well as common viewpoint patterns that warrant further qualitative exploration.

  3. A comparative analysis of restorative materials used in abfraction lesions in tooth with and without occlusal restoration: Three-dimensional finite element analysis

    PubMed Central

    Srirekha, A; Bashetty, Kusum

    2013-01-01

    Objectives: The present comparative analysis aimed at evaluating the mechanical behavior of various restorative materials in abfraction lesions in the presence and absence of occlusal restoration. Materials and Methods: A three-dimensional finite-element analysis was performed. Six experimental models of a mandibular first premolar were generated and divided into two groups (groups A and B) of three each. All the groups had cervical abfraction lesions restored with the materials, and in addition group A had a class I occlusal restoration. Loads of 90 N, 200 N, and 400 N were applied at a 45° loading angle on the buccal inclines of the buccal cusp, and von Mises stress was chosen for analysis. Results: In all the models, the stress values recorded at the cervical margin of the restorations were at their maxima. Irrespective of the occlusal restoration, all the materials performed well at 90 N and 200 N. At 400 N, only the low-shrink composite showed stresses less than its tensile strength, indicating its success even at higher loads. Conclusion: Irrespective of occlusal restoration, restorative materials with a low modulus of elasticity are successful in abfraction lesions at moderate tensile stresses, whereas materials with a higher modulus of elasticity and better mechanical properties can support higher loads and resist wear. Significance: The model allows comparison of different restorative materials for restoration of abfraction lesions in the presence and absence of occlusal restoration. The model can be used to validate more sophisticated computational models as well as to conduct various optimization studies. PMID:23716970

  4. Modeling time-to-event (survival) data using classification tree analysis.

    PubMed

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (ie, easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.
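    As a small illustration of the Brier-score comparison used here: the time-dependent Brier score at a horizon t compares predicted survival probabilities with observed status. The toy below assumes fully observed (uncensored) data for brevity; in practice censoring weights are required, and the integrated score averages over t.

        import numpy as np

        # Toy, fully observed survival data: event times and predicted S(t).
        event_time = np.array([2.0, 5.0, 7.5, 1.2, 9.0, 4.4])
        t = 5.0                                              # evaluation horizon
        surv_prob = np.array([0.35, 0.55, 0.70, 0.20, 0.80, 0.50])

        alive = (event_time > t).astype(float)               # observed status at t
        brier = np.mean((surv_prob - alive) ** 2)
        print("Brier score at t=%.0f: %.3f" % (t, brier))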

  5. Benefits of explicit urban parameterization in regional climate modeling to study climate and city interactions

    NASA Astrophysics Data System (ADS)

    Daniel, M.; Lemonsu, Aude; Déqué, M.; Somot, S.; Alias, A.; Masson, V.

    2018-06-01

    Most climate models do not explicitly model urban areas and at best describe them as rock covers. Nonetheless, the very high resolutions now reached by regional climate models may justify and require a more realistic parameterization of surface exchanges between the urban canopy and the atmosphere. To quantify the potential impact of urbanization on the regional climate, and to evaluate the benefits of a detailed urban canopy model compared with a simpler approach, a sensitivity study was carried out over France at a 12-km horizontal resolution with the ALADIN-Climate regional model for the 1980-2009 period. Different descriptions of land use and urban modeling were compared, corresponding to an explicit modeling of cities with the urban canopy model TEB, a conventional and simpler approach representing urban areas as rocks, and a vegetated experiment in which cities are replaced by natural covers. A general evaluation of ALADIN-Climate was first carried out, which showed an overestimation of the incoming solar radiation but satisfactory results in terms of precipitation and near-surface temperatures. The sensitivity analysis then highlighted that urban areas had a significant impact on modeled near-surface temperatures. A further analysis of a few large French cities indicated that over the 30 years of simulation they all induced a warming effect both at daytime and nighttime, with values up to +1.5 °C for the city of Paris. The urban model also led to a regional warming extending beyond the boundaries of the urban areas. Finally, the comparison to temperature observations available for the Paris area highlighted that the detailed urban canopy model improved the modeling of the urban heat island compared with the simpler approach.

  6. Application of third molar development and eruption models in estimating dental age in Malay sub-adults.

    PubMed

    Mohd Yusof, Mohd Yusmiaidil Putera; Cauwels, Rita; Deschepper, Ellen; Martens, Luc

    2015-08-01

    Third molar development (TMD) has been widely utilized as one of the radiographic methods for dental age estimation. Using the same radiograph of the same individual, third molar eruption (TME) information can be incorporated into the TMD regression model. This study aims to evaluate the performance of dental age estimation using the individual-method models and the combined model (TMD and TME), based on classic multiple linear and principal component regressions. A sample of 705 digital panoramic radiographs of Malay sub-adults aged between 14.1 and 23.8 years was collected. The techniques described by Gleiser and Hunt (modified by Kohler) and by Olze were employed to stage TMD and TME, respectively. The data were divided to develop three respective models based on the two regression approaches. The trained models were then validated on the test sample, and the accuracy of age prediction was compared between models. The coefficient of determination (R²) and root mean square error (RMSE) were calculated. In both genders, adjusted R² increased in the linear regressions of the combined model as compared to the individual models. An overall decrease in RMSE was detected in the combined model as compared to TMD (0.03-0.06) and TME (0.2-0.8). In principal component regression, the combined model exhibited low adjusted R² and high RMSE, except in males. Dental age is therefore better predicted using the combined model in multiple linear regression. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
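    A sketch of the combined-model idea, with TMD and TME stages as joint predictors of age in a multiple linear regression (the data and stage coding are invented):

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(2)
        n = 705
        tmd = rng.integers(1, 11, n)        # invented development stages
        tme = rng.integers(1, 5, n)         # invented eruption stages
        age = 13 + 0.8 * tmd + 0.5 * tme + rng.normal(0, 1.2, n)

        Xtr, Xte, ytr, yte = train_test_split(
            np.column_stack([tmd, tme]), age, test_size=0.3, random_state=0)

        for name, cols in [("TMD only", [0]), ("TME only", [1]), ("combined", [0, 1])]:
            m = LinearRegression().fit(Xtr[:, cols], ytr)
            rmse = mean_squared_error(yte, m.predict(Xte[:, cols])) ** 0.5
            print("%-9s RMSE = %.2f yr" % (name, rmse))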

  7. Analysis of composite plates by using mechanics of structure genome and comparison with ANSYS

    NASA Astrophysics Data System (ADS)

    Zhao, Banghua

    Motivated by the recently introduced concept of the Structure Genome (SG), defined as the smallest mathematical building block of a structure, a new approach named Mechanics of Structure Genome (MSG) to model and analyze composite plates is introduced. MSG is implemented in a general-purpose code named SwiftComp(TM), which provides the constitutive models needed in structural analysis by homogenization and pointwise local fields by dehomogenization. To improve the user friendliness of SwiftComp(TM), a simple graphic user interface (GUI) based on the ANSYS Mechanical APDL platform, called ANSYS-SwiftComp GUI, was developed, which provides a convenient way to create common or arbitrary customized SG models in ANSYS and invoke SwiftComp(TM) to perform homogenization and dehomogenization. The global structural analysis can also be handled in ANSYS after homogenization, which can predict the global behavior and provide the inputs needed for dehomogenization. To demonstrate the accuracy and efficiency of the MSG approach, several numerical cases are studied and compared using both MSG and ANSYS. In the ANSYS approach, 3D solid element models (ANSYS 3D approach) are used as reference models, and the 2D shell element models created by ANSYS Composite PrepPost (ACP approach) are compared with the MSG approach. The results of the MSG approach agree well with the ANSYS 3D approach while being as efficient as the ACP approach. Therefore, the MSG approach provides an efficient and accurate new way to model composite plates.

  8. Modelling the cost effectiveness of antidepressant treatment in primary care.

    PubMed

    Revicki, D A; Brown, R E; Palmer, W; Bakish, D; Rosser, W W; Anton, S F; Feeny, D

    1995-12-01

    The aim of this study was to estimate the cost effectiveness of nefazodone compared with imipramine or fluoxetine in treating women with major depressive disorder. Clinical decision analysis and a Markov state-transition model were used to estimate the lifetime health outcomes and medical costs of 3 antidepressant treatments. The model, which represents ideal primary care practice, compares treatment with nefazodone to treatment with either imipramine or fluoxetine. The economic analysis was based on the healthcare system of the Canadian province of Ontario, and considered only direct medical costs. Health outcomes were expressed as quality-adjusted life years (QALYs) and costs were in 1993 Canadian dollars ($Can; $Can1 = $US0.75, September 1995). Incremental cost-utility ratios were calculated comparing the relative lifetime discounted medical costs and QALYs associated with nefazodone with those of imipramine or fluoxetine. Data for constructing the model and estimating necessary parameters were derived from the medical literature, clinical trial data, and physician judgement. Data included information on: Ontario primary care physicians' clinical management of major depression; medical resource use and costs; probabilities of recurrence of depression; suicide rates; compliance rates; and health utilities. Estimates of utilities for depression-related hypothetical health states were obtained from patients with major depression (n = 70). Medical costs and QALYs were discounted to present value using a 5% rate. Sensitivity analyses tested the assumptions of the model by varying the discount rate, depression recurrence rates, compliance rates, and the duration of the model. The base case analysis found that nefazodone treatment costs $Can1447 less per patient than imipramine treatment (discounted lifetime medical costs were $Can50,664 vs $Can52,111) and increases the number of QALYs by 0.72 (13.90 vs 13.18). Nefazodone treatment costs $Can14 less than fluoxetine treatment (estimated discounted lifetime medical costs were $Can50,664 vs $Can50,678) and produces slightly more QALYs (13.90 vs 13.79). In the sensitivity analyses, the cost-effectiveness ratios comparing nefazodone with imipramine ranged from cost saving to $Can17,326 per QALY gained. The cost-effectiveness ratios comparing nefazodone with fluoxetine ranged from cost saving to $Can7327 per QALY gained. The model was most sensitive to assumptions about treatment compliance rates and recurrence rates. The findings suggest that nefazodone may be a cost-effective treatment for major depression compared with imipramine or fluoxetine. The basic findings and conclusions do not change even after modifying model parameters within reasonable ranges.
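    The Markov state-transition logic can be sketched as follows; the states, transition probabilities, costs, and utilities are invented placeholders, with the 5% discount rate matching the abstract:

        import numpy as np

        # States: 0 = well, 1 = depressed, 2 = dead. Annual cycles; numbers invented.
        P = np.array([[0.80, 0.18, 0.02],
                      [0.45, 0.52, 0.03],
                      [0.00, 0.00, 1.00]])
        cost = np.array([800.0, 3500.0, 0.0])    # annual cost per state
        utility = np.array([0.90, 0.60, 0.0])    # QALY weight per state
        discount = 0.05

        state = np.array([0.0, 1.0, 0.0])        # cohort starts depressed
        total_cost = total_qaly = 0.0
        for year in range(40):
            d = (1 + discount) ** -year
            total_cost += d * state @ cost
            total_qaly += d * state @ utility
            state = state @ P                    # advance the cohort one cycle

        print("discounted cost: %.0f, discounted QALYs: %.2f" % (total_cost, total_qaly))

    Comparing two treatments then reduces to running the same cohort under treatment-specific transition probabilities and costs and forming the incremental cost per QALY gained.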

  9. BSM2 Plant-Wide Model construction and comparative analysis with other methodologies for integrated modelling.

    PubMed

    Grau, P; Vanrolleghem, P; Ayesa, E

    2007-01-01

    In this paper, a new methodology for integrated modelling of the WWTP has been used for the construction of the Benchmark Simulation Model No. 2 (BSM2). The transformations approach proposed in this methodology does not require the development of specific transformers to interface unit process models and allows the construction of tailored models for a particular WWTP, guaranteeing mass and charge continuity for the whole model. The BSM2 PWM, constructed as a case study, is evaluated by means of simulations under different scenarios, and its validity in reproducing the water and sludge lines of a WWTP is demonstrated. Furthermore, the advantages that this methodology presents compared to other approaches for integrated modelling are verified in terms of flexibility and coherence.

  10. Finding Groups Using Model-based Cluster Analysis: Heterogeneous Emotional Self-regulatory Processes and Heavy Alcohol Use Risk

    PubMed Central

    Mun, Eun-Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.

    2010-01-01

    Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixtures of multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of non-nested models using the Bayesian Information Criterion (BIC) to identify the optimal number of clusters. The current study clustered 36 young men and women based on their baseline heart rate (HR) and HR variability (HRV), chronic alcohol use, and reasons for drinking. Two cluster groups were identified and labeled High Alcohol Risk and Normative groups. Compared to the Normative group, individuals in the High Alcohol Risk group had higher levels of alcohol use and more strongly endorsed disinhibition and suppression reasons for use. The High Alcohol Risk group showed significant HRV changes in response to positive and negative emotional and appetitive picture cues, compared to neutral cues. In contrast, the Normative group showed a significant HRV change only to negative cues. Findings suggest that individuals with autonomic self-regulatory difficulties may be more susceptible to heavy alcohol use and may use alcohol for emotional regulation. PMID:18331138
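    The model-selection step described here, finite mixtures of multivariate normals compared by BIC, can be sketched with scikit-learn; the data below are simulated stand-ins for the HR/HRV and alcohol-use measures:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        # Two simulated clusters in a 4-variable space (stand-ins for HR, HRV,
        # chronic alcohol use, and drinking-reasons scores).
        X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(2.5, 1, (16, 4))])

        fits = [GaussianMixture(k, covariance_type="full", random_state=0).fit(X)
                for k in range(1, 5)]
        best = min(fits, key=lambda m: m.bic(X))
        print("chosen number of clusters:", best.n_components)
        print("BIC: %.1f" % best.bic(X))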

  11. Continuing data analysis of the AS/E grazing incidence X-ray telescope experiment on the OSO-4 satellite

    NASA Technical Reports Server (NTRS)

    Vaiana, G.; Haggerty, R.; Kahler, S.; Krieger, A.; Landini, M.; Timothy, A.; Webb, D.

    1973-01-01

    The work to correct and extend the calculation of the theoretical solar X-ray spectrum produced during earlier OSO-4 data analysis is reported along with the work to formulate models of active regions, and compare these models with the experimental values. An atlas of solar X-ray photographs is included, and solar X-ray observations are correlated with the solar wind.

  12. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, computational methods for the analysis of ODE models that describe hundreds or thousands of biochemical species and reactions have so far been missing. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
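    The key property, a gradient cost nearly independent of the number of parameters, follows from the standard adjoint equations; in generic notation (not the paper's), for \dot{x} = f(x, \theta) and an objective J(\theta) = \int_0^T g(x, t)\,dt:

        \dot{\lambda} = -\left(\frac{\partial f}{\partial x}\right)^{\top} \lambda
                        - \left(\frac{\partial g}{\partial x}\right)^{\top},
        \qquad \lambda(T) = 0,

        \frac{dJ}{d\theta} = \int_0^T \lambda^{\top} \, \frac{\partial f}{\partial \theta} \, dt,

    so one forward solve plus one backward (adjoint) solve yields the full gradient, instead of one forward sensitivity solve per parameter.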

  13. Monitoring anti-angiogenic therapy in colorectal cancer murine model using dynamic contrast-enhanced MRI: comparing pixel-by-pixel with region of interest analysis.

    PubMed

    Haney, C R; Fan, X; Markiewicz, E; Mustafi, D; Karczmar, G S; Stadler, W M

    2013-02-01

    Sorafenib is a multi-kinase inhibitor that blocks cell proliferation and angiogenesis. It is currently approved for advanced hepatocellular and renal cell carcinomas in humans, where its major mechanism of action is thought to be inhibition of vascular endothelial growth factor and platelet-derived growth factor receptors. The purpose of this study was to determine whether pixel-by-pixel analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is better able to capture the heterogeneous response to Sorafenib in a murine model of colorectal tumor xenografts, as compared with region-of-interest analysis. MRI was performed on a 9.4 T pre-clinical scanner on the initial treatment day. Then either vehicle or drug was gavaged daily (3 days) up to the final image. Four days later, the mice were again imaged. The two-compartment model and the reference tissue method of DCE-MRI were used to analyze the data. The results demonstrated that the contrast agent distribution rate constant (Ktrans) was significantly reduced (p < 0.005) at day 4 of Sorafenib treatment. In addition, the Ktrans of nearby muscle was also reduced after Sorafenib treatment. The pixel-by-pixel analysis (compared to region-of-interest analysis) was better able to capture the heterogeneity of the tumor and the decrease in Ktrans four days after treatment. For both methods, the volume of the extravascular extracellular space did not change significantly after treatment. These results confirm that parameters such as Ktrans could provide a non-invasive biomarker to assess the response to anti-angiogenic therapies such as Sorafenib, but the heterogeneity of response across a tumor requires a more detailed analysis than has typically been undertaken.
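    A pixel-by-pixel fit of a two-compartment (standard Tofts) model, Ct(t) = Ktrans * integral of Cp(u) exp(-kep (t - u)) du, can be sketched as below; the arterial input function and the "pixel" curve are synthetic, and in practice each pixel's concentration time course would be fit the same way:

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(0, 5, 100)                           # minutes
        cp = 5.0 * (np.exp(-0.17 * t) - np.exp(-3.0 * t))    # synthetic AIF

        def tofts(t, ktrans, kep):
            # Ct(t) = Ktrans * (Cp convolved with exp(-kep * t)).
            dt = t[1] - t[0]
            kernel = np.exp(-kep * t)
            return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

        # Synthetic noisy "pixel" curve, then a per-pixel fit.
        rng = np.random.default_rng(4)
        ct = tofts(t, 0.25, 0.60) + rng.normal(0, 0.01, t.size)
        (ktrans, kep), _ = curve_fit(tofts, t, ct, p0=(0.1, 0.5))
        print("Ktrans = %.3f /min, kep = %.3f /min, ve = %.3f"
              % (ktrans, kep, ktrans / kep))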

  14. A value-based medicine cost-utility analysis of idiopathic epiretinal membrane surgery.

    PubMed

    Gupta, Omesh P; Brown, Gary C; Brown, Melissa M

    2008-05-01

    To perform a reference case, cost-utility analysis of epiretinal membrane (ERM) surgery using current literature on outcomes and complications. Computer-based, value-based medicine analysis. Decision analyses were performed under two scenarios: ERM surgery in the better-seeing eye and ERM surgery in the worse-seeing eye. The models applied long-term published data primarily from the Blue Mountains Eye Study and the Beaver Dam Eye Study. Visual acuity and major complications were derived from 25-gauge pars plana vitrectomy studies. Patient-based, time trade-off utility values, Markov modeling, sensitivity analysis, and net present value adjustments were used in the design and calculation of results. Main outcome measures included the number of discounted quality-adjusted life-years (QALYs) gained and dollars spent per QALY gained. ERM surgery in the better-seeing eye compared with observation resulted in a mean gain of 0.755 discounted QALYs (3% annual rate) per patient treated. This model resulted in $4,680 per QALY for this procedure. In sensitivity analysis, varying the utility values moved the result between $6,245 and $3,746/QALY gained, varying medical costs moved it between $3,510 and $5,850/QALY gained, and a higher ERM recurrence rate increased it to $5,524/QALY. ERM surgery in the worse-seeing eye compared with observation resulted in a mean gain of 0.27 discounted QALYs per patient treated. The $/QALY was $16,146, with a range of $20,183 to $12,110 based on sensitivity analyses. Varying utility values moved the result between $21,520 and $12,916/QALY, and a higher ERM recurrence rate increased it to $16,846/QALY. ERM surgery is a very cost-effective procedure when compared with other interventions across medical subspecialties.

  15. An integrated workflow for analysis of ChIP-chip data.

    PubMed

    Weigelt, Karin; Moehle, Christoph; Stempfl, Thomas; Weber, Bernhard; Langmann, Thomas

    2008-08-01

    Although ChIP-chip is a powerful tool for genome-wide discovery of transcription factor target genes, the steps involving raw data analysis, identification of promoters, and correlation with binding sites are still laborious processes. Therefore, we report an integrated workflow for the analysis of promoter tiling arrays with the Genomatix ChipInspector system. We compare this tool with open-source software packages to identify PU.1 regulated genes in mouse macrophages. Our results suggest that ChipInspector data analysis, comparative genomics for binding site prediction, and pathway/network modeling significantly facilitate and enhance whole-genome promoter profiling to reveal in vivo sites of transcription factor-DNA interactions.

  16. On the Multilevel Nature of Meta-Analysis: A Tutorial, Comparison of Software Programs, and Discussion of Analytic Choices.

    PubMed

    Pastor, Dena A; Lazowski, Rory A

    2018-01-01

    The term "multilevel meta-analysis" is encountered not only in applied research studies, but in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term "multilevel meta-analysis" is redundant since all meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike and differences are noted in the output provided and estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.

  17. Verification of finite element analysis of fixed partial denture with in vitro electronic strain measurement.

    PubMed

    Wang, Gaoqi; Zhang, Song; Bian, Cuirong; Kong, Hui

    2016-01-01

    The purpose of the study was to verify a finite element analysis model of a three-unit fixed partial denture with in vitro electronic strain analysis, and to analyze the clinical situation with the verified model. First, strain gauges were attached to the critical areas of a three-unit fixed partial denture, and strain values were measured under a 300 N load perpendicular to the occlusal plane. Secondly, a three-dimensional finite element model in accordance with the electronic strain analysis experiment was constructed from the scanning data, and the strain values obtained by finite element analysis and in vitro measurements were compared. Finally, the clinical failure of the fixed partial denture was evaluated with the verified finite element analysis model. There was mutual agreement and consistency between the finite element analysis results and the experimental data. The finite element analysis revealed that failure will occur in the veneer layer on the buccal surface of the connector under an occlusal force of 570 N. The results indicate that electronic strain analysis is an appropriate and cost-saving method to verify the finite element model. The veneer layer on the buccal surface of the connector is the weakest area of the fixed partial denture. Copyright © 2015 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  18. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, Taylor; Guo, Yi; Veers, Paul

    Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.

  19. The role of building models in the evaluation of heat-related risks

    NASA Astrophysics Data System (ADS)

    Buchin, Oliver; Jänicke, Britta; Meier, Fred; Scherer, Dieter; Ziegler, Felix

    2016-04-01

    Hazard-risk relationships in epidemiological studies are generally based on the outdoor climate, despite the fact that most of humans' lifetime is spent indoors. By coupling indoor and outdoor climates with a building model, the risk concept developed can still be based on the outdoor conditions but also includes exposure to the indoor climate. The influence of non-linear building physics and the impact of air conditioning on heat-related risks can be assessed in a plausible manner using this risk concept. For proof of concept, the proposed risk concept is compared to a traditional risk analysis. As an example, daily and city-wide mortality data of the age group 65 and older in Berlin, Germany, for the years 2001-2010 are used. Four building models with differing complexity are applied in a time-series regression analysis. This study shows that indoor hazard better explains the variability in the risk data compared to outdoor hazard, depending on the kind of building model. Simplified parameter models include the main non-linear effects and are proposed for the time-series analysis. The concept shows that the definitions of heat events, lag days, and acclimatization in a traditional hazard-risk relationship are influenced by the characteristics of the prevailing building stock.

  20. Validation of automatic segmentation of ribs for NTCP modeling.

    PubMed

    Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob

    2016-03-01

    Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time-consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determine the accuracy of automatic rib segmentation in the context of normal tissue complication probability (NTCP) modeling. Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT-derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape, and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST) and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in the radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent, and the Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
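    The agreement analysis used here can be sketched in a few lines; the paired manual/automatic dose parameters below are invented:

        import numpy as np

        rng = np.random.default_rng(5)
        manual = rng.normal(20.0, 5.0, 41)           # invented manual EUD values (Gy)
        auto = manual + rng.normal(0.1, 0.4, 41)     # invented automatic counterparts

        diff = auto - manual
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)                # 95% limits of agreement
        print("bias = %.2f Gy, limits of agreement = [%.2f, %.2f] Gy"
              % (bias, bias - loa, bias + loa))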
