Sample records for comparative model analysis

  1. An Analysis Technique/Automated Tool for Comparing and Tracking Analysis Modes of Different Finite Element Models

    NASA Technical Reports Server (NTRS)

    Towner, Robert L.; Band, Jonathan L.

    2012-01-01

    An analysis technique was developed to compare and track mode shapes for different Finite Element Models. The technique may be applied to a variety of structural dynamics analyses, including model reduction validation (comparing unreduced and reduced models), mode tracking for various parametric analyses (e.g., launch vehicle model dispersion analysis to identify sensitivities to modal gain for Guidance, Navigation, and Control), comparing models of different mesh fidelity (e.g., a coarse model for a preliminary analysis compared to a higher-fidelity model for a detailed analysis) and mode tracking for a structure with properties that change over time (e.g., a launch vehicle from liftoff through end-of-burn, with propellant being expended during the flight). Mode shapes for different models are compared and tracked using several numerical indicators, including traditional Cross-Orthogonality and Modal Assurance Criteria approaches, as well as numerical indicators obtained by comparing modal strain energy and kinetic energy distributions. This analysis technique has been used to reliably identify correlated mode shapes for complex Finite Element Models that would otherwise be difficult to compare using traditional techniques. This improved approach also utilizes an adaptive mode tracking algorithm that allows for automated tracking when working with complex models and/or comparing a large group of models.
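
    The Cross-Orthogonality and Modal Assurance Criterion (MAC) indicators named in this record reduce to a simple matrix computation. A minimal NumPy sketch (the mode-shape matrices are invented for illustration; the record's adaptive tracking algorithm is not reproduced):

    ```python
    import numpy as np

    def mac(phi_a, phi_b):
        """Modal Assurance Criterion between two mode-shape sets.

        phi_a: (n_dof, n_a), phi_b: (n_dof, n_b). Entries near 1 mark
        correlated mode shapes; entries near 0 mark unrelated ones.
        """
        num = np.abs(phi_a.T @ phi_b) ** 2
        den = np.outer((phi_a * phi_a).sum(axis=0),
                       (phi_b * phi_b).sum(axis=0))
        return num / den

    # Hypothetical example: track modes of a perturbed (e.g. reduced) model
    # against the reference model by row-wise argmax of the MAC matrix.
    rng = np.random.default_rng(0)
    phi_ref = rng.standard_normal((100, 5))
    phi_new = phi_ref + 0.05 * rng.standard_normal((100, 5))
    print(mac(phi_ref, phi_new).argmax(axis=1))   # expected: [0 1 2 3 4]
    ```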

  2. Comparative analysis of zonal systems for macro-level crash modeling.

    PubMed

    Cai, Qing; Abdel-Aty, Mohamed; Lee, Jaeyoung; Eluru, Naveen

    2017-06-01

    Macro-level traffic safety analysis has been undertaken at different spatial configurations. However, clear guidelines for selecting an appropriate zonal system for safety analysis are unavailable. In this study, a comparative analysis was conducted to determine the optimal zonal system for macroscopic crash modeling, considering census tracts (CTs), state-wide traffic analysis zones (STAZs), and a newly developed traffic-related zone system labeled traffic analysis districts (TADs). Poisson lognormal models for three crash types (i.e., total, severe, and non-motorized mode crashes) are developed for the three zonal systems, both with and without consideration of spatial autocorrelation. The study proposes a method to compare the modeling performance of the three types of geographic units at different spatial configurations through a grid-based framework. Specifically, the study region is partitioned into grids of various sizes, and the prediction accuracy of the various macro models is assessed within these grids. The model comparison results for all crash types indicated that the models based on TADs consistently offer better performance than the others. Moreover, the models considering spatial autocorrelation outperform those that do not. Based on the modeling results and the motivation for developing the different zonal systems, it is recommended to use CTs for socio-demographic data collection, TAZs for transportation demand forecasting, and TADs for transportation safety planning. The findings from this study can help practitioners select appropriate zonal systems for traffic crash modeling, which in turn supports the development of more efficient policies to enhance transportation safety. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.

  3. Comparative analysis of used car price evaluation models

    NASA Astrophysics Data System (ADS)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles; however, little work has compared the use of different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records from throughout China for an empirical, thorough comparison of two algorithms: linear regression and random forest. These two algorithms are used to predict used car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.
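
    The linear-regression-versus-random-forest comparison described above can be replicated in outline with scikit-learn. The synthetic records below stand in for the paper's (non-public) dealing data:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the dealing records: price depends nonlinearly
    # on age and mileage, linearly on engine size.
    rng = np.random.default_rng(1)
    n = 5000
    age = rng.uniform(0, 15, n)
    mileage = rng.uniform(0, 200_000, n)
    engine = rng.uniform(1.0, 4.0, n)
    price = (30_000 * np.exp(-0.15 * age) - 0.03 * mileage
             + 4_000 * engine + rng.normal(0, 1_000, n))
    X = np.column_stack([age, mileage, engine])

    X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)
    for est in (LinearRegression(), RandomForestRegressor(random_state=0)):
        est.fit(X_tr, y_tr)
        mae = mean_absolute_error(y_te, est.predict(X_te))
        print(f"{type(est).__name__}: MAE = {mae:.0f}")
    # The forest captures the nonlinear depreciation the linear fit misses.
    ```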

  4. MultiMetEval: Comparative and Multi-Objective Analysis of Genome-Scale Metabolic Models

    PubMed Central

    Gevorgyan, Albert; Kierzek, Andrzej M.; Breitling, Rainer; Takano, Eriko

    2012-01-01

    Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the context of multiple cellular objectives. Here, we present the user-friendly software framework Multi-Metabolic Evaluator (MultiMetEval), built upon SurreyFBA, which allows the user to compose collections of metabolic models that together can be subjected to flux balance analysis. Additionally, MultiMetEval implements functionalities for multi-objective analysis by calculating the Pareto front between two cellular objectives. Using a previously generated dataset of 38 actinobacterial genome-scale metabolic models, we show how these approaches can lead to exciting novel insights. Firstly, after incorporating several pathways for the biosynthesis of natural products into each of these models, comparative flux balance analysis predicted that species like Streptomyces that harbour the highest diversity of secondary metabolite biosynthetic gene clusters in their genomes do not necessarily have the metabolic network topology most suitable for compound overproduction. Secondly, multi-objective analysis of biomass production and natural product biosynthesis in these actinobacteria shows that the well-studied occurrence of discrete metabolic switches during the change of cellular objectives is inherent to their metabolic network architecture. Comparative and multi-objective modelling can lead to insights that could not be obtained by normal flux balance analyses. MultiMetEval provides a powerful platform that makes these analyses straightforward for biologists. Sources and binaries of MultiMetEval are freely available from https://github.com/PiotrZakrzewski/MetEval/downloads. PMID:23272111

  5. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    PubMed

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
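
    The "model-free" analysis mentioned here rests on numerical deconvolution of the tissue curve by the local arterial input function, commonly via a truncated SVD. A self-contained sketch on synthetic curves (sampling interval, curve shapes, and truncation threshold are all assumed, not taken from the paper):

    ```python
    import numpy as np

    dt = 0.3                                  # sampling interval, s (assumed)
    time = np.arange(0, 12, dt)
    aif = np.exp(-((time - 3.0) ** 2) / 0.8)  # toy local arterial input fn
    residue = np.exp(-time / 1.5)             # true residue function
    cbf_true = 0.6
    tissue = cbf_true * dt * np.convolve(aif, residue)[: time.size]

    # Lower-triangular convolution matrix A, so tissue = A @ (CBF * residue);
    # invert with a truncated SVD to limit noise amplification.
    n = time.size
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > 0.1 * s[0], 1.0 / s, 0.0)  # assumed threshold
    estimate = Vt.T @ (s_inv * (U.T @ tissue))
    print("recovered CBF:", estimate.max())         # close to cbf_true
    ```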

  6. Comparative dynamic analysis of the full Grossman model.

    PubMed

    Ried, W

    1998-08-01

    The paper applies the method of comparative dynamic analysis to the full Grossman model. For a particular class of solutions, it derives the equations implicitly defining the complete trajectories of the endogenous variables. Relying on the concept of Frisch decision functions, the impact of any parametric change on an endogenous variable can be decomposed into a direct and an indirect effect. The focus of the paper is on marginal changes in the rate of health capital depreciation. It also analyses the impact of either initial financial wealth or the initial stock of health capital. While the direction of most effects remains ambiguous in the full model, the assumption of a zero consumption benefit of health is sufficient to obtain a definite sign for any direct or indirect effect.

  7. Comparative analysis of existing models for power-grid synchronization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takashi; Motter, Adilson E.

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
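
    The common framework the authors derive treats the grid as a network of second-order phase oscillators with forcing and damping. A minimal sketch of such a network (three nodes; coupling, power injections, inertia and damping values are illustrative, not taken from the paper):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    K = np.ones((3, 3)) - np.eye(3)       # all-to-all coupling, strength 1
    P = np.array([0.2, 0.1, -0.3])        # net power injections (sum to zero)
    M, D = 1.0, 0.5                       # inertia and damping (uniform here)

    def rhs(t, y):
        theta, omega = y[:3], y[3:]
        coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        return np.concatenate([omega, (P - D * omega + coupling) / M])

    sol = solve_ivp(rhs, (0, 50), np.zeros(6), rtol=1e-8)
    print(sol.y[3:, -1])  # all frequencies settle to the synchronous value (0)
    ```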

  8. Comparative Statistical Analysis of Auroral Models

    DTIC Science & Technology

    2012-03-22

    ...models have been extensively used for estimating GPS and other communication satellite disturbances (Newell et al., 2010a). The auroral oval ... models predict changes in the auroral oval in response to various geomagnetic conditions. In 2010, Newell et al. conducted a comparative study of ...

  9. Comparing models for perfluorooctanoic acid pharmacokinetics using Bayesian analysis.

    PubMed

    Wambaugh, John F; Barton, Hugh A; Setzer, R Woodrow

    2008-12-01

    Selecting the appropriate pharmacokinetic (PK) model given the available data is investigated for perfluorooctanoic acid (PFOA), which has been widely analyzed with an empirical, one-compartment model. This research examined the results of experiments [Kemper R. A., DuPont Haskell Laboratories, USEPA Administrative Record AR-226.1499 (2003)] that administered single oral or iv doses of PFOA to adult male and female rats. PFOA concentration was observed over time; in plasma for some animals and in fecal and urinary excretion for others. There were four rats per dose group, for a total of 36 males and 36 females. Assuming that the PK parameters for each individual within a gender were drawn from the same, biologically varying population, plasma and excretion data were jointly analyzed using a hierarchical framework to separate uncertainty due to measurement error from actual biological variability. Bayesian analysis using Markov Chain Monte Carlo (MCMC) provides tools to perform such an analysis as well as quantitative diagnostics to evaluate and discriminate between models. Starting from a one-compartment PK model with separate clearances to urine and feces, the model was incrementally expanded using Bayesian measures to assess whether the expansion was supported by the data. PFOA excretion is sexually dimorphic in rats; male rats have bi-phasic elimination that is roughly 40 times slower than that of the females, which appear to have a single elimination phase. The male and female data were analyzed separately, keeping only the parameters describing the measurement process in common. For male rats, including excretion data initially decreased certainty in the one-compartment parameter estimates compared to an analysis using plasma data only. Allowing a third, unspecified clearance improved agreement and increased certainty when all the data were used; however, a significant amount of eliminated PFOA was estimated to be missing from the excretion data. Adding an additional…
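
    The starting point of the analysis, a one-compartment model with separate urinary and fecal clearances, illustrates why plasma data alone are insufficient: only the total elimination rate is identifiable from plasma. A sketch with invented data (not Kemper's measurements):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # One-compartment model with first-order clearances to urine (k_u) and
    # feces (k_f): dA/dt = -(k_u + k_f) A, so plasma concentration decays as
    # a single exponential in k_total = k_u + k_f. Plasma data alone cannot
    # separate the two routes, which is why the authors jointly analyse
    # plasma and excretion data in a hierarchical Bayesian framework.
    def plasma_conc(t, dose_over_V, k_total):
        return dose_over_V * np.exp(-k_total * t)

    t = np.linspace(0, 30, 12)                              # days (assumed)
    rng = np.random.default_rng(2)
    obs = plasma_conc(t, 10.0, 0.08) * rng.lognormal(0.0, 0.1, t.size)

    popt, _ = curve_fit(plasma_conc, t, obs, p0=(5.0, 0.01))
    print("dose/V = %.2f, k_total = %.3f per day" % tuple(popt))
    ```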

  10. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model.

    PubMed

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT) obtained images over plaster models for the assessment of mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were derived from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Statistically significant results were obtained on data comparison between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster model was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis.
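
    For reference, the Tanaka-Johnston prediction used in the study is a fixed linear rule (half the summed widths of the four mandibular incisors plus a constant), and the CBCT-versus-plaster comparison is a paired test. A sketch with hypothetical measurements (not the study's data):

    ```python
    from scipy.stats import ttest_rel

    def tanaka_johnston(lower_incisor_sum_mm, arch="lower"):
        """Predicted canine + premolar segment width per quadrant (mm):
        half the summed widths of the four mandibular incisors, plus
        10.5 mm (lower arch) or 11.0 mm (upper arch)."""
        return lower_incisor_sum_mm / 2.0 + (10.5 if arch == "lower" else 11.0)

    print(tanaka_johnston(23.0), tanaka_johnston(23.0, arch="upper"))

    # Hypothetical paired quadrant measurements (mm), CBCT vs plaster:
    cbct    = [21.0, 21.4, 20.8, 21.9, 21.2]
    plaster = [22.4, 22.6, 22.1, 23.0, 22.5]
    stat, p = ttest_rel(cbct, plaster)
    print(f"paired t = {stat:.2f}, p = {p:.4f}")  # small p: systematic offset
    ```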

  11. Wellness Model of Supervision: A Comparative Analysis

    ERIC Educational Resources Information Center

    Lenz, A. Stephen; Sangganjanavanich, Varunee Faii; Balkin, Richard S.; Oliver, Marvarene; Smith, Robert L.

    2012-01-01

    This quasi-experimental study compared the effectiveness of the Wellness Model of Supervision (WELMS; Lenz & Smith, 2010) with alternative supervision models for developing wellness constructs, total personal wellness, and helping skills among counselors-in-training. Participants were 32 master's-level counseling students completing their…

  12. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model

    PubMed Central

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    Aims and Objective: The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT) obtained images over plaster models for the assessment of mixed dentition analysis. Materials and Methods: Thirty CBCT-derived images and thirty plaster models were derived from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Results: Statistically significant results were obtained on data comparison between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster model was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. Conclusion: CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis. PMID:28852639

  13. Comparative Analysis of Models of the Earth's Gravity: 3. Accuracy of Predicting EAS Motion

    NASA Astrophysics Data System (ADS)

    Kuznetsov, E. D.; Berland, V. E.; Wiebe, Yu. S.; Glamazda, D. V.; Kajzer, G. T.; Kolesnikov, V. I.; Khremli, G. P.

    2002-05-01

    This paper continues a comparative analysis of modern satellite models of the Earth's gravity which we started in [6, 7]. In the cited works, the uniform norms of spherical functions were compared with their gradients for individual harmonics of the geopotential expansion [6] and the potential differences were compared with the gravitational accelerations obtained in various models of the Earth's gravity [7]. In practice, it is important to know how consistently the EAS motion is represented by various geopotential models. Unless otherwise stated, a model version in which the equations of motion are written using the classical Encke scheme and integrated together with the variation equations by the implicit one-step Everhart's algorithm [1] was used. When calculating coordinates and velocities on the integration step (at given instants of time), the approximate Everhart formula was employed.

  14. Comparative Proteomic Analysis of Two Uveitis Models in Lewis Rats.

    PubMed

    Pepple, Kathryn L; Rotkis, Lauren; Wilson, Leslie; Sandt, Angela; Van Gelder, Russell N

    2015-12-01

    Inflammation generates changes in the protein constituents of the aqueous humor. Proteins that change in multiple models of uveitis may be good biomarkers of disease or targets for therapeutic intervention. The present study was conducted to identify differentially-expressed proteins in the inflamed aqueous humor. Two models of uveitis were induced in Lewis rats: experimental autoimmune uveitis (EAU) and primed mycobacterial uveitis (PMU). Differential gel electrophoresis was used to compare naïve and inflamed aqueous humor. Differentially-expressed proteins were separated by using 2-D gel electrophoresis and excised for identification with matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF). Expression of select proteins was verified by Western blot analysis in both the aqueous and vitreous. The inflamed aqueous from both models demonstrated an increase in total protein concentration when compared to naïve aqueous. Calprotectin, a heterodimer of S100A8 and S100A9, was increased in the aqueous in both PMU and EAU. In the vitreous, S100A8 and S100A9 were preferentially elevated in PMU. Apolipoprotein E was elevated in the aqueous of both uveitis models but was preferentially elevated in EAU. Beta-B2-crystallin levels decreased in the aqueous and vitreous of EAU but not PMU. The proinflammatory molecules S100A8 and S100A9 were elevated in both models of uveitis but may play a more significant role in PMU than EAU. The neuroprotective protein β-B2-crystallin was found to decline in EAU. Therapies to modulate these proteins in vivo may be good targets in the treatment of ocular inflammation.

  15. The role of empathy and emotional intelligence in nurses' communication attitudes using regression models and fuzzy-set qualitative comparative analysis models.

    PubMed

    Giménez-Espert, María Del Carmen; Prado-Gascó, Vicente Javier

    2018-03-01

    To analyse the link between empathy and emotional intelligence as predictors of nurses' attitudes towards communication, while comparing the contribution of emotional aspects and attitudinal elements to potential behaviour. Nurses' attitudes towards communication, empathy and emotional intelligence are key skills for nurses involved in patient care. There are currently no studies analysing this link, and its investigation is needed because attitudes may influence communication behaviours. Correlational study. To attain this goal, self-reported instruments (attitudes towards communication of nurses, the Trait Emotional Meta-Mood Scale for trait emotional intelligence, and the Jefferson Scale of Nursing Empathy) were collected from 460 nurses between September 2015-February 2016. Two different analytical methodologies were used: traditional regression models and fuzzy-set qualitative comparative analysis models. The results of the regression model suggest that the cognitive dimensions of attitude are a significant and positive predictor of the behavioural dimension. The perspective-taking dimension of empathy and the emotional-clarity dimension of emotional intelligence were significant positive predictors of the dimensions of attitudes towards communication, except for the affective dimension (for which the association was negative). The results of the fuzzy-set qualitative comparative analysis models confirm that the combination of high levels of the cognitive dimension of attitudes, perspective-taking and emotional clarity explained high levels of the behavioural dimension of attitude. Empathy and emotional intelligence are predictors of nurses' attitudes towards communication, and the cognitive dimension of attitude is a good predictor of the behavioural dimension of attitudes towards communication of nurses in both regression models and fuzzy-set qualitative comparative analysis. In general, the fuzzy-set qualitative comparative analysis models appear…

  16. Multilevel Structural Equation Models for the Analysis of Comparative Data on Educational Performance

    ERIC Educational Resources Information Center

    Goldstein, Harvey; Bonnet, Gerard; Rocher, Thierry

    2007-01-01

    The Programme for International Student Assessment comparative study of reading performance among 15-year-olds is reanalyzed using statistical procedures that allow the full complexity of the data structures to be explored. The article extends existing multilevel factor analysis and structural equation models and shows how this can extract richer…

  17. Bayesian models for comparative analysis integrating phylogenetic uncertainty.

    PubMed

    de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P

    2012-06-28

    Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses.

  18. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    PubMed Central

    2012-01-01

    Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for…

  19. Genome-scale metabolic modeling of Mucor circinelloides and comparative analysis with other oleaginous species.

    PubMed

    Vongsangnak, Wanwipa; Klanchui, Amornpan; Tawornsamretkit, Iyarest; Tatiyaborwornchai, Witthawin; Laoteng, Kobkul; Meechai, Asawin

    2016-06-01

    We present a novel genome-scale metabolic model iWV1213 of Mucor circinelloides, which is an oleaginous fungus for industrial applications. The model contains 1213 genes, 1413 metabolites and 1326 metabolic reactions across different compartments. We demonstrate that iWV1213 is able to accurately predict the growth rates of M. circinelloides on various nutrient sources and culture conditions using Flux Balance Analysis and Phenotypic Phase Plane analysis. Comparative analysis of three oleaginous genome-scale models, including M. circinelloides (iWV1213), Mortierella alpina (iCY1106) and Yarrowia lipolytica (iYL619_PCP) revealed that iWV1213 possesses a higher number of genes involved in carbohydrate, amino acid, and lipid metabolisms that might contribute to its versatility in nutrient utilization. Moreover, the identification of unique and common active reactions among the Zygomycetes oleaginous models using Flux Variability Analysis unveiled a set of gene/enzyme candidates as metabolic engineering targets for cellular improvement. Thus, iWV1213 offers a powerful metabolic engineering tool for multi-level omics analysis, enabling strain optimization as a cell factory platform of lipid-based production. Copyright © 2016 Elsevier B.V. All rights reserved.
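
    Flux Balance Analysis, the workhorse method of this record, is a linear program: maximize an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. A toy three-reaction network with SciPy (the network is invented; genome-scale models like iWV1213 have over a thousand reactions):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Reactions: R1 (uptake of A), R2 (A -> B), R3 (B -> biomass, maximised).
    S = np.array([[1, -1,  0],    # steady-state mass balance for metabolite A
                  [0,  1, -1]])   # steady-state mass balance for metabolite B
    bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake capped at 10 units
    c = [0, 0, -1.0]              # linprog minimises, so negate the objective

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print("optimal biomass flux:", -res.fun)  # 10.0, set by the uptake bound
    ```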

  20. Comparative Proteomic Analysis of Two Uveitis Models in Lewis Rats

    PubMed Central

    Pepple, Kathryn L.; Rotkis, Lauren; Wilson, Leslie; Sandt, Angela; Van Gelder, Russell N.

    2015-01-01

    Purpose Inflammation generates changes in the protein constituents of the aqueous humor. Proteins that change in multiple models of uveitis may be good biomarkers of disease or targets for therapeutic intervention. The present study was conducted to identify differentially-expressed proteins in the inflamed aqueous humor. Methods Two models of uveitis were induced in Lewis rats: experimental autoimmune uveitis (EAU) and primed mycobacterial uveitis (PMU). Differential gel electrophoresis was used to compare naïve and inflamed aqueous humor. Differentially-expressed proteins were separated by using 2-D gel electrophoresis and excised for identification with matrix-assisted laser desorption/ionization–time of flight (MALDI-TOF). Expression of select proteins was verified by Western blot analysis in both the aqueous and vitreous. Results The inflamed aqueous from both models demonstrated an increase in total protein concentration when compared to naïve aqueous. Calprotectin, a heterodimer of S100A8 and S100A9, was increased in the aqueous in both PMU and EAU. In the vitreous, S100A8 and S100A9 were preferentially elevated in PMU. Apolipoprotein E was elevated in the aqueous of both uveitis models but was preferentially elevated in EAU. Beta-B2–crystallin levels decreased in the aqueous and vitreous of EAU but not PMU. Conclusions The proinflammatory molecules S100A8 and S100A9 were elevated in both models of uveitis but may play a more significant role in PMU than EAU. The neuroprotective protein β-B2–crystallin was found to decline in EAU. Therapies to modulate these proteins in vivo may be good targets in the treatment of ocular inflammation. PMID:26747776

  1. Hidden Markov models for evolution and comparative genomics analysis.

    PubMed

    Bykova, Nadezda A; Favorov, Alexander V; Mironov, Andrey A

    2013-01-01

    The problem of reconstruction of ancestral states given a phylogeny and data from extant species arises in a wide range of biological studies. The continuous-time Markov model for the evolution of discrete states is generally used for the reconstruction of ancestral states. We modify this model to account for the case when the states of the extant species are uncertain. This situation appears, for example, if the states for extant species are predicted by some program and thus are known only with some level of reliability; this is common in the bioinformatics field. The main idea is formulation of the problem as a hidden Markov model on a tree (tree HMM, tHMM), where the basic continuous-time Markov model is expanded with the introduction of emission probabilities of observed data (e.g. prediction scores) for each underlying discrete state. Our tHMM decoding algorithm allows us to predict states at the ancestral nodes as well as to refine states at the leaves on the basis of quantitative comparative genomics. The test on the simulated data shows that the tHMM approach applied to the continuous variable reflecting the probabilities of the states (i.e. prediction score) appears to be more accurate than the reconstruction from the discrete state assignment defined by the best score threshold. We provide examples of applying our model to the evolutionary analysis of N-terminal signal peptides and transcription factor binding sites in bacteria. The program is freely available at http://bioinf.fbb.msu.ru/~nadya/tHMM and via web-service at http://bioinf.fbb.msu.ru/treehmmweb.
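
    The tHMM likelihood combines Felsenstein's pruning over the tree with per-leaf emission probabilities for the observed scores. A two-state sketch on a three-leaf tree (the rate matrix, branch lengths, and emission vectors are invented for illustration, not taken from the paper):

    ```python
    import numpy as np
    from scipy.linalg import expm

    Q = np.array([[-0.3,  0.3],
                  [ 0.5, -0.5]])           # CTMC rate matrix (assumed)
    root_prior = np.array([0.625, 0.375])  # stationary distribution of Q

    def combine(children):
        """Felsenstein pruning: children is a list of (branch_length,
        partial_likelihood) pairs; returns this node's partial likelihood."""
        L = np.ones(2)
        for t, child in children:
            L *= expm(Q * t) @ child       # sum over the child's states
        return L

    # Emission probabilities P(observed score | hidden state) at each leaf;
    # these make it a *hidden* Markov model on the tree.
    leaf1 = np.array([0.9, 0.2])
    leaf2 = np.array([0.8, 0.3])
    leaf3 = np.array([0.1, 0.7])

    # Tree: ((leaf1:0.1, leaf2:0.2):0.3, leaf3:0.4)
    inner = combine([(0.1, leaf1), (0.2, leaf2)])
    root = combine([(0.3, inner), (0.4, leaf3)])
    print("likelihood of observations:", root_prior @ root)
    ```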

  2. Systems thinking, the Swiss Cheese Model and accident analysis: a comparative systemic analysis of the Grayrigg train derailment using the ATSB, AcciMap and STAMP models.

    PubMed

    Underwood, Peter; Waterson, Patrick

    2014-07-01

    The Swiss Cheese Model (SCM) is the most popular accident causation model and is widely used throughout various industries. A debate exists in the research literature over whether the SCM remains a viable tool for accident analysis. Critics of the model suggest that it provides a sequential, oversimplified view of accidents. Conversely, proponents suggest that it embodies the concepts of systems theory, as per the contemporary systemic analysis techniques. The aim of this paper was to consider whether the SCM can provide a systems thinking approach and remain a viable option for accident analysis. To achieve this, the train derailment at Grayrigg was analysed with an SCM-based model (the ATSB accident investigation model) and two systemic accident analysis methods (AcciMap and STAMP). The analysis outputs and usage of the techniques were compared. The findings of the study showed that each model applied the systems thinking approach. However, the ATSB model and AcciMap graphically presented their findings in a more succinct manner, whereas STAMP more clearly embodied the concepts of systems theory. The study suggests that, whilst the selection of an analysis method is subject to trade-offs that practitioners and researchers must make, the SCM remains a viable model for accident analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Comparative analysis of Goodwin's business cycle models

    NASA Astrophysics Data System (ADS)

    Antonova, A. O.; Reznik, S.; Todorov, M. D.

    2016-10-01

    We compare the behavior of solutions of Goodwin's business cycle equation in the form of a neutral delay differential equation with fixed delay (NDDE model) and in the form of differential equations of 3rd, 4th and 5th order (ODE models). Such ODE models (Taylor series expansions of the NDDE in powers of θ) are proposed by N. Dharmaraj and K. Vela Velupillai [6] for investigation of the short periodic sawtooth oscillations in the NDDE. We show that the ODEs of 3rd, 4th and 5th order may approximate the asymptotic behavior of only the main Goodwin mode, but not the sawtooth modes. If the order of the Taylor series expansion exceeds 5, the approximate ODE becomes unstable independently of the time lag θ.
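
    The ODE models referred to here come from Taylor-expanding the delayed state in powers of θ; SymPy reproduces that expansion directly (output is printed in SymPy's Subs/Derivative notation):

    ```python
    import sympy as sp

    t, theta = sp.symbols("t theta", positive=True)
    x = sp.Function("x")

    # Expanding x(t - theta) in powers of theta is the step that converts
    # the NDDE into the 3rd-/4th-/5th-order ODE models compared above:
    print(x(t - theta).series(theta, 0, 4))
    # equivalent to x(t) - theta*x'(t) + theta**2/2*x''(t)
    #               - theta**3/6*x'''(t) + O(theta**4)
    ```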

  4. Modelling submerged coastal environments: Remote sensing technologies, techniques, and comparative analysis

    NASA Astrophysics Data System (ADS)

    Dillon, Chris

    Building upon the remote sensing and GIS littoral zone characterization methodologies of the past decade, a series of loosely coupled models was developed to test, compare and synthesize multi-beam SONAR (MBES), Airborne LiDAR Bathymetry (ALB), and satellite-based optical data sets in the Gulf of St. Lawrence, Canada, eco-region. Bathymetry and relative intensity metrics for the MBES and ALB data sets were run through a quantitative and qualitative comparison, which included outputs from the Benthic Terrain Modeller (BTM) tool. Substrate classification based on the relative intensities of the respective data sets and textural indices generated using grey level co-occurrence matrices (GLCM) were investigated. A spatial modelling framework built in ArcGIS™ for the derivation of bathymetric data sets from optical satellite imagery was also tested for proof of concept and validation. Where possible, efficiencies and semi-automation for repeatable testing were achieved using ArcGIS™ ModelBuilder. The findings from this study could assist future decision makers in the field of coastal management and hydrographic studies. Keywords: Seafloor terrain characterization, Benthic Terrain Modeller (BTM), Multi-beam SONAR, Airborne LiDAR Bathymetry, Satellite Derived Bathymetry, ArcGIS™ ModelBuilder, Textural analysis, Substrate classification.

  5. How to Construct More Accurate Student Models: Comparing and Optimizing Knowledge Tracing and Performance Factor Analysis

    ERIC Educational Resources Information Center

    Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.

    2011-01-01

    Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…
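
    Of the two approaches compared, Knowledge Tracing has the more compact update rule: a Bayesian posterior over "the student knows the skill" followed by a learning transition. A sketch with illustrative parameter values (the guess/slip/learn rates are not from the paper):

    ```python
    def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
        """One Knowledge Tracing step: Bayesian posterior on 'knows the
        skill' given the response, then the learning transition."""
        if correct:
            evidence = p_know * (1 - slip) + (1 - p_know) * guess
            posterior = p_know * (1 - slip) / evidence
        else:
            evidence = p_know * slip + (1 - p_know) * (1 - guess)
            posterior = p_know * slip / evidence
        return posterior + (1 - posterior) * learn

    p = 0.3                        # prior probability the skill is known
    for outcome in (1, 1, 0, 1):   # a student's responses on successive items
        p = bkt_update(p, outcome)
        print(round(p, 3))
    ```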

  6. Comparative analysis of bleeding risk by the location and shape of arachnoid cysts: a finite element model analysis.

    PubMed

    Lee, Chang-Hyun; Han, In Seok; Lee, Ji Yeoun; Phi, Ji Hoon; Kim, Seung-Ki; Kim, Young-Eun; Wang, Kyu-Chang

    2017-01-01

    Although arachnoid cysts (ACs) are observed in various locations, mainly sylvian ACs are regarded as being associated with bleeding. The reason for this selective association of sylvian ACs with bleeding is not well understood. This study investigates the effect of the location and shape of ACs on the risk of bleeding. A previously developed finite element model of the head/brain was modified to produce models of sylvian, suprasellar, and posterior fossa ACs. A spherical AC was placed at each location to compare the effect of AC location. Bowl-shaped and oval-shaped AC models were developed to compare the effect of shape. The shear force on the spot-weld elements (SFSW) was measured between the dura and the outer wall of the ACs, or the comparable arachnoid membrane in the normal model. All AC models revealed higher SFSW than the comparable normal models. By location, the sylvian AC displayed the highest SFSW for frontal and lateral impacts. By shape, AC models with a small outer wall showed higher SFSW than large-wall models in the sylvian area, and lower SFSW than large-wall models in the posterior fossa. In regression analysis, the presence of an AC was the only independent risk factor for bleeding. The bleeding mechanism of ACs is very complex, and the risk quantification failed to show a significant role of the location and shape of ACs. The presence of an AC increases shear force under impact conditions and may be a risk factor for bleeding; the sylvian location of an AC may not confer additional bleeding risk.

  7. Computer-aided modelling and analysis of PV systems: a comparative study.

    PubMed

    Koukouvaos, Charalambos; Kandris, Dionisis; Samarakou, Maria

    2014-01-01

    Modern scientific advances have enabled remarkable efficacy for photovoltaic systems with regard to the exploitation of solar energy, boosting them into a rapidly growing position among the systems developed for the production of renewable energy. However, in many cases the design, analysis, and control of photovoltaic systems are tasks which are quite complex and thus difficult to carry out. In order to cope with problems of this kind, appropriate software tools have been developed, either as standalone products or as parts of general-purpose software platforms used to model and simulate the generation, transmission, and distribution of solar energy. The utilization of such software tools may be extremely helpful for the successful performance evaluation of energy systems with maximum accuracy and minimum cost in time and effort. The work presented in this paper aims, on a first level, at the performance analysis of various configurations of photovoltaic systems through computer-aided modelling. On a second level, it provides a comparative evaluation of the credibility of two of the most advanced graphical programming environments, namely Simulink and LabVIEW, with regard to their application in photovoltaic systems.

  8. Computer-Aided Modelling and Analysis of PV Systems: A Comparative Study

    PubMed Central

    Koukouvaos, Charalambos

    2014-01-01

    Modern scientific advances have enabled remarkable efficacy for photovoltaic systems with regard to the exploitation of solar energy, boosting them into a rapidly growing position among the systems developed for the production of renewable energy. However, in many cases the design, analysis, and control of photovoltaic systems are tasks which are quite complex and thus difficult to carry out. In order to cope with problems of this kind, appropriate software tools have been developed, either as standalone products or as parts of general-purpose software platforms used to model and simulate the generation, transmission, and distribution of solar energy. The utilization of such software tools may be extremely helpful for the successful performance evaluation of energy systems with maximum accuracy and minimum cost in time and effort. The work presented in this paper aims, on a first level, at the performance analysis of various configurations of photovoltaic systems through computer-aided modelling. On a second level, it provides a comparative evaluation of the credibility of two of the most advanced graphical programming environments, namely Simulink and LabVIEW, with regard to their application in photovoltaic systems. PMID:24772007

  9. Comparative Analysis of InSAR Digital Surface Models for Test Area Bucharest

    NASA Astrophysics Data System (ADS)

    Dana, Iulia; Poncos, Valentin; Teleaga, Delia

    2010-03-01

    This paper presents the results of the interferometric processing of ERS Tandem, ENVISAT and TerraSAR-X data for digital surface model (DSM) generation. The selected test site is Bucharest (Romania), a built-up area characterized by the usual complex urban pattern: a mixture of buildings with different height levels, paved roads, vegetation, and water bodies. First, the DSMs were generated following the standard interferometric processing chain. Then, the accuracy of the DSMs was analyzed against the SPOT HRS model (30 m resolution at the equator). A DSM derived by optical stereoscopic processing of SPOT 5 HRG data and the SRTM DSM (3 arc seconds resolution at the equator) were also included in the comparative analysis.

  10. Comparative analysis of numerical models of pipe handling equipment used in offshore drilling applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pawlus, Witold, E-mail: witold.p.pawlus@ieee.org; Ebbesen, Morten K.; Hansen, Michael R.

    Design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of the proper modeling methodology becomes not obvious and – in some cases – troublesome. Therefore, we present a comparative analysis of two popular approaches used in modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of the offshore vertical pipe handling machine is selected as a case study for which both models are created. In contrast to some other works, the current paper shows verification of both systems by benchmarking their simulation results against each other. Such criteria as modeling effort and results accuracy are evaluated to assess which modeling strategy is the most suitable given its eventual application.

  11. Measurement error in time-series analysis: a simulation study comparing modelled and monitored data.

    PubMed

    Butland, Barbara K; Armstrong, Ben; Atkinson, Richard W; Wilkinson, Paul; Heal, Mathew R; Doherty, Ruth M; Vieno, Massimo

    2013-11-13

    Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Statistical simulations were based on a theoretical area of 4 regions, each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003-2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban log_e(daily 1-hour maximum NO2). When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background log_e(NO2) and 38% for rural log_e(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural log_e(NO2) but more marked for urban log_e(NO2). Even if correlations between…
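
    The attenuation mechanism studied here is easy to reproduce in miniature: classical additive error in the exposure biases a Poisson time-series coefficient toward zero. A simulation sketch (design values are illustrative, not the paper's):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n_days = 3 * 365                              # a 3-year daily series
    true_x = rng.normal(50, 10, n_days)           # "true" pollutant levels
    y = rng.poisson(np.exp(3.0 + 0.01 * true_x))  # daily mortality counts

    for error_sd in (0.0, 5.0, 15.0):             # classical additive error
        x_obs = true_x + rng.normal(0, error_sd, n_days)
        fit = sm.GLM(y, sm.add_constant(x_obs),
                     family=sm.families.Poisson()).fit()
        print(f"error sd {error_sd:4.1f} -> beta_hat = {fit.params[1]:.4f}")
    # beta_hat shrinks from ~0.010 toward zero as the error variance grows.
    ```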

  12. Comparative Analysis of River Flow Modelling by Using Supervised Learning Technique

    NASA Astrophysics Data System (ADS)

    Ismail, Shuhaida; Mohamad Pandiahi, Siraj; Shabri, Ani; Mustapha, Aida

    2018-04-01

    The goal of this research is to investigate the efficiency of three supervised learning algorithms for forecasting the monthly river flow of the Indus River in Pakistan, spread over 550 square miles or 1800 square kilometres. The algorithms include the Least Square Support Vector Machine (LSSVM), Artificial Neural Network (ANN) and Wavelet Regression (WR). Each model was used individually to forecast the monthly river flow, and the accuracy of all the models was then compared. The results were statistically analysed, and this comparison showed that the LSSVM model is more precise in monthly river flow forecasting: LSSVM had the highest r, with a value of 0.934, compared to the other models. This indicates that LSSVM is more accurate and efficient than the ANN and WR models.
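
    LSSVM is not available in scikit-learn, so the sketch below uses standard epsilon-SVR as a stand-in to show the general workflow: lagged-flow features, a train/test split, and the correlation coefficient r reported in the record (the data and lag choices are invented):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Synthetic seasonal "river flow": 20 years of monthly values.
    rng = np.random.default_rng(4)
    months = np.arange(240)
    flow = 100 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 8, 240)

    X = np.column_stack([flow[11:-1], flow[:-12]])  # lag-1 and lag-12 features
    y = flow[12:]                                   # flow at month t

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    model.fit(X[:200], y[:200])                     # train on the first years
    pred = model.predict(X[200:])
    r = np.corrcoef(pred, y[200:])[0, 1]
    print(f"out-of-sample r = {r:.3f}")  # the record reports 0.934 for LSSVM
    ```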

  13. Comparative Analysis of Modeling Studies on China's Future Energy and Emissions Outlook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Nina; Zhou, Nan; Fridley, David

    The past decade has seen the development of various scenarios describing long-term patterns of future Greenhouse Gas (GHG) emissions, with each new approach adding insights to our understanding of the changing dynamics of energy consumption and aggregate future energy trends. With the recent growing focus on China's energy use and emission mitigation potential, a range of Chinese outlook models have been developed across different institutions, including in China's Energy Research Institute's 2050 China Energy and CO2 Emissions Report, McKinsey & Co's China's Green Revolution report, the UK Sussex Energy Group and Tyndall Centre's China's Energy Transition report, and the China-specific section of the IEA World Energy Outlook 2009. At the same time, the China Energy Group at Lawrence Berkeley National Laboratory (LBNL) has developed a bottom-up, end-use energy model for China with scenario analysis of energy and emission pathways out to 2050. A robust and credible energy and emission model will play a key role in informing policymakers by assessing efficiency policy impacts and understanding the dynamics of future energy consumption and energy saving and emission reduction potential. This is especially true for developing countries such as China, where uncertainties are greater while the economy continues to undergo rapid growth and industrialization. A slightly different assumption or storyline could result in significant discrepancies among different model results. Therefore, it is necessary to understand the key models in terms of their scope, methodologies, key driver assumptions and the associated findings. A comparative analysis of LBNL's energy end-use model scenarios with the five above studies was thus conducted to examine similarities and divergences in methodologies, scenario storylines, macroeconomic drivers and assumptions as well as aggregate energy and emission scenario results. Besides directly tracing different energy and CO2 savings…

  14. Multi-objective calibration and uncertainty analysis of hydrologic models; A comparative study between formal and informal methods

    NASA Astrophysics Data System (ADS)

    Shafii, M.; Tolson, B.; Matott, L. S.

    2012-04-01

    Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in higher levels of complexity being built into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions; moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality, which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU, which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE, which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated both for…
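
    Of the informal methods listed, GLUE is the simplest to sketch: sample parameters, keep "behavioral" sets whose informal likelihood exceeds a threshold, and take quantiles of their predictions as uncertainty limits. A toy example (the one-parameter recession model, Nash-Sutcliffe likelihood, and threshold are all assumed; the quantiles here are unweighted for brevity, whereas GLUE typically likelihood-weights them):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.linspace(0, 10, 50)
    observed = np.exp(-0.3 * t) + rng.normal(0, 0.02, t.size)  # synthetic data

    def model(k):                      # toy one-parameter recession model
        return np.exp(-k * t)

    ks = rng.uniform(0.1, 0.6, 2000)                   # parameter samples
    sims = np.array([model(k) for k in ks])
    sse = ((sims - observed) ** 2).sum(axis=1)
    nse = 1 - sse / ((observed - observed.mean()) ** 2).sum()  # informal lik.

    behavioral = nse > 0.9                             # assumed threshold
    lower, upper = np.quantile(sims[behavioral], [0.05, 0.95], axis=0)
    print(behavioral.sum(), "behavioral sets; limits at t=0:",
          round(lower[0], 3), "-", round(upper[0], 3))
    ```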

  15. Model Ambiguities in Configurational Comparative Research

    ERIC Educational Resources Information Center

    Baumgartner, Michael; Thiem, Alrik

    2017-01-01

    For many years, sociologists, political scientists, and management scholars have readily relied on Qualitative Comparative Analysis (QCA) for the purpose of configurational causal modeling. However, this article reveals that a severe problem in the application of QCA has gone unnoticed so far: model ambiguities. These arise when multiple causal…

  16. Comparative mRNA analysis of behavioral and genetic mouse models of aggression.

    PubMed

    Malki, Karim; Tosto, Maria G; Pain, Oliver; Sluyter, Frans; Mineur, Yann S; Crusio, Wim E; de Boer, Sietse; Sandnabba, Kenneth N; Kesserwani, Jad; Robinson, Edward; Schalkwyk, Leonard C; Asherson, Philip

    2016-04-01

    Mouse models of aggression have traditionally compared strains, most notably BALB/cJ and C57BL/6. However, these strains were not designed to study aggression, despite differences in aggression-related traits and distinct reactivity to stress. This study compared the expression of genes differentially regulated in a stress (behavioral) mouse model of aggression with those from a recent genetic mouse model of aggression. The study used a discovery-replication design based on two independent mRNA studies of mouse brain tissue. The discovery study identified strain (BALB/cJ and C57BL/6J) × stress (chronic mild stress or control) interactions. Probe sets differentially regulated in the discovery set were intersected with those uncovered in the replication study, which evaluated differences between high and low aggressive animals from three strains specifically bred to study aggression. Network analysis was conducted on the overlapping genes uncovered across both studies. A significant overlap was found, with the genetic mouse study sharing 1,916 probe sets with the stress model. Fifty-one probe sets were found to be strongly dysregulated across both studies, mapping to 50 known genes. Network analysis revealed two plausible pathways, including one centered on the UBC gene hub, which encodes ubiquitin, a protein well known for its role in protein degradation, and another on P38 MAPK. Findings from this study support the stress model of aggression, which showed remarkable molecular overlap with the genetic model. The study uncovered a set of candidate genes, including the Erg2 gene, which has previously been implicated in different psychopathologies. The gene networks uncovered point to a redox pathway as potentially being implicated in aggression-related behaviors. © 2016 Wiley Periodicals, Inc.

  17. Comparative Analysis of VaR Estimation of Double Long-Memory GARCH Models: Empirical Analysis of China's Stock Market

    NASA Astrophysics Data System (ADS)

    Cao, Guangxi; Guo, Jianping; Xu, Lin

    GARCH models are widely used to model the volatility of financial assets and to measure VaR. Based on the long memory, leptokurtosis and fat tails of stock market return series, we compared the ability of double long-memory GARCH models with a skewed Student-t distribution to compute VaR, through an empirical analysis of the Shanghai Composite Index (SHCI) and the Shenzhen Component Index (SZCI). The results show that the ARFIMA-HYGARCH model performs better than the others, and that at VaR levels of 2.5 percent or lower, double long-memory GARCH models have a stronger ability to evaluate in-sample VaRs for long positions than for short positions, while the diametrically opposite conclusion holds for out-of-sample VaR forecasting ability.
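
    The VaR computation behind these comparisons can be illustrated with a plain GARCH(1,1) with Student-t innovations as a stand-in for ARFIMA-HYGARCH (the parameter values and return series below are assumed, not estimated from SHCI/SZCI data):

    ```python
    import numpy as np
    from scipy.stats import t as student_t

    omega, alpha, beta, nu = 1e-6, 0.08, 0.90, 6.0  # assumed GARCH(1,1) params

    rng = np.random.default_rng(6)
    returns = 0.01 * rng.standard_t(nu, 1000)       # placeholder return series

    sigma2 = returns.var()                          # initialise the recursion
    for r in returns:                               # filter conditional variance
        sigma2 = omega + alpha * r ** 2 + beta * sigma2

    # Quantile of the unit-variance Student-t used for GARCH innovations:
    q = student_t.ppf(0.025, nu) * np.sqrt((nu - 2) / nu)
    print(f"one-day 2.5% VaR (long position): {-q * np.sqrt(sigma2):.4%}")
    ```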

  18. Conceptual model of iCAL4LA: Proposing the components using comparative analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Zulaiha; Mutalib, Ariffin Abdul

    2016-08-01

    This paper discusses an ongoing study that initiates the process of determining the common components for a conceptual model of interactive computer-assisted learning specifically designed for low-achieving children. This group of children needs specific learning support that can be used as alternative learning material in their learning environment. To develop the conceptual model, this study extracts the common components from 15 strongly justified computer-assisted learning studies. A comparative analysis was conducted to determine the most appropriate components, using a specific indication classification to prioritize applicability. The extraction process reveals 17 common components for consideration; based on scientific justifications, 16 of them were selected as the proposed components for the model.

  19. A Comparative of business process modelling techniques

    NASA Astrophysics Data System (ADS)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    Many business process modelling techniques are available today. This article investigates the differences among them, explaining the definition and structure of each technique. It presents a comparative analysis of several popular business process modelling techniques based on two criteria: notation and behaviour when implemented for Somerleyton Animal Park. The advantages and disadvantages of each technique are summarized, and the conclusion recommends the techniques that are easiest to use and serves as a basis for evaluating further modelling techniques.

  20. Accuracy of Bolton analysis measured in laser scanned digital models compared with plaster models (gold standard) and cone-beam computer tomography images.

    PubMed

    Kim, Jooseong; Lagravére, Manuel O

    2016-01-01

    The aim of this study was to compare the accuracy of Bolton analysis obtained from digital models scanned with the Ortho Insight three-dimensional (3D) laser scanner system to those obtained from cone-beam computed tomography (CBCT) images and traditional plaster models. CBCT scans and plaster models were obtained from 50 patients. Plaster models were scanned using the Ortho Insight 3D laser scanner; Bolton ratios were calculated with its software. CBCT scans were imported and analyzed using AVIZO software. Plaster models were measured with a digital caliper. Data were analyzed with descriptive statistics and the intraclass correlation coefficient (ICC). Anterior and overall Bolton ratios obtained by the three different modalities exhibited excellent agreement (> 0.970). The mean differences between the scanned digital models and physical models and between the CBCT images and scanned digital models for overall Bolton ratios were 0.41 ± 0.305% and 0.45 ± 0.456%, respectively; for anterior Bolton ratios, 0.59 ± 0.520% and 1.01 ± 0.780%, respectively. ICC results showed that intraexaminer error reliability was generally excellent (> 0.858 for all three diagnostic modalities), with < 1.45% discrepancy in the Bolton analysis. Laser scanned digital models are highly accurate compared to physical models and CBCT scans for assessing the spatial relationships of dental arches for orthodontic diagnosis.
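
    For reference, the Bolton computation itself is a simple ratio of summed mesiodistal tooth widths. The sketch below assumes widths are supplied first-molar-to-first-molar; the ideal values quoted in the comments (~91.3% overall, ~77.2% anterior) are standard textbook figures, not results of this study.

```python
# Minimal sketch of the Bolton ratios, computable from any of the three
# modalities once per-tooth mesiodistal widths (mm) are measured.
def bolton_ratios(mandibular, maxillary):
    """mandibular/maxillary: lists of 12 widths ordered from right first
    molar to left first molar; the anterior ratio uses the six front
    teeth (canine to canine), i.e. indices 3..8 under this ordering."""
    overall = 100.0 * sum(mandibular) / sum(maxillary)             # ideal ~91.3%
    anterior = 100.0 * sum(mandibular[3:9]) / sum(maxillary[3:9])  # ideal ~77.2%
    return overall, anterior
```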

  1. Accuracy of Bolton analysis measured in laser scanned digital models compared with plaster models (gold standard) and cone-beam computer tomography images

    PubMed Central

    Kim, Jooseong

    2016-01-01

    Objective The aim of this study was to compare the accuracy of Bolton analysis obtained from digital models scanned with the Ortho Insight three-dimensional (3D) laser scanner system to those obtained from cone-beam computed tomography (CBCT) images and traditional plaster models. Methods CBCT scans and plaster models were obtained from 50 patients. Plaster models were scanned using the Ortho Insight 3D laser scanner; Bolton ratios were calculated with its software. CBCT scans were imported and analyzed using AVIZO software. Plaster models were measured with a digital caliper. Data were analyzed with descriptive statistics and the intraclass correlation coefficient (ICC). Results Anterior and overall Bolton ratios obtained by the three different modalities exhibited excellent agreement (> 0.970). The mean differences between the scanned digital models and physical models and between the CBCT images and scanned digital models for overall Bolton ratios were 0.41 ± 0.305% and 0.45 ± 0.456%, respectively; for anterior Bolton ratios, 0.59 ± 0.520% and 1.01 ± 0.780%, respectively. ICC results showed that intraexaminer error reliability was generally excellent (> 0.858 for all three diagnostic modalities), with < 1.45% discrepancy in the Bolton analysis. Conclusions Laser scanned digital models are highly accurate compared to physical models and CBCT scans for assessing the spatial relationships of dental arches for orthodontic diagnosis. PMID:26877978

  2. Comparative analysis of economic models in selected solar energy computer programs

    NASA Astrophysics Data System (ADS)

    Powell, J. W.; Barnes, K. A.

    1982-01-01

    The economic evaluation models in five computer programs widely used for analyzing solar energy systems (F-CHART 3.0, F-CHART 4.0, SOLCOST, BLAST, and DOE-2) are compared. Differences in analysis techniques and assumptions among the programs are assessed from the point of view of consistency with the Federal requirements for life cycle costing (10 CFR Part 436), effect on predicted economic performance and optimal system size, ease of use, and general applicability to diverse system types and building types. The FEDSOL program, developed by the National Bureau of Standards specifically to meet the Federal life cycle cost requirements, serves as the basis for the comparison. Results of the study are illustrated in test cases of two different types of Federally owned buildings: a single-family residence and a low-rise office building.

  3. The effectiveness of physical models in teaching anatomy: a meta-analysis of comparative studies.

    PubMed

    Yammine, Kaissar; Violato, Claudio

    2016-10-01

    There are various educational methods used in anatomy teaching. While three-dimensional (3D) visualization technologies are gaining ground due to their ever-increasing realism, reports investigating physical models as a low-cost traditional 3D method remain of considerable interest. The aim of this meta-analysis is to quantitatively assess the effectiveness of such models based on comparative studies. Eight studies (7 randomized trials; 1 quasi-experimental), including 16 comparison arms and 820 learners, met the inclusion criteria. Primary outcomes were defined as factual, spatial, and overall percentage scores. Educational methods using physical models yielded significantly better results than all other educational methods for overall knowledge (p < 0.001) and for spatial knowledge acquisition (p < 0.001). Significantly better results were also found for long-term knowledge retention (p < 0.01). No significant difference was found for factual knowledge acquisition. The evidence in the present systematic review was found to have high internal validity and at least acceptable strength. In conclusion, physical anatomical models offer a promising tool for teaching gross anatomy in 3D representation due to their easy accessibility and educational effectiveness. Such models could be a practical tool for raising learners' level of gross anatomy knowledge at low cost.
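
    As an illustration of the pooling machinery behind such results, here is a minimal DerSimonian-Laird random-effects sketch; the input arrays of per-study effect sizes and variances are assumed, not taken from the paper.

```python
# Minimal DerSimonian-Laird random-effects pooling: d and v are arrays
# of per-study standardized mean differences and their variances.
import numpy as np

def random_effects_pool(d, v):
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                               # fixed-effect weights
    d_fe = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fe) ** 2)           # Cochran's heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(d) - 1)) / c)   # between-study variance
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return d_re, se, tau2
```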

  4. Comparative study on DuPont analysis and DEA models for measuring stock performance using financial ratio

    NASA Astrophysics Data System (ADS)

    Arsad, Roslah; Shaari, Siti Nabilah Mohd; Isa, Zaidi

    2017-11-01

    Determining stock performance using financial ratios is challenging for many investors and researchers. Financial ratios can indicate the strengths and weaknesses of a company's stock performance; there are five categories, namely liquidity, efficiency, leverage, profitability and market ratios, and it is important to interpret them correctly for proper financial decision making. The purpose of this study is to compare the performance of companies listed on Bursa Malaysia using Data Envelopment Analysis (DEA) and DuPont analysis models. The study covers 116 consumer products companies listed on Bursa Malaysia in 2015. DEA computes efficiency scores and ranks the companies accordingly; the Alirezaee and Afsharian variant of the Charnes, Cooper and Rhodes (CCR) model under Constant Returns to Scale (CRS) is employed. DuPont analysis is a traditional tool for measuring the operating performance of companies; here it is used to evaluate three aspects, namely profitability, efficiency of asset utilization and financial leverage, with Return on Equity (ROE) also calculated. This study finds that the two models produce different rankings of the selected sample, and hypothesis testing based on Pearson's correlation indicates no correlation between the rankings produced by DEA and DuPont analysis. The DEA ranking model proposed by Alirezaee and Afsharian is unstable: it cannot provide a complete ranking because the Balance Index values are all equal to zero.
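
    The DuPont side of the comparison rests on the three-factor identity ROE = net profit margin × asset turnover × equity multiplier, which can be written out directly:

```python
# Minimal sketch of the three-factor DuPont decomposition used in such
# studies; inputs are standard financial-statement line items.
def dupont_roe(net_income, revenue, total_assets, total_equity):
    margin = net_income / revenue           # profitability
    turnover = revenue / total_assets       # efficiency of asset utilization
    leverage = total_assets / total_equity  # financial leverage
    return margin * turnover * leverage     # algebraically = net_income / equity

# e.g. dupont_roe(12.0, 150.0, 200.0, 80.0) -> 0.15 (= 12 / 80)
```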

  5. Comparative Protein Structure Modeling Using MODELLER

    PubMed Central

    Webb, Benjamin; Sali, Andrej

    2016-01-01

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406
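
    For orientation, a model-building script in the spirit of the unit's TvLDH example looks like the sketch below, assuming MODELLER is installed and licensed; the alignment and template file names follow the standard tutorial and are placeholders here.

```python
# Minimal MODELLER run: build comparative models of target TvLDH from
# template 1bdm chain A. Fold assignment and target-template alignment
# are assumed to have been done upstream (producing the .ali file).
from modeller import environ
from modeller.automodel import automodel

env = environ()
a = automodel(env,
              alnfile='TvLDH-1bdmA.ali',  # target-template alignment (PIR format)
              knowns='1bdmA',             # template of known structure
              sequence='TvLDH')           # target sequence
a.starting_model = 1
a.ending_model = 5                        # build five candidate models
a.make()                                  # models are then ranked/evaluated
```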

  6. Predicting Air Permeability of Handloom Fabrics: A Comparative Analysis of Regression and Artificial Neural Network Models

    NASA Astrophysics Data System (ADS)

    Mitra, Ashis; Majumdar, Prabal Kumar; Bannerjee, Debamalya

    2013-03-01

    This paper presents a comparative analysis of two modeling methodologies for predicting the air permeability of plain woven handloom cotton fabrics. Four basic fabric constructional parameters, namely ends per inch, picks per inch, warp count and weft count, were used as inputs for artificial neural network (ANN) and regression models. Of the four regression models tried, the interaction model showed very good prediction performance, with a mean absolute error of only 2.017%. However, the ANN models demonstrated superiority over the regression models in terms of both correlation coefficient and mean absolute error. The ANN model with 10 nodes in its single hidden layer achieved correlation coefficients of 0.982 and 0.929 and mean absolute errors of only 0.923% and 2.043% for the training and testing data, respectively.
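
    A hedged sketch of this kind of comparison (not the authors' exact architecture or data) using scikit-learn, with X assumed to hold the four constructional parameters and y the measured air permeability:

```python
# Interaction-term regression vs. a single-hidden-layer network on the
# four inputs named in the abstract. X (n_samples x 4) and y are assumed.
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

interaction = make_pipeline(
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    LinearRegression())
ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))

for name, model in [("interaction regression", interaction), ("ANN(10)", ann)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(name, round(r2, 3))
```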

  7. Comparative Protein Structure Modeling Using MODELLER.

    PubMed

    Webb, Benjamin; Sali, Andrej

    2014-09-08

    Functional characterization of a protein sequence is one of the most frequent problems in biology. This task is usually facilitated by accurate three-dimensional (3-D) structure of the studied protein. In the absence of an experimentally determined structure, comparative or homology modeling can sometimes provide a useful 3-D model for a protein that is related to at least one known protein structure. Comparative modeling predicts the 3-D structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. Copyright © 2014 John Wiley & Sons, Inc.

  8. Comparing the Fit of Item Response Theory and Factor Analysis Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo

    2011-01-01

    Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…

  9. Comparative Logic Modeling for Policy Analysis: The Case of HIV Testing Policy Change at the Department of Veterans Affairs

    PubMed Central

    Langer, Erika M; Gifford, Allen L; Chan, Kee

    2011-01-01

    Objective Logic models have been used to evaluate policy programs, plan projects, and allocate resources. Logic Modeling for policy analysis has been used rarely in health services research but can be helpful in evaluating the content and rationale of health policies. Comparative Logic Modeling is used here on human immunodeficiency virus (HIV) policy statements from the Department of Veterans Affairs (VA) and Centers for Disease Control and Prevention (CDC). We created visual representations of proposed HIV screening policy components in order to evaluate their structural logic and research-based justifications. Data Sources and Study Design We performed content analysis of VA and CDC HIV testing policy documents in a retrospective case study. Data Collection Using comparative Logic Modeling, we examined the content and primary sources of policy statements by the VA and CDC. We then quantified evidence-based causal inferences within each statement. Principal Findings VA HIV testing policy structure largely replicated that of the CDC guidelines. Despite similar design choices, chosen research citations did not overlap. The agencies used evidence to emphasize different components of the policies. Conclusion Comparative Logic Modeling can be used by health services researchers and policy analysts more generally to evaluate structural differences in health policies and to analyze research-based rationales used by policy makers. PMID:21689094

  10. A comparative analysis of modeled and monitored ambient hazardous air pollutants in Texas: a novel approach using concordance correlation.

    PubMed

    Lupo, Philip J; Symanski, Elaine

    2009-11-01

    Often, in studies evaluating the health effects of hazardous air pollutants (HAPs), researchers rely on ambient air levels to estimate exposure. Two potential data sources are modeled estimates from the U.S. Environmental Protection Agency (EPA) Assessment System for Population Exposure Nationwide (ASPEN) and ambient air pollutant measurements from monitoring networks. The goal was to conduct comparisons of modeled and monitored estimates of HAP levels in the state of Texas using traditional approaches and a previously unexploited method, concordance correlation analysis, to better inform decisions regarding agreement. Census tract-level ASPEN estimates and monitoring data for all HAPs throughout Texas, available from the EPA Air Quality System, were obtained for 1990, 1996, and 1999. Monitoring sites were mapped to census tracts using U.S. Census data. Exclusions were applied to restrict the monitored data to measurements collected using a common sampling strategy with minimal missing values over time. Comparisons were made for 28 HAPs in 38 census tracts located primarily in urban areas throughout Texas. For each pollutant and by year of assessment, modeled and monitored air pollutant annual levels were compared using standard methods (i.e., ratios of model-to-monitor annual levels). Concordance correlation analysis was also used, which assesses linearity and agreement while providing a formal method of statistical inference. Forty-eight percent of the median model-to-monitor values fell between 0.5 and 2, whereas only 17% of concordance correlation coefficients were significant and greater than 0.5. On the basis of concordance correlation analysis, the findings indicate there is poorer agreement when compared with the previously applied ad hoc methods to assess comparability between modeled and monitored levels of ambient HAPs.
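
    Lin's concordance correlation coefficient, the "previously unexploited method" here, is straightforward to compute; a minimal sketch with modeled (x) and monitored (y) annual levels as assumed inputs:

```python
# Lin's concordance correlation coefficient: measures both linearity
# and agreement with the 45-degree line (unlike Pearson's r).
import numpy as np

def concordance_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```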

  11. Comparative empirical analysis of flow-weighted transit route networks in R-space and evolution modeling

    NASA Astrophysics Data System (ADS)

    Huang, Ailing; Zang, Guangzhi; He, Zhengbing; Guan, Wei

    2017-05-01

    Urban public transit systems are typical mixed complex networks with dynamic flow, and their evolution should be a process coupling topological structure with flow dynamics, which has received little attention. This paper adopts the R-space representation for a comparative empirical analysis of Beijing's flow-weighted transit route network (TRN), finding that Beijing's TRNs in both 2011 and 2015 exhibit scale-free properties. We therefore propose an evolution model driven by flow to simulate the development of TRNs, taking into account the passengers' dynamic behaviors triggered by topological change. The model treats TRN evolution as an iterative process: at each time step, a certain number of new routes are generated driven by travel demand, which leads to dynamic evolution of the new routes' flow and triggers perturbations in nearby routes that in turn affect the next round of route openings. We present a theoretical analysis based on mean-field theory, as well as numerical simulations of the model. The results agree well with our empirical analysis, indicating that the model can simulate TRN evolution with scale-free distributions of node strength and degree. The purpose of this paper is to illustrate the global evolutionary mechanism of transit networks, which can be used to develop planning and design strategies for real TRNs.

  12. A comparative analysis of hazard models for predicting debris flows in Madison County, VA

    USGS Publications Warehouse

    Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.

    2001-01-01

    During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km2) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred of the three for evaluating landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and the spatial response of pore pressure in its calculation of slope stability. The stability calculations used in SINMAP and LISA are similar and utilize probability distribution functions for certain parameters. Unlike SINMAP, which only considers soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is preferred over SINMAP for evaluating slope stability. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane, so root strength and tree surcharge had negligible effect on slope stability. The conclusion that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
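
    All three models ultimately rest on an infinite-slope factor-of-safety calculation. The sketch below shows that core formula with a pore-pressure term; the parameter values in the example are illustrative only, and transient models such as Iverson's evolve the pressure head psi in time rather than fixing it.

```python
# Minimal infinite-slope factor of safety with pore pressure; FS < 1
# indicates instability.
import numpy as np

def factor_of_safety(c, phi, gamma, z, beta, psi, gamma_w=9.81):
    """c: cohesion (kPa); phi: friction angle (deg); gamma: soil unit
    weight (kN/m^3); z: failure depth (m); beta: slope angle (deg);
    psi: pressure head at depth z (m)."""
    phi, beta = np.radians(phi), np.radians(beta)
    tau = gamma * z * np.sin(beta) * np.cos(beta)   # driving shear stress
    sigma_n = gamma * z * np.cos(beta) ** 2         # total normal stress
    u = gamma_w * psi                               # pore-water pressure
    return (c + (sigma_n - u) * np.tan(phi)) / tau

# e.g. factor_of_safety(c=2.0, phi=30, gamma=18, z=1.5, beta=30, psi=1.0)
```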

  13. Comparing the Performance of Three Land Models in Global C Cycle Simulations: A Detailed Structural Analysis: Structural Analysis of Land Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rafique, Rashid; Xia, Jianyang; Hararuk, Oleksandra

    Land models are valuable tools for understanding the dynamics of the global carbon (C) cycle. Various models have been developed and used for predictions of future C dynamics, but uncertainties still exist. Diagnosing the models' behaviors in terms of structure can help narrow down the uncertainties in predictions of C dynamics. In this study, three widely used land surface models, namely CSIRO's Atmosphere Biosphere Land Exchange (CABLE) with 9 C pools, the Community Land Model (version 3.5) combined with the Carnegie-Ames-Stanford Approach (CLM-CASA) with 12 C pools, and the Community Land Model (version 4) (CLM4) with 26 C pools, were driven by the observed meteorological forcing. The simulated C storage and residence time were used for analysis and were computed globally for all individual soil and plant pools, as were net primary productivity (NPP) and its allocation to different plant components in these models. Remotely sensed NPP and the statistically derived HWSD and GLC2000 datasets were used as references to evaluate the performance of the models. Results showed that CABLE exhibited better agreement with the reference C storage and residence time for plant and soil pools than CLM-CASA and CLM4. CABLE had longer bulk residence times for soil C pools and stored more C in roots, whereas CLM-CASA and CLM4 stored more C in woody pools due to differential NPP allocation. Overall, these results indicate that the differences in C storage and residence times among the three models are largely due to differences in their fundamental structures (number of C pools), NPP allocation, and C transfer rates. Our results have implications for model development and provide a general framework for explaining biases and uncertainties in simulations of C storage and residence times from the perspective of model structure.

  14. A comparative modeling analysis of multiscale temporal variability of rainfall in Australia

    NASA Astrophysics Data System (ADS)

    Samuel, Jos M.; Sivapalan, Murugesu

    2008-07-01

    The effects of long-term natural climate variability and human-induced climate change on rainfall variability have become the focus of much concern and recent research efforts. In this paper, we present the results of a comparative analysis of observed multiscale temporal variability of rainfall in the Perth, Newcastle, and Darwin regions of Australia. This empirical and stochastic modeling analysis explores multiscale rainfall variability, i.e., ranging from short to long term, including within-storm patterns, and intra-annual, interannual, and interdecadal variabilities, using data taken from each of these regions. The analyses investigated how storm durations, interstorm periods, and average storm rainfall intensities differ for different climate states and demonstrated significant differences in this regard between the three selected regions. In Perth, the average storm intensity is stronger during La Niña years than during El Niño years, whereas in Newcastle and Darwin storm duration is longer during La Niña years. Increase of either storm duration or average storm intensity is the cause of higher average annual rainfall during La Niña years as compared to El Niño years. On the other hand, within-storm variability does not differ significantly between different ENSO states in all three locations. In the case of long-term rainfall variability, the statistical analyses indicated that in Newcastle the long-term rainfall pattern reflects the variability of the Interdecadal Pacific Oscillation (IPO) index, whereas in Perth and Darwin the long-term variability exhibits a step change in average annual rainfall (up in Darwin and down in Perth) which occurred around 1970. The step changes in Perth and Darwin and the switch in IPO states in Newcastle manifested differently in the three study regions in terms of changes in the annual number of rainy days or the average daily rainfall intensity or both. On the basis of these empirical data analyses, a stochastic

  15. Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships (MCBIOS)

    EPA Science Inventory

    Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships Jie Liu1,2, Richard Judson1, Matthew T. Martin1, Huixiao Hong3, Imran Shah1 1National Center for Computational Toxicology (NCCT), US EPA, RTP, NC...

  16. Comparative Analysis.

    DTIC Science & Technology

    1987-11-01

    differential qualitative (DQ) analysis, which solves the task, providing explanations suitable for use by design systems, automated diagnosis, intelligent tutoring systems, and explanation-based learning... comparative analysis as an important component; the explanation is used in many different ways. One method of automated design is the principled...

  17. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Günther, Michael; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties; other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities, and we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method to identify particularly low sensitivities and detect superfluous parameters, while an experimenter could use it to identify particularly high sensitivities and improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
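
    To make the notion of parameter sensitivity concrete, the sketch below computes a finite-difference sensitivity of a generic first-order activation ODE; this toy model is a stand-in for, not a reproduction of, the Hatze and Zajac dynamics.

```python
# First-order sensitivity dq(t)/dtau of a generic activation model
# q' = (u - q)/tau, estimated by central differences on the ODE solution.
import numpy as np
from scipy.integrate import solve_ivp

def activation(t, q, tau, u=1.0):
    return (u - q) / tau

def solution(tau, t_eval):
    sol = solve_ivp(activation, (t_eval[0], t_eval[-1]), [0.0],
                    t_eval=t_eval, args=(tau,))
    return sol.y[0]

t = np.linspace(0.0, 0.5, 51)
tau0, h = 0.04, 1e-6
sens = (solution(tau0 + h, t) - solution(tau0 - h, t)) / (2 * h)
```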

  18. Linked Sensitivity Analysis, Calibration, and Uncertainty Analysis Using a System Dynamics Model for Stroke Comparative Effectiveness Research.

    PubMed

    Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B

    2016-11-01

    As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. To demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: Morris method (sensitivity analysis), multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
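
    The Morris screening step can be reproduced in miniature with the SALib package; the sketch below uses a three-parameter toy function in place of the System Dynamics stroke simulator, and the parameter names and bounds are hypothetical.

```python
# Morris elementary-effects screening with SALib on a toy model.
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

problem = {
    "num_vars": 3,  # the study screened 60 uncertain parameters
    "names": ["incidence", "case_fatality", "treatment_uptake"],  # hypothetical
    "bounds": [[0.001, 0.01], [0.05, 0.3], [0.2, 0.9]],
}

def toy_model(x):                 # placeholder for the stroke simulator
    return x[0] * (1 + x[1]) / x[2]

X = morris_sample.sample(problem, N=100, num_levels=4)
Y = np.array([toy_model(x) for x in X])
Si = morris_analyze.analyze(problem, X, Y, num_levels=4)
# mu_star ranks influence; influential parameters go on to calibration
print(dict(zip(problem["names"], np.round(Si["mu_star"], 3))))
```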

  19. Multi-criteria comparative evaluation of spallation reaction models

    NASA Astrophysics Data System (ADS)

    Andrianov, Andrey; Andrianova, Olga; Konobeev, Alexandr; Korovin, Yury; Kuptsov, Ilya

    2017-09-01

    This paper presents an approach to the comparative evaluation of the predictive ability of spallation reaction models based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE), together with the results of such a comparison for 17 spallation reaction models describing the interaction of high-energy protons with natPb.
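
    Of the listed methods, TOPSIS is the most compact to sketch. The snippet below scores alternatives on benefit-type criteria; the decision matrix and weights are random placeholders, not the paper's data.

```python
# Minimal TOPSIS: rank alternatives by closeness to the ideal solution.
import numpy as np

def topsis(M, w):
    M = M / np.linalg.norm(M, axis=0)           # vector-normalize each criterion
    V = M * w                                   # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)  # best/worst on each criterion
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)              # closeness: higher is better

# placeholder: 17 models scored on 4 criteria with assumed weights
scores = topsis(np.random.rand(17, 4), np.array([0.4, 0.3, 0.2, 0.1]))
print(np.argsort(scores)[::-1])                 # ranking of the 17 alternatives
```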

  20. Development of multivariate NTCP models for radiation-induced hypothyroidism: a comparative analysis.

    PubMed

    Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D'Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto

    2012-12-27

    Hypothyroidism is a frequent late side effect of radiation therapy of the cervical region. The purpose of this work is to develop multivariate normal tissue complication probability (NTCP) models for radiation-induced hypothyroidism (RHT) and to compare them with existing NTCP models for RHT. Fifty-three patients treated with sequential chemo-radiotherapy for Hodgkin's lymphoma (HL) were retrospectively reviewed for RHT events. Clinical information along with thyroid gland dose distribution parameters were collected, and their correlation with RHT was analyzed by Spearman's rank correlation coefficient (Rs). Multivariate logistic regression with resampling (bootstrapping) was applied to select model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). Models were tested against external published data on RHT and compared with other published NTCP models. When the thyroid volume exceeding X Gy is expressed as a percentage (Vx(%)), a two-variable NTCP model including V30(%) and gender proved to be the optimal predictive model for RHT (Rs = 0.615, p < 0.001; AUC = 0.87). Conversely, when the absolute thyroid volume exceeding X Gy (Vx(cc)) was analyzed, an NTCP model based on three variables, V30(cc), thyroid gland volume and gender, was selected as the most predictive model (Rs = 0.630, p < 0.001; AUC = 0.85). The three-variable model performs better when tested on an external cohort characterized by large inter-individual variation in thyroid volumes (AUC = 0.914, 95% CI 0.760-0.984). A comparable performance was found between our model and one proposed in the literature based on thyroid gland mean dose and volume (p = 0.264). The absolute volume of thyroid gland exceeding 30 Gy, in combination with thyroid gland volume and gender, provides an NTCP model for RHT with improved prediction capability not only within our patient population but also in an external
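
    The two-variable model described here is an ordinary logistic regression; a minimal sketch with assumed input arrays (v30_percent, gender, y are not provided by the abstract) and AUC scoring:

```python
# Logistic-regression NTCP fit: RHT events vs. V30(%) and gender.
# v30_percent, gender (0/1) and y (RHT event, 0/1) are assumed arrays.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X = np.column_stack([v30_percent, gender])
model = LogisticRegression().fit(X, y)
ntcp = model.predict_proba(X)[:, 1]   # per-patient complication probability
print("AUC:", roc_auc_score(y, ntcp))
```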

  1. Comparative analysis of detection methods for congenital cytomegalovirus infection in a Guinea pig model.

    PubMed

    Park, Albert H; Mann, David; Error, Marc E; Miller, Matthew; Firpo, Matthew A; Wang, Yong; Alder, Stephen C; Schleiss, Mark R

    2013-01-01

    To assess the validity of the guinea pig as a model for congenital cytomegalovirus (CMV) infection by comparing the effectiveness of detecting the virus by real-time polymerase chain reaction (PCR) in blood, urine, and saliva. Case-control study. Academic research. Eleven pregnant Hartley guinea pigs. Blood, urine, and saliva samples were collected from guinea pig pups delivered from pregnant dams inoculated with guinea pig CMV. These samples were then evaluated for the presence of guinea pig CMV by real-time PCR, assuming 100% transmission. Thirty-one pups delivered from 9 inoculated pregnant dams and 8 uninfected control pups underwent testing for guinea pig CMV and for auditory brainstem response hearing loss. Repeated-measures analysis of variance demonstrated that infected pups did not weigh significantly less than noninfected control pups. Six infected pups demonstrated auditory brainstem response hearing loss. The sensitivity and specificity of the real-time PCR assay on saliva samples were 74.2% and 100.0%, respectively; the sensitivity of the assay on blood and urine samples was significantly lower. Real-time PCR assays of blood, urine, and saliva thus show that saliva samples offer high sensitivity and specificity for detecting congenital CMV infection in guinea pigs, consistent with recent screening studies in human newborns. The guinea pig may be a good animal model in which to compare different diagnostic assays for congenital CMV infection.

  2. Comparative Analysis of Soft Computing Models in Prediction of Bending Rigidity of Cotton Woven Fabrics

    NASA Astrophysics Data System (ADS)

    Guruprasad, R.; Behera, B. K.

    2015-10-01

    Quantitative prediction of fabric mechanical properties is an essential requirement for design engineering of textile and apparel products. In this work, the possibility of prediction of bending rigidity of cotton woven fabrics has been explored with the application of Artificial Neural Network (ANN) and two hybrid methodologies, namely Neuro-genetic modeling and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling. For this purpose, a set of cotton woven grey fabrics was desized, scoured and relaxed. The fabrics were then conditioned and tested for bending properties. With the database thus created, a neural network model was first developed using back propagation as the learning algorithm. The second model was developed by applying a hybrid learning strategy, in which genetic algorithm was first used as a learning algorithm to optimize the number of neurons and connection weights of the neural network. The Genetic algorithm optimized network structure was further allowed to learn using back propagation algorithm. In the third model, an ANFIS modeling approach was attempted to map the input-output data. The prediction performances of the models were compared and a sensitivity analysis was reported. The results show that the prediction by neuro-genetic and ANFIS models were better in comparison with that of back propagation neural network model.

  3. Statistical correlation analysis for comparing vibration data from test and analysis

    NASA Technical Reports Server (NTRS)

    Butler, T. G.; Strang, R. F.; Purves, L. R.; Hershfeld, D. J.

    1986-01-01

    A theory was developed to compare vibration modes obtained by NASTRAN analysis with those obtained experimentally. Because many more analytical modes can be obtained than experimental modes, the analytical set was treated as expansion functions for putting both sources in comparative form. The dimensional symmetry was developed for three general cases: nonsymmetric whole model compared with a nonsymmetric whole structural test, symmetric analytical portion compared with a symmetric experimental portion, and analytical symmetric portion with a whole experimental test. The theory was coded and a statistical correlation program was installed as a utility. The theory is established with small classical structures.
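
    A common numerical index for pairing analytical and experimental modes, related in spirit to (though not necessarily identical with) the statistical correlation developed here, is the Modal Assurance Criterion:

```python
# Modal Assurance Criterion between real mode-shape matrices sampled at
# matching sensor locations; Phi_a, Phi_e have shape (dof, modes).
import numpy as np

def mac(Phi_a, Phi_e):
    num = np.abs(Phi_a.T @ Phi_e) ** 2
    den = np.outer(np.sum(Phi_a ** 2, axis=0), np.sum(Phi_e ** 2, axis=0))
    return num / den  # mac[i, j] near 1 => analytical mode i matches test mode j
```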

  4. Comparative Modelling of the Spectra of Cool Giants

    NASA Technical Reports Server (NTRS)

    Lebzelter, T.; Heiter, U.; Abia, C.; Eriksson, K.; Ireland, M.; Neilson, H.; Nowotny, W.; Maldonado, J.; Merle, T.; Peterson, R.; et al.

    2012-01-01

    Our ability to extract information from the spectra of stars depends on reliable models of stellar atmospheres and appropriate techniques for spectral synthesis. Various model codes and strategies for the analysis of stellar spectra are available today. Aims. We aim to compare the results of deriving stellar parameters using different atmosphere models and different analysis strategies. The focus is set on high-resolution spectroscopy of cool giant stars. Methods. Spectra representing four cool giant stars were made available to various groups and individuals working in the area of spectral synthesis, asking them to derive stellar parameters from the data provided. The results were discussed at a workshop in Vienna in 2010. Most of the major codes currently used in the astronomical community for analyses of stellar spectra were included in this experiment. Results. We present the results from the different groups, as well as an additional experiment comparing the synthetic spectra produced by various codes for a given set of stellar parameters. Similarities and differences of the results are discussed. Conclusions. Several valid approaches to analyze a given spectrum of a star result in quite a wide range of solutions. The main causes for the differences in parameters derived by different groups seem to lie in the physical input data and in the details of the analysis method. This clearly shows how far from a definitive abundance analysis we still are.

  5. Comparative and Evolutionary Analysis of Grass Pollen Allergens Using Brachypodium distachyon as a Model System

    PubMed Central

    Sharma, Akanksha; Sharma, Niharika; Bhalla, Prem; Singh, Mohan

    2017-01-01

    Comparative genomics has facilitated the mining of biological information from a genome sequence through the detection of similarities and differences with the genomes of closely or more distantly related species. Using such comparative approaches, knowledge can be transferred from model to non-model organisms, and insights can be gained into the structural and evolutionary patterns of specific genes. In the absence of sequenced genomes for allergenic grasses, this study aimed to understand the structure, organisation and expression profiles of grass pollen allergens using genomic data from Brachypodium distachyon, which is phylogenetically related to the allergenic grasses. Combining genomic data with an anther RNA-Seq dataset revealed 24 pollen allergen genes belonging to eight allergen groups mapping on the five chromosomes of B. distachyon. High levels of anther-specific expression were observed for the 24 identified putative allergen-encoding genes in Brachypodium. The genomic evidence suggests that the gene encoding the group 5 allergen, the most potent trigger of hay fever and allergic asthma, originated as a pollen-specific orphan gene in a common grass ancestor of the Brachypodium and Triticeae clades. Gene structure analysis showed that the putative allergen-encoding genes in Brachypodium either lack introns or contain a reduced number of them. Promoter analysis of the identified Brachypodium genes revealed specific cis-regulatory sequences likely responsible for high anther/pollen-specific expression. With the identification of putative allergen-encoding genes in Brachypodium, this study also describes some important plant gene families (e.g., the expansin superfamily, EF-hand family, profilins, etc.) for the first time in the model plant Brachypodium. Altogether, the present study provides new insights into the structural characterization and evolution of pollen allergens and will further serve as a base for their functional

  6. Comparative and Evolutionary Analysis of Grass Pollen Allergens Using Brachypodium distachyon as a Model System.

    PubMed

    Sharma, Akanksha; Sharma, Niharika; Bhalla, Prem; Singh, Mohan

    2017-01-01

    Comparative genomics has facilitated the mining of biological information from a genome sequence through the detection of similarities and differences with the genomes of closely or more distantly related species. Using such comparative approaches, knowledge can be transferred from model to non-model organisms, and insights can be gained into the structural and evolutionary patterns of specific genes. In the absence of sequenced genomes for allergenic grasses, this study aimed to understand the structure, organisation and expression profiles of grass pollen allergens using genomic data from Brachypodium distachyon, which is phylogenetically related to the allergenic grasses. Combining genomic data with an anther RNA-Seq dataset revealed 24 pollen allergen genes belonging to eight allergen groups mapping on the five chromosomes of B. distachyon. High levels of anther-specific expression were observed for the 24 identified putative allergen-encoding genes in Brachypodium. The genomic evidence suggests that the gene encoding the group 5 allergen, the most potent trigger of hay fever and allergic asthma, originated as a pollen-specific orphan gene in a common grass ancestor of the Brachypodium and Triticeae clades. Gene structure analysis showed that the putative allergen-encoding genes in Brachypodium either lack introns or contain a reduced number of them. Promoter analysis of the identified Brachypodium genes revealed specific cis-regulatory sequences likely responsible for high anther/pollen-specific expression. With the identification of putative allergen-encoding genes in Brachypodium, this study also describes some important plant gene families (e.g., the expansin superfamily, EF-hand family, profilins, etc.) for the first time in the model plant Brachypodium. Altogether, the present study provides new insights into the structural characterization and evolution of pollen allergens and will further serve as a base for their functional

  7. Tuning supersymmetric models at the LHC: a comparative analysis at two-loop level.

    NASA Astrophysics Data System (ADS)

    Ghilencea, D. M.; Lee, H. M.; Park, M.

    2012-07-01

    We provide a comparative study of the fine-tuning amount (Δ) at the two-loop leading-log level in supersymmetric models commonly used in SUSY searches at the LHC. These are the constrained MSSM (CMSSM), non-universal Higgs masses models (NUHM1, NUHM2), the non-universal gaugino masses model (NUGM) and GUT-related gaugino masses models (NUGMd). Two definitions of the fine-tuning are used: the first (Δ_max) measures maximal fine-tuning w.r.t. individual parameters, while the second (Δ_q) adds their contributions in "quadrature". As a direct consequence of two theoretical constraints (the EW minimum conditions), fine-tuning (Δ_q) emerges at the mathematical level as a suppressing factor (effective prior) of the averaged likelihood (L) under the priors, under the integral of the global probability of measuring the data (Bayesian evidence p(D)). For each model there is little difference between Δ_q and Δ_max in the region allowed by the data, with similar behaviour as functions of the Higgs, gluino, or stop mass, the SUSY scale $m_{\mathrm{SUSY}} = (m_{\tilde{t}_1} m_{\tilde{t}_2})^{1/2}$, or the dark matter and g - 2 constraints. The analysis has the advantage that, by replacing any of these mass scales or constraints by its latest bound, one easily infers for each model the value of Δ_q and Δ_max, or vice versa. For all models, minimal fine-tuning is achieved for M_Higgs near 115 GeV, with Δ_q ≈ Δ_max ≈ 10 to 100 depending on the model; in the CMSSM this is actually a global minimum. Due to a strong (≈ exponential) dependence of Δ on M_Higgs, for a Higgs mass near 125 GeV the above values of Δ_q ≈ Δ_max increase to between 500 and 1000. Possible corrections to these values are briefly discussed.

  8. Comparing the High School English Curriculum in Turkey through Multi-Analysis

    ERIC Educational Resources Information Center

    Batdi, Veli

    2017-01-01

    This study aimed to compare the High School English Curriculum (HSEC) in accordance with Stufflebeam's context, input, process and product (CIPP) model through multi-analysis. The research includes both quantitative and qualitative aspects. A descriptive analysis was operated through Rasch Measurement Model; SPSS program for the quantitative…

  9. BSM2 Plant-Wide Model construction and comparative analysis with other methodologies for integrated modelling.

    PubMed

    Grau, P; Vanrolleghem, P; Ayesa, E

    2007-01-01

    In this paper, a new methodology for integrated modelling of the WWTP has been used for the construction of the Benchmark Simulation Model No. 2 (BSM2). The transformations approach proposed in this methodology does not require the development of specific transformers to interface unit-process models and allows the construction of tailored models for a particular WWTP while guaranteeing mass and charge continuity for the whole model. The BSM2 Plant-Wide Model (PWM) constructed as a case study is evaluated by means of simulations under different scenarios, and its validity in reproducing the water and sludge lines of a WWTP is demonstrated. Furthermore, the advantages of this methodology compared to other integrated modelling approaches are verified in terms of flexibility and coherence.

  10. Comparative Analysis of Vertebrate Diurnal/Circadian Transcriptomes

    PubMed Central

    Boyle, Greg; Richter, Kerstin; Priest, Henry D.; Traver, David; Mockler, Todd C.; Chang, Jeffrey T.; Kay, Steve A.

    2017-01-01

    From photosynthetic bacteria to mammals, the circadian clock evolved to track diurnal rhythms and enable organisms to anticipate daily recurring changes such as temperature and light. It orchestrates a broad spectrum of physiology, such as the sleep/wake and eating/fasting cycles. While we have made tremendous advances in our understanding of the molecular details of the circadian clock mechanism and how it is synchronized with the environment, we still have only rudimentary knowledge of how it connects to and regulates diurnal physiology. One potential reason is the sheer size of the output network: diurnal/circadian transcriptomic studies report that around 10% of the expressed genome is rhythmically controlled. Zebrafish is an important model system for the study of the core circadian mechanism in vertebrates. As zebrafish shares more than 70% of its genes with humans, it could complement rodents as a model for exploring the diurnal/circadian output, with potential for translational relevance. Here we performed a comparative diurnal/circadian transcriptome analysis against established mouse liver and other tissue datasets. First, by combining liver tissue sampling in a 48 h time series, transcription profiling using oligonucleotide arrays, and bioinformatics analysis, we profiled rhythmic transcripts and identified 2,609 rhythmic genes. The comparative analysis revealed interesting features of the output network regarding the number of rhythmic genes, the proportion of tissue-specific genes, and the extent of transcription-factor family expression. Undoubtedly, the zebrafish model system will help identify new vertebrate outputs and their regulators and provides leads for further characterization of the diurnal cis-regulatory network. PMID:28076377
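
    Rhythmicity in such 48 h time series is often detected with a cosinor fit; the minimal per-gene sketch below is illustrative and not the authors' exact pipeline.

```python
# 24 h cosinor fit per gene via least squares:
# expr(t) ~ mesor + a*cos(wt) + b*sin(wt), amplitude = sqrt(a^2 + b^2).
import numpy as np

def cosinor_fit(t_hours, expr, period=24.0):
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours), np.sin(w * t_hours)])
    beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
    mesor, a, b = beta
    amplitude = np.hypot(a, b)
    phase_hours = (np.arctan2(b, a) / w) % period  # peak time (acrophase)
    return mesor, amplitude, phase_hours
```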

  11. Comparative analysis of model behaviour for flood prediction purposes using Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Casper, M. C.; Grundmann, J.; Buchholz, O.

    2009-03-01

    Distributed watershed models constitute a key component of flood forecasting systems. It is widely recognized that, because of their structural differences, models vary in how well they capture different aspects of system behaviour. This also applies to the reproduction of peak discharges, which is of particular interest for the flood forecasting problem. In our study we use a Self-Organizing Map (SOM) in combination with index measures derived from the flow duration curve to examine the conditions under which three different distributed watershed models are capable of reproducing the flood events present in the calibration data. These indices are specifically designed to extract the peak-discharge characteristics of model output time series obtained from Monte-Carlo simulations with the distributed watershed models NASIM, LARSIM and WaSIM-ETH. The SOM helps to analyze these data by producing a discretized mapping of their distribution in index space onto a two-dimensional plane, such that their patterns, and consequently the patterns of model behaviour, can be conveyed in a comprehensive manner. We demonstrate how the SOM provides useful information about details of model behaviour and helps identify the model parameters that are relevant for the reproduction of peak discharges and thus for flood prediction problems. We further show how the SOM can be used to identify the parameter sets among the Monte-Carlo data that most closely approximate the peak discharges of a measured time series. The results represent the characteristics of the observed time series with accuracy partly superior to a reference simulation obtained with a simple calibration strategy using the global optimization algorithm SCE-UA. The most prominent advantage of using a SOM in the context of model analysis is that it allows comparatively evaluating the
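
    A miniature version of the SOM step can be built with the minisom package; the index vectors below are random placeholders for the flow-duration-curve indices that would be computed from the Monte-Carlo runs.

```python
# Map Monte-Carlo runs onto a SOM via peak-flow index vectors; runs that
# map to the same node exhibit similar peak-flow behaviour.
import numpy as np
from minisom import MiniSom

indices = np.random.rand(5000, 6)   # placeholder: one index vector per run
som = MiniSom(12, 8, indices.shape[1], sigma=1.5,
              learning_rate=0.5, random_seed=1)
som.train_random(indices, 10000)    # unsupervised ordering of the runs

bmus = np.array([som.winner(v) for v in indices])  # best-matching node per run
```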

  12. Comparative analysis of stress in a new proposal of dental implants.

    PubMed

    Valente, Mariana Lima da Costa; de Castro, Denise Tornavoi; Macedo, Ana Paula; Shimano, Antonio Carlos; Dos Reis, Andréa Cândido

    2017-08-01

    The purpose of this study was to compare, through photoelastic analysis, the stress distribution around conventional and modified external hexagon (EH) and morse taper (MT) dental implant connections. Four photoelastic models were prepared (n=1): Model 1, a conventional EH cylindrical implant (Ø 4.0mm×11mm - Neodent®); Model 2, a modified EH cylindrical implant; Model 3, a conventional MT conical implant (Ø 4.3mm×10mm - Neodent®); and Model 4, a modified MT conical implant. Axial and oblique (30° tilt) loads of 100 and 150 N were applied to the devices coupled to the implants. A plane transmission polariscope was used in the analysis of fringes, and each position of interest was recorded by a digital camera. The Tardy method was used to quantify the fringe order (n), from which the maximum shear stress (τ) at each selected point was calculated. The results showed lower stress concentration in the modified cylindrical (EH) implant compared to the conventional model under a 150 N axial load and a 100 N oblique load. Lower stress was also observed for the modified conical (MT) implant under 100 and 150 N oblique loads, which was not observed for the conventional implant model. The comparative analysis of the models showed that the new design proposal generates good stress distribution, especially in the cervical third, suggesting the preservation of bone tissue in the bone crest region. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. The aquatic animals' transcriptome resource for comparative functional analysis.

    PubMed

    Chou, Chih-Hung; Huang, Hsi-Yuan; Huang, Wei-Chih; Hsu, Sheng-Da; Hsiao, Chung-Der; Liu, Chia-Yu; Chen, Yu-Hung; Liu, Yu-Chen; Huang, Wei-Yun; Lee, Meng-Lin; Chen, Yi-Chang; Huang, Hsien-Da

    2018-05-09

    Aquatic animals have great economic and ecological importance. Among them, non-model organisms have been studied with regard to eco-toxicity, stress biology, and environmental adaptation. Thanks to recent advances in next-generation sequencing techniques, large amounts of RNA-seq data for aquatic animals are publicly available; however, no comprehensive resource exists for the analysis, unification, and integration of these datasets. This study uses computational approaches to build a new resource of transcriptomic maps for aquatic animals. This aquatic animal transcriptome map database, dbATM, provides de novo transcriptome assemblies, gene annotation, and comparative analysis for more than twenty aquatic organisms that lack a draft genome. To improve assembly quality, three computational tools (Trinity, Oases and SOAPdenovo-Trans) were employed to generate individual transcriptome assemblies, and the CAP3 and CD-HIT-EST software were then used to merge the three assemblies. In addition, functional annotation analysis provides valuable clues to gene characteristics, including full-length transcript coding regions, conserved domains, gene ontology and KEGG pathways. Furthermore, all identified genes are made available for comparative genomics tasks such as constructing homologous gene groups, building BLAST databases, and phylogenetic analysis. In conclusion, we establish a transcriptomic resource for non-model aquatic animals of great economic and ecological importance, providing functional annotation and comparative transcriptome analysis. The database is publicly accessible at http://dbATM.mbc.nctu.edu.tw/ .

  14. Comparative study of two approaches to model the offshore fish cages

    NASA Astrophysics Data System (ADS)

    Zhao, Yun-peng; Wang, Xin-xin; Decew, Jud; Tsukrov, Igor; Bai, Xiao-dong; Bi, Chun-wei

    2015-06-01

    The goal of this paper is to provide a comparative analysis of two commonly used approaches to discretizing offshore fish cages: the lumped-mass approach and the finite element technique. Two case studies are chosen to compare the predictions of LMA (lumped-mass approach) and FEA (finite element analysis) based numerical modeling techniques. In both case studies, we consider several loading conditions consisting of different uniform currents and monochromatic waves, and we investigate the motion of the cage, its deformation, and the resultant tension in the mooring lines. Both models' predictions are sufficiently close to the experimental data, although for the first experiment the DUT-FlexSim predictions are slightly more accurate than those provided by Aqua-FE™. According to the comparisons, both models can be successfully utilized in the design and analysis of offshore fish cages, provided that an appropriate safety factor is chosen.

  15. A Comparative Analysis of Financial Reporting Models for Private and Public Sector Organizations.

    DTIC Science & Technology

    1995-12-01

    The objective of this thesis was to describe and compare different existing and evolving financial reporting models used in both the public and private sectors. To accomplish this objective, the thesis identified the existing financial reporting models for private sector business organizations, private sector nonprofit organizations, and state and local governments, as well as the evolving financial reporting model for the federal government.

  16. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures.

    PubMed

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model (dimensional or discrete) as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (d_IG+ = 0.77), which remained consistent regardless of the structure of the affect model and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) yielded smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.

  17. Case Problems for Problem-Based Pedagogical Approaches: A Comparative Analysis

    ERIC Educational Resources Information Center

    Dabbagh, Nada; Dass, Susan

    2013-01-01

    A comparative analysis of 51 case problems used in five problem-based pedagogical models was conducted to examine whether there are differences in their characteristics and the implications of such differences on the selection and generation of ill-structured case problems. The five pedagogical models were: situated learning, goal-based scenario,…

  18. Two-Year versus One-Year Head Start Program Impact: Addressing Selection Bias by Comparing Regression Modeling with Propensity Score Analysis

    ERIC Educational Resources Information Center

    Leow, Christine; Wen, Xiaoli; Korfmacher, Jon

    2015-01-01

    This article compares regression modeling and propensity score analysis as different types of statistical techniques used in addressing selection bias when estimating the impact of two-year versus one-year Head Start on children's school readiness. The analyses were based on the national Head Start secondary dataset. After controlling for…

  19. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality

    PubMed Central

    Hondula, David M.; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-01-01

    Background: Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to “adaptation uncertainty” (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. Objectives: This study had three aims: a) Compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. Methods: We estimated impacts for 2070–2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. Results: The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Conclusions: Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634 PMID:28885979

  20. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.

    PubMed

    Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-08-16

    Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: a) Compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.

  1. Comparing of Cox model and parametric models in analysis of effective factors on event time of neuropathy in patients with type 2 diabetes.

    PubMed

    Kargarian-Marvasti, Sadegh; Rimaz, Shahnaz; Abolghasemi, Jamileh; Heydari, Iraj

    2017-01-01

    Cox proportional hazards model is the most common method for analyzing the effects of several variables on survival time. However, under certain circumstances, parametric models give more precise estimates for analyzing survival data than the Cox model. The purpose of this study was to investigate the comparative performance of Cox and parametric models in a survival analysis of factors affecting the event time of neuropathy in patients with type 2 diabetes. This study included 371 patients with type 2 diabetes without neuropathy who were registered at the Fereydunshahr diabetes clinic. Subjects were followed up for the development of neuropathy from 2006 to March 2016. To investigate the factors influencing the event time of neuropathy, variables significant in the univariate model (P < 0.20) were entered into the multivariate Cox and parametric models (P < 0.05). In addition, the Akaike information criterion (AIC) and the area under the ROC curve were used to evaluate the relative goodness of fit of the models and the efficiency of each procedure, respectively. Statistical computing was performed using R software version 3.2.3 (UNIX platforms, Windows and MacOS). Using the Kaplan-Meier method, the survival time to neuropathy was estimated as 76.6 ± 5 months after the initial diagnosis of diabetes. After multivariate analysis with the Cox and parametric models, ethnicity, high-density lipoprotein and family history of diabetes were identified as predictors of the event time of neuropathy (P < 0.05). According to the AIC, the log-normal model, with the lowest Akaike value, was the best-fitting model among the Cox and parametric models. According to the comparison of survival receiver operating characteristic curves, the log-normal model was also the most efficient and best-fitting model.
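
    As an illustration of this kind of comparison, the sketch below fits a Cox model and two parametric accelerated-failure-time models with the Python lifelines package on synthetic data; the column names and values are hypothetical, not the study's data, and lifelines reports the Cox AIC from the partial likelihood, so it is not strictly comparable with the parametric AICs.

      # Comparing a Cox model with parametric AFT models on synthetic survival
      # data (hypothetical columns; illustrative only, not the study's dataset).
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter, LogNormalAFTFitter, WeibullAFTFitter

      rng = np.random.default_rng(0)
      n = 300
      df = pd.DataFrame({
          "hdl": rng.normal(45, 10, n),              # high-density lipoprotein
          "family_history": rng.integers(0, 2, n),   # 1 = family history of diabetes
          "time": rng.lognormal(4.0, 0.5, n),        # months to neuropathy/censoring
          "event": rng.integers(0, 2, n),            # 1 = neuropathy observed
      })

      cox = CoxPHFitter().fit(df, duration_col="time", event_col="event")
      wei = WeibullAFTFitter().fit(df, duration_col="time", event_col="event")
      lgn = LogNormalAFTFitter().fit(df, duration_col="time", event_col="event")

      print("Cox (partial) AIC:", round(cox.AIC_partial_, 1))
      print("Weibull AIC:      ", round(wei.AIC_, 1))   # lower AIC = better fit
      print("Log-normal AIC:   ", round(lgn.AIC_, 1))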

  2. FINDING A METHOD FOR THE MADNESS: A COMPARATIVE ANALYSIS OF STRATEGIC DESIGN METHODOLOGIES

    DTIC Science & Technology

    2017-06-01

    FINDING A METHOD FOR THE MADNESS: A COMPARATIVE ANALYSIS OF STRATEGIC DESIGN METHODOLOGIES BY AMANDA DONNELLY A THESIS...work develops a comparative model for strategic design methodologies, focusing on the primary elements of vision, time, process, communication and...collaboration, and risk assessment. My analysis dissects and compares three potential design methodologies including net assessment, scenarios and

  3. Comparative analysis and visualization of multiple collinear genomes

    PubMed Central

    2012-01-01

    Background Genome browsers are a common tool used by biologists to visualize genomic features including genes, polymorphisms, and many others. However, existing genome browsers and visualization tools are not well-suited to perform meaningful comparative analysis among a large number of genomes. With the increasing quantity and availability of genomic data, there is an increased burden to provide useful visualization and analysis tools for comparison of multiple collinear genomes such as the large panels of model organisms which are the basis for much of the current genetic research. Results We have developed a novel web-based tool for visualizing and analyzing multiple collinear genomes. Our tool illustrates genome-sequence similarity through a mosaic of intervals representing local phylogeny, subspecific origin, and haplotype identity. Comparative analysis is facilitated through reordering and clustering of tracks, which can vary throughout the genome. In addition, we provide local phylogenetic trees as an alternate visualization to assess local variations. Conclusions Unlike previous genome browsers and viewers, ours allows for simultaneous and comparative analysis. Our browser provides intuitive selection and interactive navigation about features of interest. Dynamic visualizations adjust to scale and data content making analysis at variable resolutions and of multiple data sets more informative. We demonstrate our genome browser for an extensive set of genomic data sets composed of almost 200 distinct mouse laboratory strains. PMID:22536897

  4. Specialized stroke services: a meta-analysis comparing three models of care.

    PubMed

    Foley, Norine; Salter, Katherine; Teasell, Robert

    2007-01-01

    Using previously published data, the purpose of this study was to identify and discriminate between three different forms of inpatient stroke care based on timing and duration of treatment and to compare the results of clinically important outcomes. Randomized controlled trials, including a recent review of inpatient stroke unit/rehabilitation care, were identified and grouped into three models of care as follows: (a) acute stroke unit care (patients admitted within 36 h of stroke onset and remaining for up to 2 weeks; n = 5), (b) units combining acute and rehabilitative care (combined; n = 4), and (c) rehabilitation units where patients were transferred onto the service approximately 2 weeks following stroke (post-acute; n = 5). Pooled analyses for the outcomes of mortality, combined death and dependency and length of hospital stay were calculated for each model of care, compared to conventional care. All three models of care were associated with significant reductions in the odds of combined death and dependency; however, acute stroke units were not associated with significant reductions in mortality when this outcome was analyzed separately (OR 0.80; 95% CI: 0.61-1.03). Post-acute stroke units were associated with the greatest reduction in the odds of mortality (OR 0.60; 95% CI: 0.44-0.81). Significant reductions in length of hospital stay were associated with combined stroke units only (weighted mean difference -14 days; 95% CI: -27 to -2). Overall, specialized stroke services were associated with significant reductions in mortality, death and dependency and length of hospital stay although not every model of care was associated with equal benefit.

  5. Emergent structures and understanding from a comparative uncertainty analysis of the FUSE rainfall-runoff modelling platform for >1,100 catchments

    NASA Astrophysics Data System (ADS)

    Freer, J. E.; Odoni, N. A.; Coxon, G.; Bloomfield, J.; Clark, M. P.; Greene, S.; Johnes, P.; Macleod, C.; Reaney, S. M.

    2013-12-01

    If we are to learn about catchments and their hydrological function then a range of analysis techniques can be proposed, from analysing observations to building complex physically based models using detailed attributes of catchment characteristics. Decisions regarding which technique is fit for a specific purpose will depend on the data available, computing resources, and the underlying reasons for the study. Here we explore defining catchment function in a relatively general sense expressed via a comparison of multiple model structures within an uncertainty analysis framework. We use the FUSE (Framework for Understanding Structural Errors - Clark et al., 2008) rainfall-runoff modelling platform and the GLUE (Generalised Likelihood Uncertainty Estimation - Beven and Freer, 2001) uncertainty analysis framework. Using these techniques we assess two main outcomes: 1) benchmarking our predictive capability using discharge performance metrics for a diverse range of catchments across the UK; 2) evaluating emergent behaviour for each catchment and/or region expressed as 'best performing' model structures that may be equally plausible representations of catchment behaviour. We shall show how such comparative hydrological modelling studies reveal patterns of emergent behaviour linked both to seasonal responses and to different geoclimatic regions. These results have implications for the hydrological community regarding how models can help us learn about places as hypothesis testing tools. Furthermore, we explore what the limits are to such an analysis when dealing with differing data quality and information content, from 'pristine' to less well characterised and highly modified catchment domains. This research has been piloted in the UK as part of the Environmental Virtual Observatory programme (EVOp), funded by NERC to demonstrate the use of cyber-infrastructure and cloud computing resources to develop better methods of linking data and models and to support scenario analysis

  6. Comparative analysis through probability distributions of a data set

    NASA Astrophysics Data System (ADS)

    Cristea, Gabriel; Constantinescu, Dan Mihai

    2018-02-01

    In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, and demography. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available which can help us select the best-fitting model. Some of the graphs display both the input data and the fitted distributions at the same time, as probability density and cumulative distribution functions. Goodness-of-fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution, and compare that distance to some threshold values. Calculating the goodness-of-fit statistics also enables us to rank the fitted distributions according to how well they fit the data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness-of-fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness-of-fit tests.
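
    The workflow described here maps directly onto scipy.stats: fit each candidate distribution by maximum likelihood, then rank the fits by a goodness-of-fit statistic. The sketch below uses a synthetic sample and an illustrative set of candidate families.

      # Fit candidate distributions by maximum likelihood, then rank them with
      # the Kolmogorov-Smirnov statistic (synthetic sample; illustrative families).
      import numpy as np
      from scipy import stats

      data = np.random.default_rng(1).gamma(shape=2.0, scale=3.0, size=1000)

      results = []
      for name in ["norm", "lognorm", "gamma", "weibull_min"]:
          params = getattr(stats, name).fit(data)     # MLE parameter estimates
          ks_stat, ks_p = stats.kstest(data, name, args=params)
          results.append((ks_stat, name, ks_p))

      for ks_stat, name, ks_p in sorted(results):     # smaller distance = better fit
          print(f"{name:12s} KS={ks_stat:.4f}  p={ks_p:.3f}")

      # Anderson-Darling is available for a handful of specific families:
      print(stats.anderson(data, dist="norm").statistic)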

  7. Laparoscopic versus open-component separation: a comparative analysis in a porcine model.

    PubMed

    Rosen, Michael J; Williams, Christina; Jin, Judy; McGee, Michael F; Schomisch, Steve; Marks, Jeffrey; Ponsky, Jeffrey

    2007-09-01

    The ideal surgical treatment for complicated ventral hernias remains elusive. Traditional component separation provides local advancement of native tissue for tension-free closure without prosthetic materials. This technique requires an extensive subcutaneous dissection with division of perforating vessels predisposing to skin-flap necrosis and complicated wound infections. A minimally invasive component separation may decrease wound complication rates; however, the adequacy of the myofascial advancement has not been studied. Five 25-kg pigs underwent bilateral laparoscopic component separation. A 10-mm incision was made lateral to the rectus abdominus muscle. The external oblique fascia was incised, and a dissecting balloon was inflated between the internal and external oblique muscles. Two additional ports were placed in the intermuscular space. The external oblique was incised from the costal margin to the inguinal ligament. The maximal abdominal wall advancement was recorded. A formal open-component separation was performed and maximal advancement 5 cm superior and 5 cm inferior to the umbilicus was recorded for comparison. Groups were compared using standard statistical analysis. The laparoscopic component separation was completed successfully in all animals, with a mean of 22 min/side. Laparoscopic component separation yielded 3.9 cm (SD 1.1) of fascial advancement above the umbilicus, whereas 4.4 cm (1.2) was obtained after open release (P = .24). Below the umbilicus, laparoscopic release achieved 5.0 cm (1.0) of advancement, whereas 5.8 cm (1.2) was gained after open release (P = .13). The minimally invasive component separation achieved an average of 86% of the myofascial advancement compared with a formal open release. The laparoscopic approach does not require extensive subcutaneous dissection and might theoretically result in a decreased incidence or decreased complexity of postoperative wound infections or skin-flap necrosis. Based on our preliminary

  8. Comparative analysis of methods for modeling the penetration and plane-parallel motion of conical projectiles in soil

    NASA Astrophysics Data System (ADS)

    Bazhenov, V. G.; Bragov, A. M.; Konstantinov, A. Yu.; Kotov, V. L.

    2015-05-01

    This paper presents an analysis of the accuracy of known and new modeling methods based on the hypotheses of local and plane sections for solving problems of the impact and plane-parallel motion of conical bodies at an angle to the free surface of a half-space occupied by elastoplastic soil. The parameters of the local interaction model, which is quadratic in velocity, are determined by solving the one-dimensional problem of the expansion of a spherical cavity. Axisymmetric problems for each meridional section are solved simultaneously, neglecting mass and momentum transfer in the circumferential direction and using an approach based on the hypothesis of plane sections. The dynamic and kinematic parameters of oblique penetration obtained using the modified models are compared with the results of computer simulation in a three-dimensional formulation. The results obtained for the contact stress distribution along the generator of the pointed cone are in satisfactory agreement.

  9. Preservation of protein clefts in comparative models.

    PubMed

    Piedra, David; Lois, Sergi; de la Cruz, Xavier

    2008-01-16

    Comparative, or homology, modelling of protein structures is the most widely used prediction method when the target protein has homologues of known structure. Given that the quality of a model may vary greatly, several studies have been devoted to identifying the factors that influence modelling results. These studies usually consider the protein as a whole, and only a few provide a separate discussion of the behaviour of biologically relevant features of the protein. Given the value of the latter for many applications, here we extended previous work by analysing the preservation of native protein clefts in homology models. We chose to examine clefts because of their role in protein function/structure, as they are usually the locus of protein-protein interactions, host the enzymes' active site, or, in the case of protein domains, can also be the locus of domain-domain interactions that lead to the structure of the whole protein. We studied how the largest cleft of a protein varies in comparative models. To this end, we analysed a set of 53507 homology models that cover the whole sequence identity range, with a special emphasis on medium and low similarities. More precisely, we examined how cleft quality, measured using six complementary parameters related to both global shape and local atomic environment, depends on the sequence identity between target and template proteins. In addition to this general analysis, we also explored the impact of a number of factors on cleft quality, and found that the relationship between quality and sequence identity varies depending on cleft rank amongst the set of protein clefts (when ordered according to size), and the number of aligned residues. We have examined cleft quality in homology models at a range of sequence identity levels. Our results provide a detailed view of how quality is affected by distinct parameters and thus may help the user of comparative modelling to determine the final quality and applicability of his/her cleft models

  10. NAS Demand Predictions, Transportation Systems Analysis Model (TSAM) Compared with Other Forecasts

    NASA Technical Reports Server (NTRS)

    Viken, Jeff; Dollyhigh, Samuel; Smith, Jeremy; Trani, Antonio; Baik, Hojong; Hinze, Nicholas; Ashiabor, Senanu

    2006-01-01

    The current work incorporates the Transportation Systems Analysis Model (TSAM) to predict the future demand for airline travel. TSAM is a multi-mode, national model that predicts the demand for all long-distance travel at a county level based upon population and demographics. The model conducts a mode choice analysis to compute the demand for commercial airline travel based upon the traveler's purpose of the trip, value of time, and the cost and time of the trip. The county demand for airline travel is then aggregated (or distributed) to the airport level, and the enplanement demand at commercial airports is modeled. With the growth in flight demand, and utilizing current airline flight schedules, the Fratar algorithm is used to develop future flight schedules in the NAS. The projected flights can then be flown through air transportation simulators to quantify the ability of the NAS to meet future demand. A major strength of the TSAM analysis is that scenario planning can be conducted to quantify capacity requirements at individual airports, based upon different future scenarios. Different demographic scenarios can be analyzed to model the demand sensitivity to them. Also, it is fairly well known, but not well modeled at the airport level, that the demand for travel is highly dependent on the cost of travel, or the fare yield of the airline industry. The FAA projects the fare yield (in constant year dollars) to keep decreasing into the future. The magnitude and/or direction of these projections can be suspect in light of the general lack of airline profits and the large rises in airline fuel cost. Also, changes in travel time and convenience have an influence on the demand for air travel, especially for business travel. Future planners cannot easily conduct sensitivity studies of future demand with the FAA TAF data, nor with the Boeing or Airbus projections. In TSAM many factors can be parameterized and various demand sensitivities can be predicted for future travel. These

  11. Least-Squares Regression and Spectral Residual Augmented Classical Least-Squares Chemometric Models for Stability-Indicating Analysis of Agomelatine and Its Degradation Products: A Comparative Study.

    PubMed

    Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A

    2016-01-01

    Two accurate, sensitive, and selective stability-indicating methods are developed and validated for the simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure form or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are two chemometric models subjected to a comparative study based on UV spectral data in the range 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results indicate the ability of the multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to a reference HPLC method, with no significant differences observed in accuracy and precision. The SRACLS model gives results comparable to the PLSR model; however, it retains the qualitative spectral information of the classical least-squares algorithm for the analyzed components.
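
    A minimal PLSR calibration of this kind can be sketched with scikit-learn. The simulated spectra below use hypothetical Gaussian absorbance bands for three components over the same 215-350 nm range; the 16/8 train/test split mirrors the design described above, but every number is illustrative.

      # Minimal PLSR calibration on simulated UV spectra (hypothetical bands;
      # 16 training and 8 test mixtures echo the design above, values are toys).
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      wl = np.linspace(215, 350, 136)                 # wavelengths (nm)

      def band(center, width):                        # Gaussian absorbance band
          return np.exp(-(wl - center) ** 2 / (2 * width ** 2))

      pure = np.vstack([band(240, 12), band(280, 10), band(310, 15)])
      conc = rng.uniform(0.1, 1.0, size=(24, 3))      # AGM, Deg I, Deg II (toy)
      spectra = conc @ pure + rng.normal(0, 0.005, (24, wl.size))

      X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc,
                                                test_size=8, random_state=0)
      pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
      print("held-out R^2:", round(pls.score(X_te, y_te), 3))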

  12. Comparative Cognitive Task Analysis

    DTIC Science & Technology

    2007-01-01

    is to perform a task analysis to determine how people operate in a specific domain on a specific task. Cognitive Task Analysis (CTA) is a set of...accomplish a task. In this chapter, we build on CTA methods by suggesting that comparative cognitive task analysis (C2TA) can help solve the aforementioned

  13. Navigating the complexities of qualitative comparative analysis: case numbers, necessity relations, and model ambiguities.

    PubMed

    Thiem, Alrik

    2014-12-01

    In recent years, the method of Qualitative Comparative Analysis (QCA) has been enjoying increasing levels of popularity in evaluation and directly neighboring fields. Its holistic approach to causal data analysis resonates with researchers whose theories posit complex conjunctions of conditions and events. However, due to QCA's relative immaturity, some of its technicalities and objectives have not yet been well understood. In this article, I seek to raise awareness of six pitfalls of employing QCA with regard to the following three central aspects: case numbers, necessity relations, and model ambiguities. Most importantly, I argue that case numbers are irrelevant to the methodological choice of QCA or any of its variants, that necessity is not as simple a concept as it has been suggested by many methodologists, and that doubt must be cast on the determinacy of virtually all results presented in past QCA research. By means of empirical examples from published articles, I explain the background of these pitfalls and introduce appropriate procedures, partly with reference to current software, that help avoid them. QCA carries great potential for scholars in evaluation and directly neighboring areas interested in the analysis of complex dependencies in configurational data. If users beware of the pitfalls introduced in this article, and if they avoid mechanistic adherence to doubtful "standards of good practice" at this stage of development, then research with QCA will gain in quality, as a result of which a more solid foundation for cumulative knowledge generation and well-informed policy decisions will also be created. © The Author(s) 2014.

  14. COMPARING THE UTILITY OF MULTIMEDIA MODELS FOR HUMAN AND ECOLOGICAL EXPOSURE ANALYSIS: TWO CASES

    EPA Science Inventory

    A number of models are available for exposure assessment; however, few are used as tools for both human and ecosystem risks. This discussion will consider two modeling frameworks that have recently been used to support human and ecological decision making. The study will compare ...

  15. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures

    PubMed Central

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model—dimensional or discrete—as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805

  16. Comparing GWAS Results of Complex Traits Using Full Genetic Model and Additive Models for Revealing Genetic Architecture

    PubMed Central

    Monir, Md. Mamun; Zhu, Jun

    2017-01-01

    Most genome-wide association studies (GWASs) of human complex diseases have ignored dominance, epistasis and ethnic interactions. We conducted comparative GWASs for total cholesterol using a full model and additive models, which illustrate the impact of ignoring these genetic effects on analysis results and demonstrate how the genetic effects of multiple loci can differ across ethnic groups. Fifteen quantitative trait loci (13 individual loci and 3 pairs of epistatic loci) were identified by the full model, whereas only 14 loci (9 common loci and 5 different loci) were identified by the multi-locus additive model. Moreover, 4 loci detected by the full model were not detected using the multi-locus additive model. PLINK analysis identified two loci, and GCTA analysis detected only one locus, with genome-wide significance. The full model identified three previously reported genes as well as several new genes. Bioinformatics analysis showed that some of the new genes are related to cholesterol-associated chemicals and/or diseases. Analyses of the cholesterol data and simulation studies revealed that the full model performed better than the additive model in terms of detection power and unbiased estimation of the genetic variants of complex traits. PMID:28079101
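
    The difference between the model classes can be illustrated with a single simulated locus. The sketch below codes one SNP with an additive term and a dominance indicator and shows that an additive-only regression can miss a purely dominant effect that the fuller model detects; it is a generic illustration, not the study's analysis pipeline.

      # One simulated SNP with a purely dominant effect: the additive-only model
      # can miss it, while the full (additive + dominance) model detects it.
      # Generic illustration with statsmodels, not the study's software.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      g = rng.integers(0, 3, size=2000)               # genotype: 0/1/2 minor alleles
      add = g.astype(float)                           # additive coding
      dom = (g == 1).astype(float)                    # heterozygote (dominance) indicator

      y = 0.8 * dom + rng.normal(0, 1, g.size)        # trait driven by dominance only

      for label, design in [("additive only ", add[:, None]),
                            ("full (add+dom)", np.column_stack([add, dom]))]:
          fit = sm.OLS(y, sm.add_constant(design)).fit()
          print(label, "p-values:", np.round(fit.pvalues[1:], 4))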

  17. Neo-Deterministic and Probabilistic Seismic Hazard Assessments: a Comparative Analysis

    NASA Astrophysics Data System (ADS)

    Peresan, Antonella; Magrin, Andrea; Nekrasova, Anastasia; Kossobokov, Vladimir; Panza, Giuliano F.

    2016-04-01

    Objective testing is the key issue towards any reliable seismic hazard assessment (SHA). Different earthquake hazard maps must demonstrate their capability in anticipating ground shaking from future strong earthquakes before an appropriate use for different purposes - such as engineering design, insurance, and emergency management. Quantitative assessment of map performance is also an essential step in the scientific process of their revision and possible improvement. Cross-checking of probabilistic models with available observations and independent physics-based models is recognized as a major validation procedure. The existing maps from classical probabilistic seismic hazard analysis (PSHA), as well as those from neo-deterministic analysis (NDSHA), which have already been developed for several regions worldwide (including Italy, India and North Africa), are considered to exemplify the possibilities of cross-comparative analysis in spotting the limits and advantages of the different methods. Where the data permit, a comparative analysis versus the documented seismic activity observed in reality is carried out, showing how available observations about past earthquakes can contribute to assessing the performance of the different methods. Neo-deterministic refers to a scenario-based approach, which allows for consideration of a wide range of possible earthquake sources as the starting point for scenarios constructed via full waveform modeling. The method does not make use of empirical attenuation models (i.e., Ground Motion Prediction Equations, GMPE) and naturally supplies realistic time series of ground shaking (i.e., complete synthetic seismograms), readily applicable to complete engineering analysis and other mitigation actions. The standard NDSHA maps provide reliable envelope estimates of maximum seismic ground motion from a wide set of possible scenario earthquakes, including the largest deterministically or historically defined credible earthquake. In addition

  18. Living network meta-analysis compared with pairwise meta-analysis in comparative effectiveness research: empirical study

    PubMed Central

    Nikolakopoulou, Adriani; Mavridis, Dimitris; Furukawa, Toshi A; Cipriani, Andrea; Tricco, Andrea C; Straus, Sharon E; Siontis, George C M; Egger, Matthias

    2018-01-01

    Abstract Objective To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) ("living" network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. Design Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. Data sources Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. Eligibility criteria for study selection Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons were excluded for which direct and indirect evidence disagreed, based on the side-splitting, or node-splitting, test (P<0.10). Outcomes and analysis Cumulative pairwise and network meta-analyses were performed for each selected comparison. Monitoring boundaries of statistical significance were constructed and the evidence against the null hypothesis was considered to be strong when the monitoring boundaries were crossed. The significance level was set at α=5%, with power of 90% (β=10%) and an anticipated treatment effect equal to the final estimate from the network meta-analysis. The frequency of and time to strong evidence against the null hypothesis were compared between pairwise and network meta-analyses. Results 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%). Both network and pairwise meta-analysis

  19. A comparative analysis of speed profile models for wrist pointing movements.

    PubMed

    Vaisman, Lev; Dipietro, Laura; Krebs, Hermano Igo

    2013-09-01

    Following two decades of design and clinical research on robot-mediated therapy for the shoulder and elbow, therapeutic robotic devices for other joints are being proposed: several research groups including ours have designed robots for the wrist, either to be used as stand-alone devices or in conjunction with shoulder and elbow devices. However, in contrast with robots for the shoulder and elbow, which were able to take advantage of descriptive kinematic models developed in neuroscience over the past 30 years, the design of wrist robot controllers cannot rely on similar prior art: wrist movement kinematics has been largely unexplored. This study aimed at examining the speed profiles of fast, visually evoked, visually guided, target-directed human wrist pointing movements. One thousand three hundred ninety-eight (1398) trials were recorded from seven unimpaired subjects who performed center-out flexion/extension and abduction/adduction wrist movements; the data were fitted with 19 models previously proposed for describing reaching speed profiles. A nonlinear least-squares optimization procedure extracted the parameter sets that minimized the error between experimental and reconstructed data. Model performance was compared based on the ability to reconstruct the experimental data. Results suggest that the support-bounded lognormal is the best model for the speed profiles of fast wrist pointing movements. Applications include the design of control algorithms for therapeutic wrist robots and quantitative metrics of motor recovery.
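
    The fitting procedure can be sketched for one candidate model. Below, scipy's curve_fit estimates the parameters of a simple shifted-lognormal speed profile from a synthetic trial; the function and data are illustrative stand-ins, and the paper's support-bounded lognormal additionally constrains the movement's end time.

      # Fitting one candidate speed-profile model (a shifted lognormal) to a
      # synthetic wrist movement by nonlinear least squares (illustrative only).
      import numpy as np
      from scipy.optimize import curve_fit

      def lognormal_speed(t, A, t0, mu, sigma):
          """Bell-shaped speed profile: scaled lognormal density starting at t0."""
          s = np.zeros_like(t)
          m = t > t0
          x = np.maximum(t[m] - t0, 1e-12)           # guard against division by zero
          s[m] = A / (x * sigma * np.sqrt(2 * np.pi)) * np.exp(
              -(np.log(x) - mu) ** 2 / (2 * sigma ** 2))
          return s

      t = np.linspace(0.0, 1.0, 200)                 # seconds
      clean = lognormal_speed(t, 5.0, 0.05, -1.2, 0.4)
      speed = clean + np.random.default_rng(4).normal(0, 0.05, t.size)

      popt, _ = curve_fit(lognormal_speed, t, speed,
                          p0=[1.0, 0.01, -1.0, 0.5],
                          bounds=([0.0, 0.0, -5.0, 0.05], [50.0, 0.5, 2.0, 2.0]))
      sse = np.sum((speed - lognormal_speed(t, *popt)) ** 2)
      print("fitted parameters:", np.round(popt, 3), " SSE:", round(float(sse), 4))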

  20. Comparing flood loss models of different complexity

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

    2013-04-01

    Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both in the data basis and in the methodological approaches used for the development of flood loss models. Despite this, flood loss models remain an important source of uncertainty, and their temporal and spatial transferability is still limited. This contribution investigates the predictive capability of different flood loss models in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explanatory variables, are learned from a set of damage records obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006, for which damage records are available from post-event surveys. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, and novel approaches derived using the data-mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explanatory variables.
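
    As a flavor of the model-complexity comparison, the sketch below contrasts a depth-only stage-damage predictor with a regression tree using three explanatory variables, scored by cross-validation on synthetic damage records; the variables and values are illustrative, not the survey data used in the study.

      # Depth-only stage-damage model versus a regression tree with three
      # explanatory variables, on synthetic damage records (values illustrative).
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeRegressor

      rng = np.random.default_rng(5)
      n = 500
      depth = rng.uniform(0, 3, n)                    # water depth above floor (m)
      duration = rng.uniform(1, 200, n)               # inundation duration (h)
      precaution = rng.integers(0, 2, n)              # private precaution taken?
      rloss = np.clip(0.2 * depth + 0.001 * duration
                      - 0.1 * precaution + rng.normal(0, 0.05, n), 0, 1)

      candidates = [
          ("stage-damage (depth only)", depth.reshape(-1, 1)),
          ("tree with 3 variables    ", np.column_stack([depth, duration, precaution])),
      ]
      for label, X in candidates:
          tree = DecisionTreeRegressor(max_depth=4, random_state=0)
          r2 = cross_val_score(tree, X, rloss, cv=5, scoring="r2").mean()
          print(label, "mean cross-validated R^2 =", round(r2, 2))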

  1. Living network meta-analysis compared with pairwise meta-analysis in comparative effectiveness research: empirical study.

    PubMed

    Nikolakopoulou, Adriani; Mavridis, Dimitris; Furukawa, Toshi A; Cipriani, Andrea; Tricco, Andrea C; Straus, Sharon E; Siontis, George C M; Egger, Matthias; Salanti, Georgia

    2018-02-28

    To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) ("living" network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons were excluded for which direct and indirect evidence disagreed, based on the side-splitting, or node-splitting, test (P<0.10). Cumulative pairwise and network meta-analyses were performed for each selected comparison. Monitoring boundaries of statistical significance were constructed and the evidence against the null hypothesis was considered to be strong when the monitoring boundaries were crossed. The significance level was set at α=5%, with power of 90% (β=10%) and an anticipated treatment effect equal to the final estimate from the network meta-analysis. The frequency of and time to strong evidence against the null hypothesis were compared between pairwise and network meta-analyses. 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%). Both network and pairwise meta-analysis provided strong evidence against the null hypothesis for seven comparisons, but for an additional 10

  2. Comparative Analysis on Nonlinear Models for Ron Gasoline Blending Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Aguilera, R. Carreño; Yu, Wen; Rodríguez, J. C. Tovar; Mosqueda, M. Elena Acevedo; Ortiz, M. Patiño; Juarez, J. J. Medel; Bautista, D. Pacheco

    The blending process is inherently nonlinear and difficult to model, since it may change significantly depending on the components and the process variables of each refinery. Different components can be blended depending on the existing stock, and the chemical characteristics of each component change dynamically; components are blended until the specifications for the various properties required by the customer are reached. One of the most relevant properties is the octane number, which is difficult to control in line (without component storage). Since each refinery process is quite different, a generic gasoline blending model is not useful when in-line blending is to be carried out in a specific process. A mathematical gasoline blending model for a given process, described in state space, is presented in this paper as a basic description of the gasoline blending process. The objective is to adjust the parameters so that the blending model tracks a signal along its trajectory, representing the model both with the extreme learning machine neural network method and with the nonlinear autoregressive moving average (NARMA) neural network method, so that a comparative study can be carried out.
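
    The extreme learning machine side of the comparison is simple enough to sketch: a fixed random hidden layer followed by a least-squares solve for the output weights. The toy blending-style data, dimensions and activation below are illustrative assumptions, not the paper's plant model.

      # Minimal extreme learning machine: a fixed random hidden layer plus a
      # least-squares solve for the output weights (toy blending-style data).
      import numpy as np

      rng = np.random.default_rng(6)
      X = rng.uniform(0, 1, size=(400, 3))            # e.g. component fractions
      y = 90 + 8 * X[:, 0] - 5 * X[:, 1] ** 2 + 3 * np.sin(6 * X[:, 2])

      n_hidden = 50
      W = rng.normal(size=(3, n_hidden))              # random input weights, never trained
      b = rng.normal(size=n_hidden)                   # random biases, never trained

      def hidden(X):
          return np.tanh(X @ W + b)                   # random nonlinear feature map

      beta, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)  # only trained parameters

      X_new = rng.uniform(0, 1, size=(5, 3))
      print("predicted octane-like values:", np.round(hidden(X_new) @ beta, 2))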

  3. Comparative Analysis of Academic Grades in Compulsory Secondary Education in Spain Using Statistical Techniques

    ERIC Educational Resources Information Center

    Veas, Alejandro; Gilar, Raquel; Miñano, Pablo; Castejón, Juan Luis

    2017-01-01

    The present study, based on the construct comparability approach, performs a comparative analysis of general points average for seven courses, using exploratory factor analysis (EFA) and the Partial Credit model (PCM) with a sample of 1398 student subjects (M = 12.5, SD = 0.67) from 8 schools in the province of Alicante (Spain). EFA confirmed a…

  4. A Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and MODIS Data

    NASA Astrophysics Data System (ADS)

    Hazaymeh, K.; Almagbile, A.

    2018-04-01

    In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair and time-series datasets from the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated by each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. The STI-FM produced more accurate reconstructions of both the Landsat-7 spectral bands and NDVI. Furthermore, it produced surface reflectance images having the highest correlation with the actual Landsat-7 images. This study indicates that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration estimation.

  5. Modelling formulations using gene expression programming--a comparative analysis with artificial neural networks.

    PubMed

    Colbourn, E A; Roskilly, S J; Rowe, R C; York, P

    2011-10-09

    This study has investigated the utility and potential advantages of gene expression programming (GEP) - a new development in evolutionary computing for modelling data and automatically generating equations that describe the cause-and-effect relationships in a system - applied to four types of pharmaceutical formulation, and compared the models with those generated by neural networks, a technique now widely used in formulation development. Both methods were capable of discovering subtle and non-linear relationships within the data, with no requirement for the user to specify the functional forms to be used. Although the neural networks rapidly developed models with higher values for the ANOVA R², these were black-box models that provided little insight into the key relationships. GEP, although significantly slower at developing models, generated relatively simple equations describing the relationships that could be interpreted directly. The results indicate that GEP can be considered an effective and efficient modelling technique for formulation data. Copyright © 2011 Elsevier B.V. All rights reserved.
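
    The payoff of an explicit, readable equation can be reproduced with gplearn's SymbolicRegressor, a genetic-programming relative of GEP (not GEP proper); the toy formulation data and all settings below are illustrative assumptions.

      # Genetic-programming symbolic regression with gplearn, a close relative
      # of GEP: it evolves an explicit equation rather than a black-box model
      # (toy formulation data; all settings illustrative).
      import numpy as np
      from gplearn.genetic import SymbolicRegressor

      rng = np.random.default_rng(7)
      X = rng.uniform(0, 1, size=(200, 2))            # two formulation factors
      y = 2.0 * X[:, 0] * X[:, 1] + 0.5 * X[:, 0]     # hidden cause-effect relation

      est = SymbolicRegressor(population_size=500, generations=15,
                              function_set=("add", "sub", "mul", "div"),
                              parsimony_coefficient=0.01, random_state=0)
      est.fit(X, y)
      print(est._program)                             # e.g. add(mul(X0, X1), ...)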

  6. A Comparative Study of Some Dynamic Stall Models

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Kaza, K. R. V.

    1987-01-01

    Three semi-empirical aerodynamic stall models are compared with respect to their lift and moment hysteresis loop prediction, limit cycle behavior, ease of implementation, and feasibility in developing the parameters required for stall flutter prediction of advanced turbines. For the comparison of aeroelastic response prediction including stall, a typical section model and a plate structural model are considered. The response analysis includes both plunging and pitching motions of the blades. In model A, a correction to the angle of attack is applied when the angle of attack exceeds the static stall angle. In model B, a synthesis procedure is used for angles of attack above the static stall angle, and the time history effects are accounted for through the Wagner function. In both models the lift and moment coefficients for angles of attack below stall are obtained from tabular data for a given Mach number and angle of attack. In model C, referred to as the ONERA model, the lift and moment coefficients are given in the form of two differential equations, one for angles below stall and the other for angles above stall. The parameters of these equations are nonlinear functions of the angle of attack.

  7. Comparative transcriptome analysis reveals vertebrate phylotypic period during organogenesis

    PubMed Central

    Irie, Naoki; Kuratani, Shigeru

    2011-01-01

    One of the central issues in evolutionary developmental biology is how we can formulate the relationships between evolutionary and developmental processes. Two major models have been proposed: the 'funnel-like' model, in which the earliest embryo shows the most conserved morphological pattern, followed by diversifying later stages, and the 'hourglass' model, in which constraints are imposed to conserve organogenesis stages, which is called the phylotypic period. Here we perform a quantitative comparative transcriptome analysis of several model vertebrate embryos and show that the pharyngula stage is most conserved, whereas earlier and later stages are rather divergent. These results allow us to predict approximate developmental timetables between different species, and indicate that pharyngula embryos have the most conserved gene expression profiles, which may be the source of the basic body plan of vertebrates. PMID:21427719

  8. Dispersion Modeling Using Ensemble Forecasts Compared to ETEX Measurements.

    NASA Astrophysics Data System (ADS)

    Straume, Anne Grete; N'dri Koffi, Ernest; Nodop, Katrin

    1998-11-01

    Numerous numerical models have been developed to predict the long-range transport of hazardous air pollution in connection with accidental releases. When evaluating and improving such a model, it is important to detect uncertainties connected to the meteorological input data. A Lagrangian dispersion model, the Severe Nuclear Accident Program, is used here to investigate the effect of errors in the meteorological input data due to analysis error. An ensemble forecast, produced at the European Centre for Medium-Range Weather Forecasts, is then used as model input. The ensemble forecast members are generated by perturbing the initial meteorological fields of the weather forecast. The perturbations are calculated from singular vectors meant to represent possible forecast developments generated by instabilities in the atmospheric flow during the early part of the forecast. The instabilities are generated by errors in the analyzed fields. Puff predictions from the dispersion model, using ensemble forecast input, are compared, and a large spread in the predicted puff evolutions is found. This shows that the quality of the meteorological input data is important for the success of the dispersion model. In order to evaluate the dispersion model, the calculations are compared with measurements from the European Tracer Experiment. The model predicts the measured puff evolution, in terms of shape and time of arrival, fairly well up to 60 h after the start of the release. The modeled puff is still too narrow in the advection direction.

  9. Policy Research Challenges in Comparing Care Models for Dual-Eligible Beneficiaries.

    PubMed

    Van Cleave, Janet H; Egleston, Brian L; Brosch, Sarah; Wirth, Elizabeth; Lawson, Molly; Sullivan-Marx, Eileen M; Naylor, Mary D

    2017-05-01

    Providing affordable, high-quality care for the 10 million persons who are dual-eligible beneficiaries of Medicare and Medicaid is an ongoing health-care policy challenge in the United States. However, the workforce and the care provided to dual-eligible beneficiaries are understudied. The purpose of this article is to provide a narrative of the challenges and lessons learned from an exploratory study in the use of clinical and administrative data to compare the workforce of two care models that deliver home- and community-based services to dual-eligible beneficiaries. The research challenges that the study team encountered were as follows: (a) comparing different care models, (b) standardizing data across care models, and (c) comparing patterns of health-care utilization. The methods used to meet these challenges included expert opinion to classify data and summative content analysis to compare and count data. Using descriptive statistics, a summary comparison of the two care models suggested that the coordinated care model workforce provided significantly greater hours of care per recipient than the integrated care model workforce. This likely represented the coordinated care model's focus on providing in-home services for one recipient, whereas the integrated care model focused on providing services in a day center with group activities. The lesson learned from this exploratory study is the need for standardized quality measures across home- and community-based services agencies to determine the workforce that best meets the needs of dual-eligible beneficiaries.

  10. A Comparative Analysis on Models of Higher Education Massification

    ERIC Educational Resources Information Center

    Pan, Maoyuan; Luo, Dan

    2008-01-01

    Four financial models of the massification of higher education are discussed in this essay: the American model, the Western European model, the Southeast Asian and Latin American model, and the transition-countries model. The comparison of the four models leads to the conclusion that taking advantage of nongovernmental funding is fundamental to dealing…

  11. Comparative Study on the Prediction of Aerodynamic Characteristics of Aircraft with Turbulence Models

    NASA Astrophysics Data System (ADS)

    Jang, Yujin; Huh, Jinbum; Lee, Namhun; Lee, Seungsoo; Park, Youngmin

    2018-04-01

    The RANS equations are widely used to analyze complex flows over aircraft. The equations require a turbulence model for turbulent flow analyses, and a suitable turbulence model must be selected for accurate predictions of aircraft aerodynamic characteristics. In this study, numerical analyses of three-dimensional aircraft are performed to compare the results of various turbulence models for the prediction of aircraft aerodynamic characteristics. A 3-D RANS solver, MSAPv, is used for the aerodynamic analysis. The four turbulence models compared are the Spalart-Allmaras (SA) model, Coakley's q-ω model, Huang and Coakley's k-ɛ model, and Menter's k-ω SST model. Four aircraft are considered: the ARA-M100, the DLR-F6 wing-body, the DLR-F6 wing-body-nacelle-pylon from the Second Drag Prediction Workshop, and a high-wing aircraft with nacelles. The CFD results are compared with experimental data and other published computational results. The details of separation patterns, shock positions, and Cp distributions are discussed to characterize the turbulence models.

  12. Color model comparative analysis for breast cancer diagnosis using H and E stained images

    NASA Astrophysics Data System (ADS)

    Li, Xingyu; Plataniotis, Konstantinos N.

    2015-03-01

    Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and classify color histopathology images. Unlike grayscale image analysis in magnetic resonance imaging or X-ray, colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Although color information is widely used in histopathology work, to date there are few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and a stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in the evaluation. The evaluation methodologies include feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. Regarding the reasons behind the varying performance of the color spaces, our analysis via mutual information estimation demonstrates that the color components in the H and E model are less dependent, and thus most feature discriminative power is concentrated in one channel instead of being spread among channels as in other color spaces.
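
    The color models compared here are all one-line conversions in scikit-image, including a stain-dependent decomposition: rgb2hed implements hematoxylin-eosin-DAB color deconvolution. The random tile below is a stand-in for a real H and E stained image.

      # The compared color models as scikit-image conversions; rgb2hed performs
      # the stain-dependent H and E (plus DAB) deconvolution (random tile used
      # as a stand-in for a real H and E stained image).
      import numpy as np
      from skimage.color import rgb2hed, rgb2hsv, rgb2lab

      rgb = np.random.default_rng(8).uniform(0, 1, size=(64, 64, 3))

      hsv = rgb2hsv(rgb)              # hue / saturation / value
      lab = rgb2lab(rgb)              # CIE L*a*b*
      hed = rgb2hed(rgb)              # hematoxylin / eosin / DAB channels

      hematoxylin = hed[..., 0]       # features would be extracted per channel
      print(hsv.shape, lab.shape, hed.shape, float(hematoxylin.mean()))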

  13. Analysis of model output and science data in the Virtual Model Repository (VMR).

    NASA Astrophysics Data System (ADS)

    De Zeeuw, D.; Ridley, A. J.

    2014-12-01

    Big scientific data includes not only large repositories of data from scientific platforms such as satellites and ground observation, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through relevant metadata, and larger collections of runs can now also be studied, with statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to look at overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. The methodology for this analysis as well as case studies will be presented.

  14. Ecological Footprint and Ecosystem Services Models: A Comparative Analysis of Environmental Carrying Capacity Calculation Approach in Indonesia

    NASA Astrophysics Data System (ADS)

    Subekti, R. M.; Suroso, D. S. A.

    2018-05-01

    Calculation of environmental carrying capacity can be done using various approaches. The selection of an appropriate approach determines the success of establishing and applying environmental carrying capacity. This study aimed to compare the ecological footprint approach and the ecosystem services approach for calculating environmental carrying capacity. It attempts to describe two relatively new models that require further explanation if they are to be used to calculate environmental carrying capacity. In their application, attention needs to be paid to their respective advantages and weaknesses. Conceptually, the ecological footprint model is more complete than the ecosystem services model, because it describes both the supply and demand of resources, including the supportive and assimilative capacity of the environment, and provides measurable output through a resource consumption threshold. However, this model also has weaknesses, such as not considering technological change or resources beneath the earth’s surface, as well as the requirement for trade data between regions for calculations at the provincial and district levels. The ecosystem services model also has advantages, such as being in line with strategic environmental assessment (SEA) of ecosystem services, using spatial analysis based on ecoregions, and having a draft regulation on calculation guidelines formulated by the government. Its weaknesses are that it only describes the supply of resources, that the assessment of the different types of ecosystem services by experts tends to be subjective, and that the output of the calculation lacks a resource consumption threshold.

  15. A comparative analysis of three vector-borne diseases across Australia using seasonal and meteorological models

    PubMed Central

    Stratton, Margaret D.; Ehrlich, Hanna Y.; Mor, Siobhan M.; Naumova, Elena N.

    2017-01-01

    Ross River virus (RRV), Barmah Forest virus (BFV), and dengue are three common mosquito-borne diseases in Australia that display notable seasonal patterns. Although all three diseases have been modeled on localized scales, no previous study has used harmonic models to compare seasonality of mosquito-borne diseases on a continent-wide scale. We fit Poisson harmonic regression models to surveillance data on RRV, BFV, and dengue (from 1993, 1995 and 1991, respectively, through 2015) incorporating seasonal, trend, and climate (temperature and rainfall) parameters. The models captured an average of 50–65% variability of the data. Disease incidence for all three diseases generally peaked in January or February, but peak timing was most variable for dengue. The most significant predictor parameters were trend and inter-annual periodicity for BFV, intra-annual periodicity for RRV, and trend for dengue. We found that a Temperature Suitability Index (TSI), designed to reclassify climate data relative to optimal conditions for vector establishment, could be applied to this context. Finally, we extrapolated our models to estimate the impact of a false-positive BFV epidemic in 2013. Creating these models and comparing variations in periodicities may provide insight into historical outbreaks as well as future patterns of mosquito-borne diseases. PMID:28071683
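
    As a sketch of the modeling approach described above, the snippet below fits a Poisson harmonic regression (annual sine/cosine terms plus a linear trend) to synthetic monthly counts with statsmodels; the data and coefficients are invented for illustration, not taken from the paper.

    ```python
    import numpy as np
    import statsmodels.api as sm

    t = np.arange(120)                         # ten years of monthly counts
    rng = np.random.default_rng(1)
    rate = np.exp(1.5 + 0.002 * t + 0.8 * np.sin(2 * np.pi * t / 12))
    cases = rng.poisson(rate)                  # simulated surveillance data

    X = sm.add_constant(np.column_stack([
        t,                                     # long-term trend
        np.sin(2 * np.pi * t / 12),            # intra-annual periodicity
        np.cos(2 * np.pi * t / 12),
    ]))
    fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
    print(fit.params)                          # peak timing follows from
                                               # atan2(beta_sin, beta_cos)
    ```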

  16. A comparative analysis of three vector-borne diseases across Australia using seasonal and meteorological models.

    PubMed

    Stratton, Margaret D; Ehrlich, Hanna Y; Mor, Siobhan M; Naumova, Elena N

    2017-01-10

    Ross River virus (RRV), Barmah Forest virus (BFV), and dengue are three common mosquito-borne diseases in Australia that display notable seasonal patterns. Although all three diseases have been modeled on localized scales, no previous study has used harmonic models to compare seasonality of mosquito-borne diseases on a continent-wide scale. We fit Poisson harmonic regression models to surveillance data on RRV, BFV, and dengue (from 1993, 1995 and 1991, respectively, through 2015) incorporating seasonal, trend, and climate (temperature and rainfall) parameters. The models captured an average of 50-65% variability of the data. Disease incidence for all three diseases generally peaked in January or February, but peak timing was most variable for dengue. The most significant predictor parameters were trend and inter-annual periodicity for BFV, intra-annual periodicity for RRV, and trend for dengue. We found that a Temperature Suitability Index (TSI), designed to reclassify climate data relative to optimal conditions for vector establishment, could be applied to this context. Finally, we extrapolated our models to estimate the impact of a false-positive BFV epidemic in 2013. Creating these models and comparing variations in periodicities may provide insight into historical outbreaks as well as future patterns of mosquito-borne diseases.

  17. A comparative analysis of three vector-borne diseases across Australia using seasonal and meteorological models

    NASA Astrophysics Data System (ADS)

    Stratton, Margaret D.; Ehrlich, Hanna Y.; Mor, Siobhan M.; Naumova, Elena N.

    2017-01-01

    Ross River virus (RRV), Barmah Forest virus (BFV), and dengue are three common mosquito-borne diseases in Australia that display notable seasonal patterns. Although all three diseases have been modeled on localized scales, no previous study has used harmonic models to compare seasonality of mosquito-borne diseases on a continent-wide scale. We fit Poisson harmonic regression models to surveillance data on RRV, BFV, and dengue (from 1993, 1995 and 1991, respectively, through 2015) incorporating seasonal, trend, and climate (temperature and rainfall) parameters. The models captured an average of 50-65% variability of the data. Disease incidence for all three diseases generally peaked in January or February, but peak timing was most variable for dengue. The most significant predictor parameters were trend and inter-annual periodicity for BFV, intra-annual periodicity for RRV, and trend for dengue. We found that a Temperature Suitability Index (TSI), designed to reclassify climate data relative to optimal conditions for vector establishment, could be applied to this context. Finally, we extrapolated our models to estimate the impact of a false-positive BFV epidemic in 2013. Creating these models and comparing variations in periodicities may provide insight into historical outbreaks as well as future patterns of mosquito-borne diseases.

  18. Comparing Families of Dynamic Causal Models

    PubMed Central

    Penny, Will D.; Stephan, Klaas E.; Daunizeau, Jean; Rosa, Maria J.; Friston, Karl J.; Schofield, Thomas M.; Leff, Alex P.

    2010-01-01

    Mathematical models of scientific data can be formally compared using Bayesian model evidence. Previous applications in the biological sciences have mainly focussed on model selection in which one first selects the model with the highest evidence and then makes inferences based on the parameters of that model. This “best model” approach is very useful but can become brittle if there are a large number of models to compare, and if different subjects use different models. To overcome this shortcoming we propose the combination of two further approaches: (i) family level inference and (ii) Bayesian model averaging within families. Family level inference removes uncertainty about aspects of model structure other than the characteristic of interest. For example: What are the inputs to the system? Is processing serial or parallel? Is it linear or nonlinear? Is it mediated by a single, crucial connection? We apply Bayesian model averaging within families to provide inferences about parameters that are independent of further assumptions about model structure. We illustrate the methods using Dynamic Causal Models of brain imaging data. PMID:20300649
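
    The family-level step can be sketched in a few lines: convert per-model log-evidences into posterior model probabilities (under a uniform prior) and sum them within each family. The model names and evidence values below are hypothetical.

    ```python
    import numpy as np

    log_evidence = {"serial_linear": -410.2, "serial_nonlinear": -408.9,
                    "parallel_linear": -405.1, "parallel_nonlinear": -404.7}
    families = {"serial": ["serial_linear", "serial_nonlinear"],
                "parallel": ["parallel_linear", "parallel_nonlinear"]}

    logs = np.array(list(log_evidence.values()))
    post = np.exp(logs - logs.max())           # stabilized exponentiation
    post /= post.sum()                         # posterior probability per model
    p = dict(zip(log_evidence, post))

    for fam, members in families.items():      # marginalize over within-family
        print(fam, sum(p[m] for m in members)) # differences in structure
    ```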

  19. Beauty and the beast: Some perspectives on efficient model analysis, surrogate models, and the future of modeling

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.

    2015-12-01

    For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and finer time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs, divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal the consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical
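
    The computational-demand relation stated above is simple arithmetic; here is a worked example with hypothetical numbers.

    ```python
    # demand = runtime x number of runs / parallelization opportunities
    runtime_hours = 6.0      # one run of the demanding original model
    runs = 75                # "about 50-100 runs" quoted above
    workers = 10             # parallel workers available

    print(runtime_hours * runs / workers, "hours of wall-clock demand")  # 45.0
    ```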

  20. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins

    PubMed Central

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    Aim: This study was conducted to find the best suited freely available software for the modelling of proteins by taking a few sample proteins. The proteins used ranged from small to large in size, with available crystal structures for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-V2, and Modweb were used for the comparison and model generation. Results: The benchmarking process was done for four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to identify the most suitable software. Parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. Conclusion: This comparative study showed that Phyre2 and Swiss-Model make good models of small and large proteins compared to the other screened software. The other software packages were also good but often not very efficient in providing full-length and properly folded structures. PMID:24023424

  1. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins.

    PubMed

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    This study was conducted to find the best suited freely available software for the modelling of proteins by taking a few sample proteins. The proteins used ranged from small to large in size, with available crystal structures for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-V2, and Modweb were used for the comparison and model generation. The benchmarking process was done for four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to identify the most suitable software. Parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. This comparative study showed that Phyre2 and Swiss-Model make good models of small and large proteins compared to the other screened software. The other software packages were also good but often not very efficient in providing full-length and properly folded structures.

  2. Comparative Analysis and Modeling of the Severity of Steatohepatitis in DDC-Treated Mouse Strains

    PubMed Central

    Pandey, Vikash; Sultan, Marc; Kashofer, Karl; Ralser, Meryem; Amstislavskiy, Vyacheslav; Starmann, Julia; Osprian, Ingrid; Grimm, Christina; Hache, Hendrik; Yaspo, Marie-Laure; Sültmann, Holger; Trauner, Michael; Denk, Helmut; Zatloukal, Kurt; Lehrach, Hans; Wierling, Christoph

    2014-01-01

    Background: Non-alcoholic fatty liver disease (NAFLD) has a broad spectrum of disease states, ranging from mild steatosis characterized by an abnormal retention of lipids within liver cells to steatohepatitis (NASH) showing fat accumulation, inflammation, ballooning and degradation of hepatocytes, and fibrosis. Ultimately, steatohepatitis can result in liver cirrhosis and hepatocellular carcinoma. Methodology and Results: In this study we analyzed three different mouse strains, A/J, C57BL/6J, and PWD/PhJ, that show different degrees of steatohepatitis when administered a 3,5-diethoxycarbonyl-1,4-dihydrocollidine (DDC) containing diet. RNA-Seq gene expression analysis, protein analysis and metabolic profiling were applied to identify differentially expressed genes/proteins and perturbed metabolite levels in mouse liver samples upon DDC treatment. Pathway analysis revealed alteration of arachidonic acid (AA) and S-adenosylmethionine (SAMe) metabolism, among other pathways. To understand the metabolic changes of arachidonic acid metabolism in the light of the disease expression profiles, a kinetic model of this pathway was developed and optimized according to metabolite levels. Subsequently, the model was used to study in silico effects of potential drug targets for steatohepatitis. Conclusions: We identified AA/eicosanoid metabolism as highly perturbed in DDC-induced mice using a combination of an experimental and in silico approach. Our analysis of the AA/eicosanoid metabolic pathway suggests that 5-hydroxyeicosatetraenoic acid (5-HETE), 15-hydroxyeicosatetraenoic acid (15-HETE) and prostaglandin D2 (PGD2) are perturbed in DDC mice. We further demonstrate that a dynamic model can be used for qualitative prediction of metabolic changes based on transcriptomics data in a disease-related context. Furthermore, SAMe metabolism was identified as being perturbed due to DDC treatment. Several genes as well as some metabolites of this module show differences between A/J and C57BL/6J

  3. Comparative analysis and modeling of the severity of steatohepatitis in DDC-treated mouse strains.

    PubMed

    Pandey, Vikash; Sultan, Marc; Kashofer, Karl; Ralser, Meryem; Amstislavskiy, Vyacheslav; Starmann, Julia; Osprian, Ingrid; Grimm, Christina; Hache, Hendrik; Yaspo, Marie-Laure; Sültmann, Holger; Trauner, Michael; Denk, Helmut; Zatloukal, Kurt; Lehrach, Hans; Wierling, Christoph

    2014-01-01

    Non-alcoholic fatty liver disease (NAFLD) has a broad spectrum of disease states, ranging from mild steatosis characterized by an abnormal retention of lipids within liver cells to steatohepatitis (NASH) showing fat accumulation, inflammation, ballooning and degradation of hepatocytes, and fibrosis. Ultimately, steatohepatitis can result in liver cirrhosis and hepatocellular carcinoma. In this study we analyzed three different mouse strains, A/J, C57BL/6J, and PWD/PhJ, that show different degrees of steatohepatitis when administered a 3,5-diethoxycarbonyl-1,4-dihydrocollidine (DDC) containing diet. RNA-Seq gene expression analysis, protein analysis and metabolic profiling were applied to identify differentially expressed genes/proteins and perturbed metabolite levels in mouse liver samples upon DDC treatment. Pathway analysis revealed alteration of arachidonic acid (AA) and S-adenosylmethionine (SAMe) metabolism, among other pathways. To understand the metabolic changes of arachidonic acid metabolism in the light of the disease expression profiles, a kinetic model of this pathway was developed and optimized according to metabolite levels. Subsequently, the model was used to study in silico effects of potential drug targets for steatohepatitis. We identified AA/eicosanoid metabolism as highly perturbed in DDC-induced mice using a combination of an experimental and in silico approach. Our analysis of the AA/eicosanoid metabolic pathway suggests that 5-hydroxyeicosatetraenoic acid (5-HETE), 15-hydroxyeicosatetraenoic acid (15-HETE) and prostaglandin D2 (PGD2) are perturbed in DDC mice. We further demonstrate that a dynamic model can be used for qualitative prediction of metabolic changes based on transcriptomics data in a disease-related context. Furthermore, SAMe metabolism was identified as being perturbed due to DDC treatment. Several genes as well as some metabolites of this module show differences between A/J and C57BL/6J on the one hand and PWD/PhJ on the other.
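
    The kinetic-modeling idea can be sketched with a toy mass-action model of the AA branch point, integrated with SciPy; the rate constants and inhibition scenario are hypothetical, and this is not the authors' published model.

    ```python
    from scipy.integrate import solve_ivp

    def branch(t, y, k1, k2, k3, source):
        aa, hete, pgd2 = y
        return [source - (k1 + k2) * aa,   # AA feeds two competing branches
                k1 * aa - k3 * hete,       # 5-HETE production and clearance
                k2 * aa - k3 * pgd2]       # PGD2 production and clearance

    for k1 in (1.0, 0.25):                 # baseline vs. inhibited 5-HETE branch
        sol = solve_ivp(branch, (0, 50), [0.5, 0.0, 0.0],
                        args=(k1, 1.0, 0.3, 0.5))
        print(f"k1={k1}: 5-HETE={sol.y[1, -1]:.2f}, PGD2={sol.y[2, -1]:.2f}")
    ```

    Inhibiting the first branch reroutes flux to the second, which is the kind of qualitative in silico prediction the abstract describes.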

  4. Enabling comparative modeling of closely related genomes: Example genus Brucella

    DOE PAGES

    Faria, José P.; Edirisinghe, Janaka N.; Davis, James J.; ...

    2014-03-08

    For many scientific applications, it is highly desirable to be able to compare metabolic models of closely related genomes. In this study, we attempt to raise awareness of the fact that taking annotated genomes from public repositories and using them for metabolic model reconstructions is far from trivial due to annotation inconsistencies. We propose a protocol for comparative analysis of metabolic models on closely related genomes, using fifteen strains of the genus Brucella, which contains pathogens of both humans and livestock. This study led to the identification and subsequent correction of inconsistent annotations in the SEED database, as well as the identification of 31 biochemical reactions that are common to Brucella but not originally identified by automated metabolic reconstructions. We are currently implementing this protocol for improving automated annotations within the SEED database, and these improvements have been propagated into PATRIC, Model-SEED, KBase and RAST. This method is an enabling step for the future creation of consistent annotation systems and high-quality model reconstructions that will support the prediction of accurate phenotypes such as pathogenicity, media requirements or type of respiration.

  5. Enabling comparative modeling of closely related genomes: Example genus Brucella

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faria, José P.; Edirisinghe, Janaka N.; Davis, James J.

    For many scientific applications, it is highly desirable to be able to compare metabolic models of closely related genomes. In this study, we attempt to raise awareness of the fact that taking annotated genomes from public repositories and using them for metabolic model reconstructions is far from trivial due to annotation inconsistencies. We propose a protocol for comparative analysis of metabolic models on closely related genomes, using fifteen strains of the genus Brucella, which contains pathogens of both humans and livestock. This study led to the identification and subsequent correction of inconsistent annotations in the SEED database, as well as the identification of 31 biochemical reactions that are common to Brucella but not originally identified by automated metabolic reconstructions. We are currently implementing this protocol for improving automated annotations within the SEED database, and these improvements have been propagated into PATRIC, Model-SEED, KBase and RAST. This method is an enabling step for the future creation of consistent annotation systems and high-quality model reconstructions that will support the prediction of accurate phenotypes such as pathogenicity, media requirements or type of respiration.

  6. Using structural equation modeling for network meta-analysis.

    PubMed

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

    Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to place linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains the results of 26 studies that directly compared three treatment groups A, B and C for the prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the technique of the unrestricted weighted least squares (UWLS) method can be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those of the fixed effect model, but the confidence intervals were greater. This is consistent with results from the traditional pairwise meta-analyses. Compared to the UWLS model with a common variance adjustment factor, the UWLS model with unique variance adjustment factors has greater confidence intervals when the heterogeneity is larger in the pairwise comparison. The UWLS model with unique variance adjustment factors reflects the difference in heterogeneity within each comparison
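
    For orientation, a fixed-effect network meta-analysis reduces to weighted least squares on a design matrix of treatment contrasts. The sketch below uses three invented studies comparing treatments A, B and C; it illustrates the estimator, not the paper's SEM implementation.

    ```python
    import numpy as np

    # Columns: effects of B and C relative to reference A.
    X = np.array([[1, 0],     # study comparing A vs B
                  [0, 1],     # study comparing A vs C
                  [-1, 1]])   # study comparing B vs C (C minus B)
    y = np.array([0.30, 0.55, 0.20])    # observed effects (e.g., log odds ratios)
    se = np.array([0.12, 0.15, 0.20])   # standard errors

    W = np.diag(1 / se**2)              # inverse-variance weights
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    cov = np.linalg.inv(X.T @ W @ X)
    print("d_AB, d_AC:", beta, "SEs:", np.sqrt(np.diag(cov)))
    ```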

  7. Comparative modeling without implicit sequence alignments.

    PubMed

    Kolinski, Andrzej; Gront, Dominik

    2007-10-01

    The number of known protein sequences is about a thousand times larger than the number of experimentally solved 3D structures. For more than half of the protein sequences a close or distant structural analog can be identified. The key starting point in classical comparative modeling is to generate the best possible sequence alignment with a template or templates. With decreasing sequence similarity, the number of errors in the alignments increases, and these errors are the main cause of the decreasing accuracy of the molecular models generated. Here we propose a new approach to comparative modeling which does not require the implicit alignment - the model building phase explores geometric, evolutionary and physical properties of a template (or templates). The proposed method requires prior identification of a template, although the initial sequence alignment is ignored. The model is built using a very efficient reduced-representation search engine, CABS, to find the best possible superposition of the query protein onto the template represented as a 3D multi-featured scaffold. The criteria used include sequence similarity, predicted secondary structure consistency, local geometric features and hydrophobicity profile. For more difficult cases, the new method qualitatively outperforms existing schemes of comparative modeling. The algorithm unifies de novo modeling, 3D threading and sequence-based methods. The main idea is general and could easily be combined with other efficient modeling tools such as Rosetta, UNRES and others.

  8. Structural modelling and comparative analysis of homologous, analogous and specific proteins from Trypanosoma cruzi versus Homo sapiens: putative drug targets for chagas' disease treatment.

    PubMed

    Capriles, Priscila V S Z; Guimarães, Ana C R; Otto, Thomas D; Miranda, Antonio B; Dardenne, Laurent E; Degrave, Wim M

    2010-10-29

    Trypanosoma cruzi is the etiological agent of Chagas' disease, an endemic infection that causes thousands of deaths every year in Latin America. Therapeutic options remain inefficient, demanding the search for new drugs and/or new molecular targets. Such efforts can focus on proteins that are specific to the parasite, but analogous enzymes and enzymes with a three-dimensional (3D) structure sufficiently different from the corresponding host proteins may represent equally interesting targets. In order to find these targets we used the workflows MHOLline and AnEnΠ, obtaining 3D models from homologous, analogous and specific proteins of Trypanosoma cruzi versus Homo sapiens. We applied genome wide comparative modelling techniques to obtain 3D models for 3,286 predicted proteins of T. cruzi. In combination with comparative genome analysis to Homo sapiens, we were able to identify a subset of 397 enzyme sequences, of which 356 are homologous, 3 analogous and 38 specific to the parasite. In this work, we present a set of 397 enzyme models of T. cruzi that can constitute potential structure-based drug targets to be investigated for the development of new strategies to fight Chagas' disease. The strategies presented here support the concept of structural analysis in conjunction with protein functional analysis as an interesting computational methodology to detect potential targets for structure-based rational drug design. For example, 2,4-dienoyl-CoA reductase (EC 1.3.1.34) and triacylglycerol lipase (EC 3.1.1.3), classified as analogous proteins in relation to H. sapiens enzymes, were identified as new potential molecular targets.

  9. Modeling and Analysis of Space Based Transceivers

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Liebetreu, John; Moore, Michael S.; Price, Jeremy C.; Abbott, Ben

    2005-01-01

    This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.

  10. Modeling and Analysis of Space Based Transceivers

    NASA Technical Reports Server (NTRS)

    Moore, Michael S.; Price, Jeremy C.; Abbott, Ben; Liebetreu, John; Reinhart, Richard C.; Kacpura, Thomas J.

    2007-01-01

    This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.

  11. Comparing model-based adaptive LMS filters and a model-free hysteresis loop analysis method for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Zhou, Cong; Chase, J. Geoffrey; Rodgers, Geoffrey W.; Xu, Chao

    2017-02-01

    The model-free hysteresis loop analysis (HLA) method for structural health monitoring (SHM) has significant advantages over traditional model-based SHM methods, which require a suitable baseline model to represent the actual system response. This paper provides a unique validation against both an experimental reinforced concrete (RC) building and a calibrated numerical model to delineate the capability of the model-free HLA method and the adaptive least mean squares (LMS) model-based method in detecting, localizing and quantifying damage that may not be visible or observable in the overall structural response. Results clearly show that the model-free HLA method is capable of adapting to changes in how structures transfer load or demand across structural elements over time and over multiple events of different size. The adaptive LMS model-based method, however, presented an image of a greater spread of lesser damage over time and story when the baseline model was not well defined. Finally, the two algorithms are tested on a steel structure with simpler, more typical hysteretic behaviour to quantify the impact of model mismatch between the baseline model used for identification and the actual response. The overall results highlight the need for model-based methods to have an appropriate model that can capture the observed response in order to yield accurate results, even in small events where the structure remains linear.
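
    A generic version of the adaptive LMS idea is easy to state in code: a linear model's weights are updated online from the prediction error, so drifting structural parameters (e.g., a stiffness loss) can be tracked. The signals and step size below are synthetic, and this is not the paper's implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, mu = 2000, 0.05                    # samples and LMS step size
    x = rng.standard_normal((n, 2))       # regressors (e.g., displacement, velocity)
    theta = np.array([2.0, 0.5]) + np.outer(np.linspace(0, 1, n), [-0.5, 0.0])
    d = np.einsum("ij,ij->i", x, theta) + 0.05 * rng.standard_normal(n)

    w = np.zeros(2)                       # adaptive parameter estimate
    for k in range(n):
        e = d[k] - x[k] @ w               # prediction error
        w += mu * e * x[k]                # LMS update
    print("final estimate:", w, "true final:", theta[-1])
    ```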

  12. Comparing Results from Constant Comparative and Computer Software Methods: A Reflection about Qualitative Data Analysis

    ERIC Educational Resources Information Center

    Putten, Jim Vander; Nolen, Amanda L.

    2010-01-01

    This study compared qualitative research results obtained by manual constant comparative analysis with results obtained by computer software analysis of the same data. An investigation of issues of trustworthiness and accuracy ensued. Results indicated that the inductive constant comparative data analysis generated 51 codes and two coding levels…

  13. Comparing Supply-Side Specifications in Models of Global Agriculture and the Food System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Sherman; van Meijl, Hans; Willenbockel, Dirk

    This paper compares the theoretical specification of production and technical change across the partial equilibrium (PE) and computable general equilibrium (CGE) models of the global agricultural and food system included in the AgMIP model comparison study. The two modeling approaches have different theoretical underpinnings concerning the scope of economic activity they capture and how they represent technology and the behavior of supply and demand in markets. This paper focuses on their different specifications of technology and supply behavior, comparing their theoretical and empirical treatments. While the models differ widely in their specifications of technology, both within and between the PE and CGE classes of models, we find that the theoretical responsiveness of supply to changes in prices can be similar, depending on parameter choices that define the behavior of supply functions over the domain of applicability defined by the common scenarios used in the AgMIP comparisons. In particular, we compare the theoretical specification of supply in CGE models with neoclassical production functions and PE models that focus on land and crop yields in agriculture. In practice, however, comparability of results given parameter choices is an empirical question, and the models differ in their sensitivity to variations in specification. To illustrate the issues, sensitivity analysis is done with one global CGE model, MAGNET, to indicate how the results vary with different specifications of technical change, and how they compare with the results from PE models.

  14. Reliability of four models for clinical gait analysis.

    PubMed

    Kainz, Hans; Graham, David; Edwards, Julie; Walsh, Henry P J; Maine, Sheanna; Boyd, Roslyn N; Lloyd, David G; Modenese, Luca; Carty, Christopher P

    2017-05-01

    Three-dimensional gait analysis (3DGA) has become a common clinical tool for treatment planning in children with cerebral palsy (CP). Many clinical gait laboratories use the conventional gait analysis model (e.g. the Plug-in-Gait model), which uses Direct Kinematics (DK) for joint kinematic calculations, whereas musculoskeletal models, mainly used for research, use Inverse Kinematics (IK). Musculoskeletal IK models have the advantage of enabling additional analyses which might improve clinical decision-making in children with CP. Before any new model can be used in a clinical setting, its reliability has to be evaluated and compared to a commonly used clinical gait model (e.g. the Plug-in-Gait model), which was the purpose of this study. Two testers performed 3DGA in eleven CP and seven typically developing participants on two occasions. Intra- and inter-tester standard deviations (SD) and standard error of measurement (SEM) were used to compare the reliability of two DK models (Plug-in-Gait and a six degrees-of-freedom model solved using Vicon software) and two IK models (two modifications of 'gait2392' solved using OpenSim). All models showed good reliability (mean SEM of 3.0° over all analysed models and joint angles). Variations in joint kinetics were smaller in typically developing than in CP participants. The modified 'gait2392' model, which included all the joint rotations commonly reported in clinical 3DGA, showed reasonably reliable joint kinematic and kinetic estimates, and allows additional musculoskeletal analysis of surgically adjustable parameters, e.g. muscle-tendon lengths, and is therefore a suitable model for clinical gait analysis. Copyright © 2017. Published by Elsevier B.V.

  15. In vitro, ex vivo and in vivo models: A comparative analysis of Paracoccidioides spp. proteomic studies.

    PubMed

    Parente-Rocha, Juliana Alves; Tomazett, Mariana Vieira; Pigosso, Laurine Lacerda; Bailão, Alexandre Melo; Ferreira de Souza, Aparecido; Paccez, Juliano Domiraci; Baeza, Lilian Cristiane; Pereira, Maristela; Silva Bailão, Mirelle Garcia; Borges, Clayton Luiz; Maria de Almeida Soares, Célia

    2018-06-01

    Members of the Paracoccidioides complex are human pathogens that infect different anatomic sites in the host. The ability of Paracoccidioides spp. to infect host niches is putatively supported by a wide range of virulence factors, as well as fitness attributes that may comprise the transition from mycelia/conidia to yeast cells, response to deprivation of micronutrients in the host, expression of adhesins on the cell surface, response to oxidative and nitrosative stresses, as well as the secretion of hydrolytic enzymes in the host tissue. Our understanding of how those molecules can contribute to the infection establishment has been increasing significantly, through the utilization of several models, including in vitro, ex vivo and in vivo infection in animal models. In this review we present an update of our understanding on the strategies used by the pathogen to establish infection. Our results were obtained through a comparative proteomic analysis of Paracoccidioides spp. in models of infection. Copyright © 2017 British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  16. Comparative Analysis of Hybrid Models for Prediction of BP Reactivity to Crossed Legs.

    PubMed

    Kaur, Gurmanik; Arora, Ajat Shatru; Jain, Vijender Kumar

    2017-01-01

    Crossing the legs at the knees during BP measurement is one of several physiological stimuli that considerably influence the accuracy of BP measurements. Therefore, it is paramount to develop an appropriate prediction model for interpreting the influence of crossed legs on BP. This research work describes the use of principal component analysis- (PCA-) fused forward stepwise regression (FSWR), artificial neural network (ANN), adaptive neuro fuzzy inference system (ANFIS), and least squares support vector machine (LS-SVM) models for prediction of BP reactivity to crossed legs among normotensive and hypertensive participants. Evaluation of the performance of the proposed prediction models using appropriate statistical indices showed that the PCA-based LS-SVM (PCA-LS-SVM) model has the highest prediction accuracy, with a coefficient of determination (R²) = 93.16%, root mean square error (RMSE) = 0.27, and mean absolute percentage error (MAPE) = 5.71 for SBP prediction in normotensive subjects. Furthermore, R² = 96.46%, RMSE = 0.19, and MAPE = 1.76 for SBP prediction, and R² = 95.44%, RMSE = 0.21, and MAPE = 2.78 for DBP prediction in hypertensive subjects using the PCA-LS-SVM model. This assessment demonstrates the importance and advantages of hybrid computing models for the prediction of variables in biomedical research studies.
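
    The PCA-fused pipeline can be sketched with scikit-learn. There is no LS-SVM in scikit-learn, so kernel ridge regression (closely related to LS-SVM regression) stands in, and the data are random placeholders rather than the study's measurements.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(3)
    X = rng.standard_normal((120, 8))      # placeholder physiological inputs
    y = 3 * X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(120)  # "BP reactivity"

    model = make_pipeline(StandardScaler(),
                          PCA(n_components=4),          # feature fusion
                          KernelRidge(kernel="rbf", alpha=0.5, gamma=0.2))
    model.fit(X[:90], y[:90])
    resid = model.predict(X[90:]) - y[90:]
    print("hold-out RMSE:", float(np.sqrt(np.mean(resid**2))))
    ```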

  17. Comparative Analysis of Pain Behaviours in Humanized Mouse Models of Sickle Cell Anemia

    PubMed Central

    Lei, Jianxun; Benson, Barbara; Tran, Huy; Ofori-Acquah, Solomon F.; Gupta, Kalpna

    2016-01-01

    Pain is a hallmark feature of sickle cell anemia (SCA) but management of chronic as well as acute pain remains a major challenge. Mouse models of SCA are essential to examine the mechanisms of pain and develop novel therapeutics. To facilitate this effort, we compared humanized homozygous BERK and Townes sickle mice for the effect of gender and age on pain behaviors. Similar to previously characterized BERK sickle mice, Townes sickle mice show more mechanical, thermal, and deep tissue hyperalgesia with increasing age. Female Townes sickle mice demonstrate more hyperalgesia compared to males similar to that reported for BERK mice and patients with SCA. Mechanical, thermal and deep tissue hyperalgesia increased further after hypoxia/reoxygenation (H/R) treatment in Townes sickle mice. Together, these data show BERK sickle mice exhibit a significantly greater degree of hyperalgesia for all behavioral measures as compared to gender- and age-matched Townes sickle mice. However, the genetically distinct “knock-in” strategy of human α and β transgene insertion in Townes mice as compared to BERK mice, may provide relative advantage for further genetic manipulations to examine specific mechanisms of pain. PMID:27494522

  18. Three-dimensional structure-activity relationship modeling of cocaine binding to two monoclonal antibodies by comparative molecular field analysis.

    PubMed

    Paula, Stefan; Tabet, Michael R; Keenan, Susan M; Welsh, William J; Ball, W James

    2003-01-17

    Successful immunotherapy of cocaine addiction and overdoses requires cocaine-binding antibodies with specific properties, such as high affinity and selectivity for cocaine. We have determined the affinities of two cocaine-binding murine monoclonal antibodies (mAb: clones 3P1A6 and MM0240PA) for cocaine and its metabolites by [3H]-radioligand binding assays. mAb 3P1A6 (K(d) = 0.22 nM) displayed a 50-fold higher affinity for cocaine than mAb MM0240PA (K(d) = 11 nM) and also had a greater specificity for cocaine. For the systematic exploration of both antibodies' binding specificities, we used a set of approximately 35 cocaine analogues as structural probes by determining their relative binding affinities (RBAs) using an enzyme-linked immunosorbent competition assay. Three-dimensional quantitative structure-activity relationship (3D-QSAR) models on the basis of comparative molecular field analysis (CoMFA) techniques correlated the binding data with structural features of the ligands. The analysis indicated that despite the mAbs' differing specificities for cocaine, the relative contributions of the steric (approximately 80%) and electrostatic (approximately 20%) field interactions to ligand-binding were similar. Generated three-dimensional CoMFA contour plots then located the specific regions about cocaine where the ligand/receptor interactions occurred. While the overall binding patterns of the two mAbs had many features in common, distinct differences were observed about the phenyl ring and the methylester group of cocaine. Furthermore, using previously published data, a 3D-QSAR model was developed for cocaine binding to the dopamine reuptake transporter (DAT) that was compared to the mAb models. Although the relative steric and electrostatic field contributions were similar to those of the mAbs, the DAT cocaine-binding site showed a preference for negatively charged ligands. Besides establishing molecular level insight into the interactions that govern cocaine

  19. International Space Station Model Correlation Analysis

    NASA Technical Reports Server (NTRS)

    Laible, Michael R.; Fitzpatrick, Kristin; Hodge, Jennifer; Grygier, Michael

    2018-01-01

    This paper summarizes the on-orbit structural dynamic data and the related modal analysis, model validation and correlation performed for the International Space Station (ISS) configuration ISS Stage ULF7, 2015 Dedicated Thruster Firing (DTF). The objective of this analysis is to validate and correlate the analytical models used to calculate the ISS internal dynamic loads and compare the 2015 DTF with previous tests. During the ISS configurations under consideration, on-orbit dynamic measurements were collected using the three main ISS instrumentation systems; Internal Wireless Instrumentation System (IWIS), External Wireless Instrumentation System (EWIS) and the Structural Dynamic Measurement System (SDMS). The measurements were recorded during several nominal on-orbit DTF tests on August 18, 2015. Experimental modal analyses were performed on the measured data to extract modal parameters including frequency, damping, and mode shape information. Correlation and comparisons between test and analytical frequencies and mode shapes were performed to assess the accuracy of the analytical models for the configurations under consideration. These mode shapes were also compared to earlier tests. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. In particular, results of the first fundamental mode will be discussed, nonlinear results will be shown, and accelerometer placement will be assessed.

  20. Fold assessment for comparative protein structure modeling.

    PubMed

    Melo, Francisco; Sali, Andrej

    2007-11-01

    Accurate and automated assessment of both geometrical errors and incompleteness of comparative protein structure models is necessary for an adequate use of the models. Here, we describe a composite score for discriminating between models with the correct and incorrect fold. To find an accurate composite score, we designed and applied a genetic algorithm method that searched for a most informative subset of 21 input model features as well as their optimized nonlinear transformation into the composite score. The 21 input features included various statistical potential scores, stereochemistry quality descriptors, sequence alignment scores, geometrical descriptors, and measures of protein packing. The optimized composite score was found to depend on (1) a statistical potential z-score for residue accessibilities and distances, (2) model compactness, and (3) percentage sequence identity of the alignment used to build the model. The accuracy of the composite score was compared with the accuracy of assessment by single and combined features as well as by other commonly used assessment methods. The testing set was representative of models produced by automated comparative modeling on a genomic scale. The composite score performed better than any other tested score in terms of the maximum correct classification rate (i.e., 3.3% false positives and 2.5% false negatives) as well as the sensitivity and specificity across the whole range of thresholds. The composite score was implemented in our program MODELLER-8 and was used to assess models in the MODBASE database that contains comparative models for domains in approximately 1.3 million protein sequences.
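
    The feature-subset search can be illustrated with a much simplified genetic algorithm over binary masks, scored by cross-validated classification; the data are synthetic, and the real method also optimized a nonlinear transformation of the selected features.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    X = rng.standard_normal((300, 21))            # 21 candidate model features
    y = X[:, 0] + 0.8 * X[:, 3] - X[:, 7] + 0.5 * rng.standard_normal(300) > 0

    def fitness(mask):
        if not mask.any():
            return 0.0
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    pop = rng.random((20, 21)) < 0.3              # population of feature masks
    for _ in range(15):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]   # keep the fitter half
        pairs = rng.integers(0, 10, (10, 2))      # random parent pairs
        cross = rng.random((10, 21)) < 0.5        # uniform crossover
        children = np.where(cross, parents[pairs[:, 0]], parents[pairs[:, 1]])
        children ^= rng.random(children.shape) < 0.05   # mutation
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected features:", np.flatnonzero(best))
    ```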

  1. Power laws in microrheology experiments on living cells: Comparative analysis and modeling

    NASA Astrophysics Data System (ADS)

    Balland, Martial; Desprat, Nicolas; Icard, Delphine; Féréol, Sophie; Asnacios, Atef; Browaeys, Julien; Hénon, Sylvie; Gallet, François

    2006-08-01

    We compare and synthesize the results of two microrheological experiments on the cytoskeleton of single cells. In the first one, the creep function J(t) of a cell stretched between two glass plates is measured after applying a constant force step. In the second one, a microbead specifically bound to transmembrane receptors is driven by an oscillating optical trap, and the viscoelastic coefficient Ge(ω) is retrieved. Both J(t) and Ge(ω) exhibit power law behaviors: J(t) = A0 (t/t0)^α and |Ge(ω)| = G0 (ω/ω0)^α, with the same exponent α ≈ 0.2. This power law behavior is very robust; α is distributed over a narrow range, and shows almost no dependence on the cell type, on the nature of the protein complex which transmits the mechanical stress, nor on the typical length scale of the experiment. On the contrary, the prefactors A0 and G0 appear very sensitive to these parameters. Whereas the exponents α are normally distributed over the cell population, the prefactors A0 and G0 follow a log-normal repartition. These results are compared with other data published in the literature. We propose a global interpretation, based on a semiphenomenological model, which involves a broad distribution of relaxation times in the system. The model predicts the power law behavior and the statistical repartition of the mechanical parameters, as experimentally observed for the cells. Moreover, it leads to an estimate of the largest response time in the cytoskeletal network: τm ≈ 1000 s.
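
    Fitting the creep power law J(t) = A0 (t/t0)^α is a short exercise with SciPy; the synthetic data below merely illustrate the procedure, with measured creep values taking the place of `J_obs` in practice.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    t0 = 1.0                               # reference time (convention)
    def creep(t, A0, alpha):
        return A0 * (t / t0) ** alpha

    t = np.logspace(-1, 2, 40)             # seconds
    rng = np.random.default_rng(5)
    J_obs = creep(t, 2e-3, 0.2) * rng.lognormal(0.0, 0.05, t.size)

    (A0, alpha), _ = curve_fit(creep, t, J_obs, p0=(1e-3, 0.3))
    print(f"A0 = {A0:.2e}, alpha = {alpha:.2f}")   # alpha close to 0.2
    ```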

  2. Power laws in microrheology experiments on living cells: Comparative analysis and modeling.

    PubMed

    Balland, Martial; Desprat, Nicolas; Icard, Delphine; Féréol, Sophie; Asnacios, Atef; Browaeys, Julien; Hénon, Sylvie; Gallet, François

    2006-08-01

    We compare and synthesize the results of two microrheological experiments on the cytoskeleton of single cells. In the first one, the creep function J(t) of a cell stretched between two glass plates is measured after applying a constant force step. In the second one, a microbead specifically bound to transmembrane receptors is driven by an oscillating optical trap, and the viscoelastic coefficient Ge(ω) is retrieved. Both J(t) and Ge(ω) exhibit power law behaviors: J(t) = A0 (t/t0)^α and |Ge(ω)| = G0 (ω/ω0)^α, with the same exponent α ≈ 0.2. This power law behavior is very robust; α is distributed over a narrow range, and shows almost no dependence on the cell type, on the nature of the protein complex which transmits the mechanical stress, nor on the typical length scale of the experiment. On the contrary, the prefactors A0 and G0 appear very sensitive to these parameters. Whereas the exponents α are normally distributed over the cell population, the prefactors A0 and G0 follow a log-normal repartition. These results are compared with other data published in the literature. We propose a global interpretation, based on a semiphenomenological model, which involves a broad distribution of relaxation times in the system. The model predicts the power law behavior and the statistical repartition of the mechanical parameters, as experimentally observed for the cells. Moreover, it leads to an estimate of the largest response time in the cytoskeletal network: τm ≈ 1000 s.

  3. A comparative analysis of errors in long-term econometric forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tepel, R.

    1986-04-01

    The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post, that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables, and compare forecasts from different sources. Most descriptive studies use an ex ante approach, that is, they evaluate model outputs based on estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine if systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.

  4. Protein Dynamics from NMR: The Slowly Relaxing Local Structure Analysis Compared with Model-Free Analysis

    PubMed Central

    Meirovitch, Eva; Shapiro, Yury E.; Polimeno, Antonino; Freed, Jack H.

    2009-01-01

    15N-1H spin relaxation is a powerful method for deriving information on protein dynamics. The traditional method of data analysis is model-free (MF), where the global and local N-H motions are independent and the local geometry is simplified. The common MF analysis consists of fitting single-field data. The results are typically field-dependent, and multi-field data cannot be fit with standard fitting schemes. Cases where known functional dynamics has not been detected by MF were identified by us and others. Recently we applied to spin relaxation in proteins the Slowly Relaxing Local Structure (SRLS) approach which accounts rigorously for mode-mixing and general features of local geometry. SRLS was shown to yield MF in appropriate asymptotic limits. We found that the experimental spectral density corresponds quite well to the SRLS spectral density. The MF formulae are often used outside of their validity ranges, allowing small data sets to be force-fitted with good statistics but inaccurate best-fit parameters. This paper focuses on the mechanism of force-fitting and its implications. It is shown that MF force-fits the experimental data because mode-mixing, the rhombic symmetry of the local ordering and general features of local geometry are not accounted for. Combined multi-field multi-temperature data analyzed by MF may lead to the detection of incorrect phenomena, while conformational entropy derived from MF order parameters may be highly inaccurate. On the other hand, fitting to more appropriate models can yield consistent physically insightful information. This requires that the complexity of the theoretical spectral densities matches the integrity of the experimental data. As shown herein, the SRLS densities comply with this requirement. PMID:16821820

  5. Structural modelling and comparative analysis of homologous, analogous and specific proteins from Trypanosoma cruzi versus Homo sapiens: putative drug targets for chagas' disease treatment

    PubMed Central

    2010-01-01

    Background: Trypanosoma cruzi is the etiological agent of Chagas' disease, an endemic infection that causes thousands of deaths every year in Latin America. Therapeutic options remain inefficient, demanding the search for new drugs and/or new molecular targets. Such efforts can focus on proteins that are specific to the parasite, but analogous enzymes and enzymes with a three-dimensional (3D) structure sufficiently different from the corresponding host proteins may represent equally interesting targets. In order to find these targets we used the workflows MHOLline and AnEnΠ, obtaining 3D models from homologous, analogous and specific proteins of Trypanosoma cruzi versus Homo sapiens. Results: We applied genome wide comparative modelling techniques to obtain 3D models for 3,286 predicted proteins of T. cruzi. In combination with comparative genome analysis to Homo sapiens, we were able to identify a subset of 397 enzyme sequences, of which 356 are homologous, 3 analogous and 38 specific to the parasite. Conclusions: In this work, we present a set of 397 enzyme models of T. cruzi that can constitute potential structure-based drug targets to be investigated for the development of new strategies to fight Chagas' disease. The strategies presented here support the concept of structural analysis in conjunction with protein functional analysis as an interesting computational methodology to detect potential targets for structure-based rational drug design. For example, 2,4-dienoyl-CoA reductase (EC 1.3.1.34) and triacylglycerol lipase (EC 3.1.1.3), classified as analogous proteins in relation to H. sapiens enzymes, were identified as new potential molecular targets. PMID:21034488

  6. Comparative cost-benefit analysis of tele-homecare for community-dwelling elderly in Japan: Non-Government versus Government Supported Funding Models.

    PubMed

    Akiyama, Miki; Abraham, Chon

    2017-08-01

    Tele-homecare is gaining prominence as a viable care alternative, as evidenced by the increase in financial support from international governments to fund initiatives in their respective countries. The primary reason for the funding is to support efforts to reduce lags and increase capacity in access to care, as well as to promote preventive measures that can avert costly emergent issues. These efforts are especially important in super-aged and aging societies such as Japan, many European countries, and the United States (US). However, to date and to our knowledge, a direct comparison of non-government vs. government-supported funding models for tele-homecare is lacking in Japan. The aim of this study is to compare these operational models (i.e., non-government vs. government-supported funding) from a cost-benefit perspective. This simulation study applies to a hypothetical Japanese cohort, with implications for other super-aged and aging societies abroad. We performed a cost-benefit analysis (CBA) on two operational models for enabling tele-homecare for elderly community-dwelling cohorts based on a decision tree model, which we created with parameters from the published literature. The two models examined are (a) Model 1: non-government-supported funding that includes monthly fixed charges paid by users for a portion of the operating costs, and (b) Model 2: government-supported funding that includes startup and installation costs only (i.e., no operating costs) and no monthly user charges. We performed a base case cost-benefit analysis and a probabilistic cost-benefit analysis with a Monte Carlo simulation. We calculated net benefit and benefit-to-cost ratios (BCRs) from the societal perspective with a five-year time horizon, applying a 3% discount rate for both cost and benefit values. The cost of tele-homecare included (a) the startup system expense, averaged over a five-year depreciation period, and (b) operation expenses (i.e., labor and non
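
    The probabilistic CBA step can be sketched as a Monte Carlo simulation: draw uncertain annual costs and benefits, discount at 3% over five years, and compare benefit-to-cost ratios. All monetary figures and spreads below are hypothetical, not the study's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    years, r, n = 5, 0.03, 10_000
    disc = 1 / (1 + r) ** np.arange(years)       # discount factors, years 0-4

    def bcr(startup, cost_mu, benefit_mu):
        cost = startup + (rng.normal(cost_mu, 0.1 * cost_mu, (n, years)) * disc).sum(axis=1)
        benefit = (rng.normal(benefit_mu, 0.2 * benefit_mu, (n, years)) * disc).sum(axis=1)
        return benefit / cost

    model1 = bcr(startup=300, cost_mu=120, benefit_mu=260)  # user-funded operations
    model2 = bcr(startup=500, cost_mu=90, benefit_mu=260)   # government-funded setup
    print("median BCR:", np.median(model1), np.median(model2))
    ```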

  7. Comparative Analysis of Smart Meters Deployment Business Models on the Example of the Russian Federation Markets

    NASA Astrophysics Data System (ADS)

    Daminov, Ildar; Tarasova, Ekaterina; Andreeva, Tatyana; Avazov, Artur

    2016-02-01

    This paper presents a comparison of smart meter deployment business models to determine the most suitable option for smart meter deployment. The authors consider three main business models: the distribution grid company, the energy supplier (energosbyt), and the metering company. The goal of the article is to compare the business models of power companies for a massive smart metering roll-out in the power system of the Russian Federation.

  8. Comparative modelling of the spectra of cool giants

    NASA Astrophysics Data System (ADS)

    Lebzelter, T.; Heiter, U.; Abia, C.; Eriksson, K.; Ireland, M.; Neilson, H.; Nowotny, W.; Maldonado, J.; Merle, T.; Peterson, R.; Plez, B.; Short, C. I.; Wahlgren, G. M.; Worley, C.; Aringer, B.; Bladh, S.; de Laverny, P.; Goswami, A.; Mora, A.; Norris, R. P.; Recio-Blanco, A.; Scholz, M.; Thévenin, F.; Tsuji, T.; Kordopatis, G.; Montesinos, B.; Wing, R. F.

    2012-11-01

    Context. Our ability to extract information from the spectra of stars depends on reliable models of stellar atmospheres and appropriate techniques for spectral synthesis. Various model codes and strategies for the analysis of stellar spectra are available today. Aims: We aim to compare the results of deriving stellar parameters using different atmosphere models and different analysis strategies. The focus is set on high-resolution spectroscopy of cool giant stars. Methods: Spectra representing four cool giant stars were made available to various groups and individuals working in the area of spectral synthesis, asking them to derive stellar parameters from the data provided. The results were discussed at a workshop in Vienna in 2010. Most of the major codes currently used in the astronomical community for analyses of stellar spectra were included in this experiment. Results: We present the results from the different groups, as well as an additional experiment comparing the synthetic spectra produced by various codes for a given set of stellar parameters. Similarities and differences of the results are discussed. Conclusions: Several valid approaches to analyze a given spectrum of a star result in quite a wide range of solutions. The main causes for the differences in parameters derived by different groups seem to lie in the physical input data and in the details of the analysis method. This clearly shows how far from a definitive abundance analysis we still are. Based on observations obtained at the Bernard Lyot Telescope (TBL, Pic du Midi, France) of the Midi-Pyrénées Observatory, which is operated by the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France. Tables 6-11 are only available in electronic form at http://www.aanda.org. The spectra of stars 1 to 4 used in the experiment presented here are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http

  9. Comparative analysis of reflective sheeting.

    DOT National Transportation Integrated Search

    1981-01-01

    A comparative analysis was made of the initial brightness of Seibulite brand super engineering grade and Scotchlite brand high intensity grade reflective sheeting under road conditions. Overhead and ground-mounted guide signs were analyzed. Human fac...

  10. The Effectiveness of Physical Models in Teaching Anatomy: A Meta-Analysis of Comparative Studies

    ERIC Educational Resources Information Center

    Yammine, Kaissar; Violato, Claudio

    2016-01-01

    There are various educational methods used in anatomy teaching. While three dimensional (3D) visualization technologies are gaining ground due to their ever-increasing realism, reports investigating physical models as a low-cost 3D traditional method are still the subject of considerable interest. The aim of this meta-analysis is to quantitatively…

  11. A comparative study of two prediction models for brain tumor progression

    NASA Astrophysics Data System (ADS)

    Zhou, Deqi; Tran, Loc; Wang, Jihong; Li, Jiang

    2015-03-01

    MR diffusion tensor imaging (DTI) together with traditional T1- or T2-weighted MRI scans supplies rich information for brain cancer diagnosis. These images form large-scale, high-dimensional data sets. Because significant correlations exist among these images, we assume that low-dimensional geometric structures (manifolds) are embedded in the high-dimensional space. Those manifolds might be hidden from radiologists because it is challenging for human experts to interpret high-dimensional data. Identification of the manifold is a critical step for successfully analyzing multimodal MR images. We have developed various manifold learning algorithms (Tran et al. 2011; Tran et al. 2013) for medical image analysis. This paper presents a comparative study of an incremental manifold learning scheme (Tran et al. 2013) versus a deep learning model (Hinton et al. 2006) in the application of brain tumor progression prediction. The incremental manifold learning is a variant of manifold learning designed to handle large-scale datasets, in which a representative subset of the original data is sampled first to construct a manifold skeleton and the remaining data points are then inserted into the skeleton by following their local geometry. The incremental manifold learning algorithm aims at mitigating the computational burden associated with traditional manifold learning methods for large-scale datasets. Deep learning is a recently developed multilayer perceptron model that has achieved state-of-the-art performance in many applications. A recent technique named "Dropout" can further boost the deep model by preventing weight co-adaptation to avoid over-fitting (Hinton et al. 2012). We applied the two models on multiple MRI scans from four brain tumor patients to predict tumor progression and compared the performances of the two models in terms of average prediction accuracy, sensitivity, specificity and precision. The quantitative performance metrics were
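
    As a rough illustration of such a comparison, the sketch below pits a manifold-embedding pipeline against a multilayer perceptron on synthetic data and reports the same four metrics. It uses scikit-learn's Isomap as a stand-in for the authors' incremental manifold learning, and a plain MLP in place of the dropout-regularized deep model (dropout itself is not exposed by scikit-learn's MLP):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.manifold import Isomap
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    def report(name, y_true, y_pred):
        """Accuracy, sensitivity, specificity, precision from a 2x2 confusion matrix."""
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        print(f"{name}: acc={(tp + tn) / (tp + tn + fp + fn):.2f} "
              f"sens={tp / (tp + fn):.2f} spec={tn / (tn + fp):.2f} "
              f"prec={tp / (tp + fp):.2f}")

    X, y = make_classification(n_samples=600, n_features=50, n_informative=8,
                               random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    # Manifold route: embed into a low-dimensional space, then classify
    manifold_clf = make_pipeline(Isomap(n_components=5),
                                 LogisticRegression(max_iter=1000))
    manifold_clf.fit(Xtr, ytr)
    report("manifold", yte, manifold_clf.predict(Xte))

    # "Deep" route: a multilayer perceptron classifier
    deep_clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000,
                             random_state=0)
    deep_clf.fit(Xtr, ytr)
    report("deep", yte, deep_clf.predict(Xte))
    ```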

  12. Autologous Stem Cell Transplantation in Patients With Multiple Myeloma: An Activity-based Costing Analysis, Comparing a Total Inpatient Model Versus an Early Discharge Model.

    PubMed

    Martino, Massimo; Console, Giuseppe; Russo, Letteria; Meliado', Antonella; Meliambro, Nicola; Moscato, Tiziana; Irrera, Giuseppe; Messina, Giuseppe; Pontari, Antonella; Morabito, Fortunato

    2017-08-01

    Activity-based costing (ABC) was developed and advocated as a means of overcoming the systematic distortions of traditional cost accounting. We calculated the cost of high-dose chemotherapy and autologous stem cell transplantation (ASCT) in patients with multiple myeloma using the ABC method, through 2 different care models: the total inpatient model (TIM) and the early-discharge outpatient model (EDOM), and compared this with the approved diagnosis-related group (DRG) Italian tariffs. The TIM and EDOM models involved a total cost of €28,615.15 and €16,499.43, respectively. In the TIM model, the phase with the greatest economic impact was the posttransplant phase (recovery and hematologic engraftment), with 36.4% of the total cost, whereas in the EDOM model it was the pretransplant phase (chemo-mobilization, apheresis procedure, cryopreservation, and storage), with 60.4% of total expenses. In the analysis of each episode, the TIM model showed higher cost absorption than the EDOM: the posttransplant phase represented 36.4% of the total costs in the TIM and 17.7% in the EDOM model, respectively. The estimated reduction in cost per patient using the EDOM model was €12,115.72. The repayment of the DRG in the Calabria Region for the ASCT procedure is €59,806. Given the real cost of the transplant, the estimated cost saving per patient is €31,190.85 in the TIM model and €43,306.57 in the EDOM model. In conclusion, the actual repayment of the DRG does not correspond to the real cost of the ASCT procedure in Italy. Moreover, using the EDOM, the cost of ASCT is approximately half that of the TIM model. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. The Evolution of the Solar Magnetic Field: A Comparative Analysis of Two Models

    NASA Astrophysics Data System (ADS)

    McMichael, K. D.; Karak, B. B.; Upton, L.; Miesch, M. S.; Vierkens, O.

    2017-12-01

    Understanding the complexity of the solar magnetic cycle is a task that has plagued scientists for decades. However, with the help of computer simulations, we have begun to gain more insight into possible solutions to the plethora of questions inside the Sun. STABLE (Surface Transport and Babcock-Leighton) is a newly developed 3D dynamo model that can reproduce features of the solar cycle. In this model, the tilted bipolar sunspots are formed on the surface (based on the toroidal field at the bottom of the convection zone) and then decay and disperse, producing the poloidal field. Since STABLE is a 3D model, it is able to solve the full induction equation in the entirety of the solar convection zone as well as incorporate many free parameters (such as spot depth and turbulent diffusion) that are difficult to observe. In an attempt to constrain some of these free parameters, we compare STABLE to a surface flux transport model called AFT (Advective Flux Transport), which solves for the radial component of the magnetic field on the solar surface. AFT is a state-of-the-art surface flux transport model with a proven record of reproducing solar observations with great accuracy. In this project, we implement synthetic bipolar sunspots in both models, using identical surface parameters, and run the models for comparison. We demonstrate that the 3D structure of the sunspots in the interior and the vertical diffusion of the sunspot magnetic field play an important role in establishing the surface magnetic field in STABLE. We found that when a sufficient amount of downward magnetic pumping is included in STABLE, the surface magnetic field from this model becomes insensitive to the internal structure of the sunspot and more consistent with that of AFT.

  14. Model Performance Evaluation and Scenario Analysis ...

    EPA Pesticide Factsheets

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back into time series provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the too
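
    Metrics of this family are straightforward to reproduce; below is a minimal sketch of Nash-Sutcliffe efficiency alongside basic error statistics (function names are illustrative, not MPESA's API):

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def error_stats(obs, sim):
        """Bias, RMSE, coefficient of determination, and NSE in one dict."""
        resid = np.asarray(sim, float) - np.asarray(obs, float)
        return {"bias": resid.mean(),
                "rmse": np.sqrt(np.mean(resid ** 2)),
                "r2": np.corrcoef(obs, sim)[0, 1] ** 2,
                "nse": nash_sutcliffe(obs, sim)}

    # Toy observed/simulated series (e.g., daily flows)
    obs = np.array([2.1, 3.4, 5.0, 4.2, 3.3, 2.8])
    sim = np.array([2.4, 3.1, 4.6, 4.5, 3.0, 2.9])
    print(error_stats(obs, sim))
    ```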

  15. Comparative dynamics in a health investment model.

    PubMed

    Eisenring, C

    1999-10-01

    The method of comparative dynamics fully exploits the inter-temporal structure of optimal control models. I derive comparative dynamic results in a simplified demand for health model. The effect of a change in the depreciation rate on the optimal paths for health capital and investment in health is studied by use of a phase diagram.

  16. Monitoring anti-angiogenic therapy in colorectal cancer murine model using dynamic contrast-enhanced MRI: comparing pixel-by-pixel with region of interest analysis.

    PubMed

    Haney, C R; Fan, X; Markiewicz, E; Mustafi, D; Karczmar, G S; Stadler, W M

    2013-02-01

    Sorafenib is a multi-kinase inhibitor that blocks cell proliferation and angiogenesis. It is currently approved for advanced hepatocellular and renal cell carcinomas in humans, where its major mechanism of action is thought to be through inhibition of vascular endothelial growth factor and platelet-derived growth factor receptors. The purpose of this study was to determine whether pixel-by-pixel analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is better able to capture the heterogeneous response to Sorafenib in a murine model of colorectal tumor xenografts, as compared with region-of-interest analysis. MRI was performed on a 9.4 T pre-clinical scanner on the initial treatment day. Either vehicle or drug was then gavaged daily (3 days) up to the final image. Four days later, the mice were imaged again. The two-compartment model and reference tissue method of DCE-MRI were used to analyze the data. The results demonstrated that the contrast agent distribution rate constant (Ktrans) was significantly reduced (p < 0.005) at day 4 of Sorafenib treatment. In addition, the Ktrans of nearby muscle was also reduced after Sorafenib treatment. The pixel-by-pixel analysis (compared with region-of-interest analysis) was better able to capture the heterogeneity of the tumor and the decrease in Ktrans four days after treatment. For both methods, the volume of the extravascular extracellular space did not change significantly after treatment. These results confirm that parameters such as Ktrans could provide a non-invasive biomarker to assess the response to anti-angiogenic therapies such as Sorafenib, but that the heterogeneity of response across a tumor requires a more detailed analysis than has typically been undertaken.
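
    Per-pixel DCE-MRI analysis amounts to fitting a compartment-model curve to each pixel's enhancement time course. The sketch below fits the standard Tofts model to one synthetic curve; the paper's exact two-compartment formulation and reference-tissue method may differ, and the input function here is a toy assumption:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0, 5, 120)                         # time (min)
    cp = 3.0 * (np.exp(-0.3 * t) - np.exp(-2.0 * t))   # toy arterial input function

    def tofts(t, ktrans, ve, cp=cp):
        """Standard Tofts model: Ct = Ktrans * [Cp convolved with exp(-Ktrans/ve * t)]."""
        dt = t[1] - t[0]
        irf = np.exp(-(ktrans / ve) * t)               # impulse response
        return ktrans * np.convolve(cp, irf)[: len(t)] * dt

    # Synthetic tissue curve with noise, then a single (per-pixel style) fit
    rng = np.random.default_rng(1)
    ct = tofts(t, 0.25, 0.30) + rng.normal(0, 0.002, t.size)
    (ktrans_hat, ve_hat), _ = curve_fit(tofts, t, ct, p0=[0.1, 0.2],
                                        bounds=([1e-4, 1e-3], [2.0, 1.0]))
    print(f"Ktrans={ktrans_hat:.3f} /min, ve={ve_hat:.3f}")
    ```

    In a pixel-by-pixel analysis this fit is simply repeated over every tumor voxel, yielding a Ktrans map whose spread captures the heterogeneity that a single ROI average hides.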

  17. Comparative analysis of multisensor satellite monitoring of Arctic sea-ice

    USGS Publications Warehouse

    Belchansky, G.I.; Mordvintsev, Ilia N.; Douglas, David C.

    1999-01-01

    This report presents a comparative analysis of nearly coincident Russian OKEAN-01 polar-orbiting satellite data, Special Sensor Microwave Imager (SSM/I) data, and Advanced Very High Resolution Radiometer (AVHRR) imagery. The OKEAN-01 ice concentration algorithms utilize active and passive microwave measurements and a linear mixture model for the measured values of brightness temperature and radar backscatter. SSM/I and AVHRR ice concentrations were computed with the NASA Team algorithm and with visible and thermal-infrared wavelength AVHRR data, respectively.

  18. Comparing models for perfluorooctanoic acid pharmacokinetics using Bayesian analysis

    EPA Science Inventory

    Selecting the appropriate pharmacokinetic (PK) model given the available data is investigated for perfluorooctanoic acid (PFOA), which has been widely analyzed with an empirical, one-compartment model. This research examined the results of experiments [Kemper R. A., DuPont Haskel...

  19. A Comparative Test of Work-Family Conflict Models and Critical Examination of Work-Family Linkages

    ERIC Educational Resources Information Center

    Michel, Jesse S.; Mitchelson, Jacqueline K.; Kotrba, Lindsey M.; LeBreton, James M.; Baltes, Boris B.

    2009-01-01

    This paper is a comprehensive meta-analysis of over 20 years of work-family conflict research. A series of path analyses were conducted to compare and contrast existing work-family conflict models, as well as a new model we developed which integrates and synthesizes current work-family theory and research. This new model accounted for 40% of the…

  20. Embedded Hyperchaotic Generators: A Comparative Analysis

    NASA Astrophysics Data System (ADS)

    Sadoudi, Said; Tanougast, Camel; Azzaz, Mohamad Salah; Dandache, Abbas

    In this paper, we present a comparative analysis of FPGA implementation performances, in terms of throughput and resource cost, of five well-known autonomous continuous hyperchaotic systems. The goal of this analysis is to identify the embedded hyperchaotic generator that leads to designs with small logic area cost, satisfactory throughput rates, low power consumption, and the low latency required for embedded applications such as secure digital communications between embedded systems. To implement the four-dimensional (4D) chaotic systems, we use a new structural hardware architecture based on a direct VHDL description of the fourth-order Runge-Kutta method (RK-4). The comparative analysis shows that the hyperchaotic Lorenz generator provides attractive performances compared with the others. In fact, its hardware implementation requires only 2067 CLB slices, 36 multipliers and no block RAMs, and achieves a throughput rate of 101.6 Mbps at the output of the FPGA circuit, at a clock frequency of 25.315 MHz, with a low latency time of 316 ns. Consequently, these good implementation performances make the embedded hyperchaotic Lorenz generator the best candidate for embedded communications applications.
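
    The RK-4 scheme the authors describe in VHDL is the classical fourth-order Runge-Kutta integrator. Below is a software sketch applied to one commonly cited 4D hyperchaotic Lorenz variant; the paper's exact equations and parameters are not given in the abstract, so the system and constants here are assumptions:

    ```python
    import numpy as np

    def rk4_step(f, x, dt):
        """One step of the classical fourth-order Runge-Kutta scheme."""
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    # One commonly cited 4D hyperchaotic Lorenz variant (assumed, not from the paper)
    A, B, C, R = 10.0, 8.0 / 3.0, 28.0, -1.0
    def hyperlorenz(s):
        x, y, z, w = s
        return np.array([A * (y - x) + w,
                         C * x - y - x * z,
                         x * y - B * z,
                         -y * z + R * w])

    state = np.array([1.0, 1.0, 1.0, 1.0])
    dt, n = 1e-3, 100_000
    for _ in range(n):
        state = rk4_step(hyperlorenz, state, dt)
    print("state after", n, "RK4 steps:", state)
    ```

    In the hardware version, each k-stage becomes a pipeline of multipliers and adders; the slice/multiplier counts quoted above reflect how compactly a given system's right-hand side maps onto that structure.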

  1. Comparative evaluation of urban storm water quality models

    NASA Astrophysics Data System (ADS)

    Vaze, J.; Chiew, Francis H. S.

    2003-10-01

    The estimation of urban storm water pollutant loads is required for the development of mitigation and management strategies to minimize impacts to receiving environments. Event pollutant loads are typically estimated using either regression equations or "process-based" water quality models. The relative merit of using regression models compared to process-based models is not clear. A modeling study is carried out here to evaluate the comparative ability of the regression equations and process-based water quality models to estimate event diffuse pollutant loads from impervious surfaces. The results indicate that, once calibrated, both the regression equations and the process-based model can estimate event pollutant loads satisfactorily. In fact, the loads estimated using the regression equation as a function of rainfall intensity and runoff rate are better than the loads estimated using the process-based model. Therefore, if only estimates of event loads are required, regression models should be used because they are simpler and require less data compared to process-based models.
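
    The regression alternative (event load as a function of rainfall intensity and runoff rate) is typically fitted in log space. A toy sketch with invented event data:

    ```python
    import numpy as np

    # Toy event data: rainfall intensity I (mm/h), runoff rate Q (L/s), load L (kg)
    I = np.array([5.2, 12.0, 7.5, 20.1, 9.8, 15.4])
    Q = np.array([14.0, 40.0, 22.0, 75.0, 30.0, 51.0])
    L = np.array([0.8, 2.9, 1.3, 6.2, 1.9, 3.8])

    # Log-linear regression  ln L = b0 + b1 ln I + b2 ln Q, fitted by least squares
    X = np.column_stack([np.ones_like(I), np.log(I), np.log(Q)])
    coef, *_ = np.linalg.lstsq(X, np.log(L), rcond=None)
    pred = np.exp(X @ coef)
    print("coefficients (b0, b1, b2):", np.round(coef, 3))
    print("predicted event loads:", np.round(pred, 2))
    ```

    A process-based model would instead simulate pollutant build-up between storms and wash-off during them, which is why it needs substantially more input data than this three-coefficient fit.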

  2. Comparative analysis of numerical and experimental data of orthodontic mini-implants.

    PubMed

    Chatzigianni, Athina; Keilig, Ludger; Duschner, Heinz; Götz, Hermann; Eliades, Theodore; Bourauel, Christoph

    2011-10-01

    The purpose of this study was to compare numerical simulation data derived from finite element analysis (FEA) to experimental data on mini-implant loading. Nine finite element (FE) models of mini-implants and surrounding bone were derived from corresponding experimental specimens. The animal bone in the experiment consisted of bovine rib. The experimental groups were based on implant type, length, diameter, and angle of insertion. One experimental specimen was randomly selected from each group and was digitized in a microCT scanner. The FE models consisted of bone pieces containing Aarhus mini-implants with dimensions 1.5 × 7 mm and 1.5 × 9 mm or LOMAS mini-implants (dimensions 1.5 × 7 mm, 1.5 × 9 mm, and 2 × 7 mm). Mini-implants were inserted in two different ways, perpendicular to the bone surface or at 45 degrees to the direction of the applied load. Loading and boundary conditions in the FE models were adjusted to match the experimental situation, with the force applied on the neck of the mini-implants along the mesio-distal direction, up to a maximum of 0.5 N. The displacement and rotation of the mini-implants after force application calculated by FEA were compared to previously recorded experimental deflections of the same mini-implants. Analysis of the data with the Bland-Altman test and the Youden plot demonstrated good agreement between numerical and experimental findings (P = not significant) for the models selected. This study provides further evidence of the appropriateness of FEA as an investigational tool in relevant research.

  3. Modeling spatiotemporal covariance for magnetoencephalography or electroencephalography source analysis.

    PubMed

    Plis, Sergey M; George, J S; Jun, S C; Paré-Blagoev, J; Ranken, D M; Wood, C C; Schmidt, D M

    2007-01-01

    We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, which better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite its increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated under the assumption that the spatial components of background noise have uncorrelated time courses. The other version, which gives a closer approximation, is based on the assumption that the time courses are statistically independent. The accuracy of the structural approximation is compared to that of an existing model based on a single Kronecker product, using both the Frobenius norm of the difference between the spatiotemporal sample covariance and the model, and scatter plots. The performance of our models and of previous models is compared in source analysis of a large number of single-dipole problems with simulated time courses and with background from authentic magnetoencephalography data.
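
    The computational appeal of Kronecker-structured covariance is that a single Kronecker product inverts factor by factor, so the full spatiotemporal inverse never has to be formed directly. A small numpy check on toy matrices (not MEG data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def random_spd(n):
        """Random symmetric positive-definite matrix (toy covariance)."""
        a = rng.standard_normal((n, n))
        return a @ a.T + n * np.eye(n)

    S = random_spd(4)    # spatial covariance (4 sensors)
    T = random_spd(6)    # temporal covariance (6 samples)

    # Single-Kronecker-product model: C = kron(S, T). Its inverse factorizes,
    # which is what keeps inversion cheap in source analysis.
    C = np.kron(S, T)
    C_inv = np.kron(np.linalg.inv(S), np.linalg.inv(T))
    print("inverse identity holds:", np.allclose(C_inv, np.linalg.inv(C)))

    # A general sum of Kronecker products (the more expressive model discussed
    # above) does NOT invert term by term; the paper's contribution is a
    # structured sum designed so the inverse stays manageable.
    C_sum = np.kron(S, T) + np.kron(random_spd(4), random_spd(6))
    print("sum-of-Kronecker model shape:", C_sum.shape)
    ```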

  4. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization.

    PubMed

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    The surrogate-based simulation-optimization technique is an effective approach for optimizing the surfactant-enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which is used to replace the simulation model in order to reduce the computational burden, is key to such research. However, previous studies have generally been based on a stand-alone surrogate model and have rarely tried to improve the surrogate's approximation accuracy to the simulation model by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and we conducted a comparative study to select the better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while simultaneously maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
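
    A weighted ensemble of surrogates can be sketched with scikit-learn stand-ins: an MLP in place of the RBF network, GaussianProcessRegressor for Kriging, and inverse-MSE weights in place of the paper's set pair weights. Everything below is illustrative, including the synthetic "remediation" response:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(200, 3))            # toy remediation design variables
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)

    Xtr, Xval, ytr, yval = train_test_split(X, y, test_size=0.3, random_state=0)
    members = {
        "rbf-ann": MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0),
        "svr": SVR(C=10.0),
        "kriging": GaussianProcessRegressor(),
    }

    # Weight each surrogate by inverse validation MSE (a simple stand-in for
    # set-pair-analysis weights)
    preds, weights = {}, {}
    for name, model in members.items():
        model.fit(Xtr, ytr)
        preds[name] = model.predict(Xval)
        weights[name] = 1.0 / np.mean((preds[name] - yval) ** 2)

    total = sum(weights.values())
    ensemble = sum((w / total) * preds[n] for n, w in weights.items())
    print({n: round(w / total, 3) for n, w in weights.items()})
    print("ensemble RMSE:", np.sqrt(np.mean((ensemble - yval) ** 2)))
    ```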

  5. Disability in Mexico: a comparative analysis between descriptive models and historical periods using a timeline.

    PubMed

    Sandoval, Hugo; Pérez-Neri, Iván; Martínez-Flores, Francisco; Valle-Cabrera, Martha Griselda Del; Pineda, Carlos

    2017-01-01

    Some interpretations frequently argue that the three Disability Models (DM) (Charity, Medical/Rehabilitation, and Social) correspond to historical periods in terms of chronological succession. These views permeate a priori the major official documents on the subject in Mexico. This paper tests whether this association is plausible by applying a timeline method. A document search was made with inclusion and exclusion criteria in databases to select representative studies with which to depict milestones in the timelines for each period. The following is demonstrated: 1) the models should be considered categories of analysis and not historical periods, since elements of all three models remain present to date, and 2) the association between disability models and historical periods results in teleological interpretations of the history of disability in Mexico.

  6. DIDEM - An integrated model for comparative health damage costs calculation of air pollution

    NASA Astrophysics Data System (ADS)

    Ravina, Marco; Panepinto, Deborah; Zanetti, Maria Chiara

    2018-01-01

    Air pollution represents a continuous hazard to human health. Administrations, companies and the population need efficient indicators of the possible effects of a change in decision, strategy or habit. The monetary quantification of the health effects of air pollution through the definition of external costs is increasingly recognized as a useful indicator to support decisions and information at all levels. The development of modelling tools for the calculation of external costs can support analysts in producing consistent and comparable assessments. In this paper, the DIATI Dispersion and Externalities Model (DIDEM) is presented. The DIDEM model calculates the delta-external costs of air pollution by comparing two alternative emission scenarios. This tool integrates CALPUFF's advanced dispersion modelling with the latest WHO recommendations on concentration-response functions. The model is based on the impact pathway method. It was designed to work with a fine spatial resolution and a local or national geographic scope. The modular structure allows users to input their own data sets. The DIDEM model was tested on a real case study, a comparative analysis of the district heating system in Turin, Italy. Additional advantages and drawbacks of the tool are discussed in the paper, together with a comparison with other existing models worldwide.
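
    The impact pathway chain reduces to: delta concentration -> exposed population -> concentration-response function -> monetary value. A minimal sketch, where every number is an illustrative assumption, not a DIDEM value:

    ```python
    # Impact-pathway sketch: external cost = delta-concentration x exposed
    # population x concentration-response slope x unit cost per health outcome.
    delta_pm25 = {"scenario_A": 1.8, "scenario_B": 1.2}  # ug/m3, population-weighted (assumed)
    POPULATION = 870_000     # exposed residents (assumed)
    CRF_SLOPE = 0.6e-5       # extra cases per person per (ug/m3)-year (assumed)
    UNIT_COST = 50_000.0     # monetary value per case (assumed)

    def external_cost(delta_conc):
        """Annual external cost attributable to a concentration increment."""
        cases = delta_conc * POPULATION * CRF_SLOPE
        return cases * UNIT_COST

    costs = {k: external_cost(v) for k, v in delta_pm25.items()}
    print(costs, "delta-external cost:", costs["scenario_A"] - costs["scenario_B"])
    ```

    DIDEM's contribution, per the abstract, is to feed this chain with spatially resolved CALPUFF concentration fields rather than a single population-weighted increment.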

  7. Transitions in state public health law: comparative analysis of state public health law reform following the Turning Point Model State Public Health Act.

    PubMed

    Meier, Benjamin Mason; Hodge, James G; Gebbie, Kristine M

    2009-03-01

    Given the public health importance of law modernization, we undertook a comparative analysis of policy efforts in 4 states (Alaska, South Carolina, Wisconsin, and Nebraska) that have considered public health law reform based on the Turning Point Model State Public Health Act. Through national legislative tracking and state case studies, we investigated how the Turning Point Act's model legal language has been considered for incorporation into state law and analyzed key facilitating and inhibiting factors for public health law reform. Our findings provide the practice community with a research base to facilitate further law reform and inform future scholarship on the role of law as a determinant of the public's health.

  8. A comparative analysis of two highly spatially resolved European atmospheric emission inventories

    NASA Astrophysics Data System (ADS)

    Ferreira, J.; Guevara, M.; Baldasano, J. M.; Tchepel, O.; Schaap, M.; Miranda, A. I.; Borrego, C.

    2013-08-01

    A reliable emission inventory is highly important for air quality modelling applications, especially at regional or local scales, which require high resolutions. Consequently, higher-resolution emission inventories have been developed that are suitable for regional air quality modelling. This research performs an inter-comparative analysis of different spatial disaggregation methodologies for atmospheric emission inventories. The study is based on two European emission inventories with different spatial resolutions: 1) the EMEP (European Monitoring and Evaluation Programme) inventory and 2) an emission inventory developed by TNO (Netherlands Organisation for Applied Scientific Research). These two emission inventories were converted into three distinct gridded emission datasets as follows: (i) the EMEP emission inventory was disaggregated by area (EMEParea); (ii) the EMEP inventory was disaggregated following a more complex methodology (HERMES-DIS - High-Elective Resolution Modelling Emissions System - DISaggregation module) to understand and evaluate the influence of different disaggregation methods; and (iii) the TNO gridded emissions, which are based on different emission data sources and different disaggregation methods. A predefined common grid with a spatial resolution of 12 × 12 km² was used to compare the three datasets spatially. The inter-comparative analysis was performed by source sector (SNAP - Selected Nomenclature for Air Pollution) with emission totals for selected pollutants. It included the computation of difference maps (to focus on the spatial variability of emission differences) and a linear regression analysis to calculate coefficients of determination and to quantitatively measure differences. From the spatial analysis, the greatest differences were found for residential/commercial combustion (SNAP02), solvent use (SNAP06) and road transport (SNAP07). These findings were related to the different spatial disaggregation that was conducted by the TNO and HERMES

  9. A Comparative Meta-Analysis of 5E and Traditional Approaches in Turkey

    ERIC Educational Resources Information Center

    Anil, Özgür; Batdi, Veli

    2015-01-01

    The aim of this study is to compare the 5E learning model with traditional learning methods in terms of their effect on students' academic achievement, retention and attitude scores. In this context, the meta-analytic method known as the "analysis of analyses" was used and a review undertaken of the studies and theses (N = 14) executed…

  10. Model-based meta-analysis for comparing Vitamin D2 and D3 parent-metabolite pharmacokinetics.

    PubMed

    Ocampo-Pelland, Alanna S; Gastonguay, Marc R; Riggs, Matthew M

    2017-08-01

    Association of Vitamin D (D3 & D2) and its 25OHD metabolite (25OHD3 & 25OHD2) exposures with various diseases is an active research area. D3 and D2 dose-equivalency and each form's ability to raise 25OHD concentrations are not well-defined. The current work describes a population pharmacokinetic (PK) model for D2 and 25OHD2 and the use of a previously developed D3-25OHD3 PK model [1] for comparing D3 and D2-related exposures. Public-source D2 and 25OHD2 PK data in healthy or osteoporotic populations, including 17 studies representing 278 individuals (15 individual-level and 18 arm-level units), were selected using search criteria in PUBMED. Data included oral, single and multiple D2 doses (400-100,000 IU/d). Nonlinear mixed effects models were developed simultaneously for D2 and 25OHD2 PK (NONMEM v7.2) by considering 1- and 2-compartment models with linear or nonlinear clearance. Unit-level random effects and residual errors were weighted by arm sample size. Model simulations compared 25OHD exposures, following repeated D2 and D3 oral administration across typical dosing and baseline ranges. D2 parent and metabolite were each described by 2-compartment models with numerous parameter estimates shared with the D3-25OHD3 model [1]. Notably, parent D2 was eliminated (converted to 25OHD) through a first-order clearance whereas the previously published D3 model [1] included a saturable non-linear clearance. Similar to 25OHD3 PK model results [1], 25OHD2 was eliminated by a first-order clearance, which was almost twice as fast as the former. Simulations at lower baselines, following lower equivalent doses, indicated that D3 was more effective than D2 at raising 25OHD concentrations. Due to saturation of D3 clearance, however, at higher doses or baselines, the probability of D2 surpassing D3's ability to raise 25OHD concentrations increased substantially. Since 25OHD concentrations generally surpassed 75 nmol/L at these higher baselines by 3 months, there would be no

  11. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. To address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC), and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
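
    In its simplest variant, Granger-Ramanathan weighting is an unconstrained least-squares regression of the observed hydrograph on the member simulations. A toy sketch with synthetic hydrographs (variant A only; the other variants add an intercept or constrain the weights):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, M = 365, 4
    truth = 5 + 3 * np.sin(np.linspace(0, 12, T)) + rng.gamma(2.0, 1.0, T)
    # Member hydrographs: biased, noisy versions of the truth (toy stand-ins)
    members = np.column_stack([truth * b + rng.normal(0, s, T)
                               for b, s in [(0.9, 0.8), (1.1, 1.0),
                                            (1.0, 1.5), (0.7, 0.6)]])

    # Granger-Ramanathan variant A: unconstrained least-squares weights
    w, *_ = np.linalg.lstsq(members, truth, rcond=None)
    combo = members @ w

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency."""
        return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    print("weights:", np.round(w, 3))
    print("best member NSE:", max(nse(truth, members[:, j]) for j in range(M)))
    print("combined NSE:  ", nse(truth, combo))
    ```

    In practice the weights are fitted on a calibration period and then applied, unchanged, to the validation hydrographs, exactly as described above.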

  12. A comparative sensitivity analysis focused on wet deposition models for the Fukushima and Chernobyl atmospheric dispersion events

    NASA Astrophysics Data System (ADS)

    Quérel, Arnaud; Roustan, Yelva; Quélo, Denis; Bocquet, Marc; Winiarek, Victor

    2014-05-01

    In order to model the transport of radionuclides bound to atmospheric particles and the resulting ground contamination at the synoptic scale, wet deposition is a crucial point. Wet deposition is usually divided into two different mechanisms, below-cloud scavenging (washout) and in-cloud scavenging (rainout). Since the micro-physics of both deposition processes is not yet well known, the modeling of the wet deposition of particles at the synoptic scale is uncertain and difficult to validate. This has led to an abundance of wet deposition models, none of them fully adequate. The existing models of particle scavenging can be distinguished by the nature and the number of physical parameters they rely on. For instance, the scavenging coefficient can be determined from the rainfall intensity alone, or from both the rainfall intensity and the particle size distribution. Beyond their intrinsic formulations, the deposition models are sensitive to the input data necessary to use them, cloud height for instance. Finally, the simulated ground deposition is more or less sensitive to the choice of the overall models involved in the atmospheric transport of particles and the meteorology in general. For accidental atmospheric releases, the uncertainties linked to the source term are crucial, which justifies the use of several source terms in this study. The Polyphemus air quality system is used to perform the simulations of the radioactive dispersion, considering Caesium-137 as particulate matter for the accidental releases from the Fukushima and Chernobyl nuclear power plants. Two different approaches are used. In the first, the influence of the different components taking part in the scavenging modeling (whether the scavenging models or the overall models) is assessed separately. The second approach is a global sensitivity analysis computed on both the Chernobyl and Fukushima cases. It relies on simulations performed with
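
    A common below-cloud parameterization ties the scavenging coefficient to rainfall intensity through a power law, which then drives exponential washout of the airborne concentration. A sketch with placeholder coefficients (published coefficient sets differ widely, which is part of the uncertainty explored here):

    ```python
    import numpy as np

    def scavenging_coefficient(rain_mm_h, a=8.4e-5, b=0.79):
        """Below-cloud scavenging coefficient Lambda = a * R**b, in 1/s.
        The a, b values are placeholders; published sets vary by orders of
        magnitude depending on particle size and rain type."""
        return a * rain_mm_h ** b

    def remaining_fraction(rain_mm_h, hours):
        """Exponential washout of airborne Cs-137 under steady rain."""
        lam = scavenging_coefficient(rain_mm_h)
        return np.exp(-lam * hours * 3600.0)

    for rain in (0.5, 2.0, 10.0):
        print(f"R={rain:5.1f} mm/h -> airborne fraction after 3 h: "
              f"{remaining_fraction(rain, 3):.3f}")
    ```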

  13. Royal London space analysis: plaster versus digital model assessment.

    PubMed

    Grewal, Balpreet; Lee, Robert T; Zou, Lifong; Johal, Ama

    2017-06-01

    With the advent of digital study models, the ability to evaluate space requirements is valuable to treatment planning and to the justification of any required extraction pattern. This study was undertaken to compare the validity and reliability of the Royal London space analysis (RLSA) performed on plaster models as compared with digital models. A pilot study (n = 5) was undertaken on plaster and digital models to evaluate the feasibility of digital space planning. This also informed the sample size calculation, and as a result 30 sets of study models meeting specified inclusion criteria were selected. All five components of the RLSA, namely crowding, depth of occlusal curve, arch expansion/contraction, and incisor antero-posterior advancement and inclination (assessed from the pre-treatment lateral cephalogram), were accounted for in relation to both model types. The plaster models served as the gold standard. Intra-operator measurement error (reliability) was evaluated, along with a direct comparison of the measured digital values (validity) with the plaster models. The measurement error, or coefficient of repeatability, was comparable for plaster and digital space analyses and ranged from 0.66 to 0.95 mm. No difference was found between the space analysis performed in the upper or the lower dental arch. Hence, the null hypothesis was accepted. The digital model measurements were consistently larger, albeit by a relatively small amount, than the plaster models (0.35 mm upper arch and 0.32 mm lower arch). No difference was detected in the RLSA when performed using either plaster or digital models. Thus, digital space analysis provides a valid and reproducible alternative method in the new era of digital records. © The Author 2016. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com
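
    The coefficient of repeatability reported here is conventionally computed as 1.96 times the standard deviation of paired differences between repeated measurements. A minimal sketch with invented measurements:

    ```python
    import numpy as np

    # Repeated space-analysis measurements (mm) on the same models (toy values)
    first = np.array([3.2, 1.8, 4.5, 2.9, 3.7, 2.2])
    second = np.array([3.5, 1.6, 4.9, 2.7, 3.9, 2.5])

    diff = first - second
    # Bland-Altman style repeatability: 1.96 x SD of the paired differences
    cor = 1.96 * diff.std(ddof=1)
    bias = diff.mean()
    print(f"bias={bias:.2f} mm, coefficient of repeatability={cor:.2f} mm")
    ```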

  14. Python package for model STructure ANalysis (pySTAN)

    NASA Astrophysics Data System (ADS)

    Van Hoey, Stijn; van der Kwast, Johannes; Nopens, Ingmar; Seuntjens, Piet

    2013-04-01

    The selection and identification of a suitable hydrological model structure involves more than fitting the parameters of a model structure to reproduce a measured hydrograph. The procedure is highly dependent on various criteria, i.e., the modelling objective, the characteristics and the scale of the system under investigation, and the available data. Rigorous analysis of the candidate model structures is needed to support and objectify the selection of the most appropriate structure for a specific case (or eventually to justify the use of a proposed ensemble of structures). This holds both when choosing between a limited set of different structures and within the framework of flexible model structures with interchangeable components. Many different methods to evaluate and analyse model structures exist. This leads to a sprawl of available methods, all characterized by different assumptions, changing conditions of application and various code implementations. Methods typically focus on optimization, sensitivity analysis or uncertainty analysis, with backgrounds from optimization, machine learning or statistics, amongst others. These methods also need an evaluation metric (objective function) to compare the model outcome with observed data. However, for current methods described in the literature, implementations are not always transparent and reproducible (if available at all). No standard procedures exist to share code, and the popularity (and number of applications) of a method is sometimes more dependent on its availability than on its merits. Moreover, new implementations of existing methods are difficult to verify, and the different theoretical backgrounds make it difficult for environmental scientists to decide on the usefulness of a specific method. A common and open framework with a large set of methods can support users in deciding about the most appropriate method. Hence, it enables to simultaneously apply and compare different

  15. Comparative study: TQ and Lean Production ownership models in health services

    PubMed Central

    Eiro, Natalia Yuri; Torres-Junior, Alvair Silveira

    2015-01-01

    Objective: to compare the application of Total Quality (TQ) models used in the processes of a health service with cases of lean healthcare and with literature from another institution that has also applied this model. Method: this is a qualitative study conducted through a descriptive case study. Results: a critical analysis of the institutions studied made it possible to compare the traditional quality approach observed in one case with the lean production approach, in theory and in practice, used in the other; the specifics are described below. Conclusion: the research identified that the lean model was better suited to people who work systemically and generate flow. It also pointed to some potential challenges in the introduction and implementation of lean methods in health care. PMID:26487134

  16. Nursing home quality: a comparative analysis using CMS Nursing Home Compare data to examine differences between rural and nonrural facilities.

    PubMed

    Lutfiyya, May Nawal; Gessert, Charles E; Lipsky, Martin S

    2013-08-01

    Advances in medicine and an aging US population suggest that there will be an increasing demand for nursing home services. Although nursing homes are highly regulated and scrutinized, their quality remains a concern and may be a greater issue for those living in rural communities. Despite this, few studies have investigated differences in the quality of nursing home care across the rural-urban continuum. The purpose of this study was to compare the quality of rural and nonrural nursing homes by using aggregated rankings on multiple quality measures calculated by the Centers for Medicare and Medicaid Services and reported on their Nursing Home Compare Web site. Independent-sample t tests were performed to compare the mean ratings on the reported quality measures of rural and nonrural nursing homes. A linear mixed binary logistic regression model controlling for state was performed to determine whether the covariates of ownership, number of beds, and geographic locale were associated with a higher overall quality rating. Of the 15,177 nursing homes included in the study sample, 69.2% were located in nonrural areas and 30.8% in rural areas. The t test analysis comparing the overall, health inspection, staffing, and quality measure ratings of rural and nonrural nursing homes yielded statistically significant results for 3 measures, 2 of which (overall ratings and health inspections) favored rural nursing homes. Although a higher percentage of rural nursing homes received a 4-star or higher rating (44.8% vs. 42.2%), regression analysis using an overall rating of 4 stars or higher as the dependent variable revealed that, when controlling for state and adjusting for size and ownership, rural nursing homes were less likely to have a 4-star or higher rating than nonrural nursing homes (OR = 0.901, 95% CI 0.824-0.986). Mixed model logistic regression analysis suggested that rural nursing home quality was not comparable to that of nonrural nursing homes. When controlling for

  17. Skull Development, Ossification Pattern, and Adult Shape in the Emerging Lizard Model Organism Pogona vitticeps: A Comparative Analysis With Other Squamates.

    PubMed

    Ollonen, Joni; Da Silva, Filipe O; Mahlow, Kristin; Di-Poï, Nicolas

    2018-01-01

    The rise of the Evo-Devo field and the development of multidisciplinary research tools at various levels of biological organization have led to a growing interest in searching for new non-model organisms. Squamates (lizards and snakes) are particularly important for understanding fundamental questions about the evolution of vertebrates because of their high diversity and their evolutionary innovations and adaptations, which portray a striking body plan change that reached its extreme in snakes. Yet, little is known about the intricate connection between phenotype and genotype in squamates, partly due to limited developmental knowledge and incomplete characterization of embryonic development. Surprisingly, squamate models have received limited attention in comparative developmental studies, and only a few species examined so far can be considered representative and appropriate model organisms for mechanistic Evo-Devo studies. Fortunately, the agamid lizard Pogona vitticeps (central bearded dragon) is one of the most popular domesticated reptile species, with both a well-established history in captivity and key advantages for research, thus forming an ideal laboratory model system and justifying its recent use in reptile biology research. We first report here the complete post-oviposition embryonic development for P. vitticeps based on standardized staging systems and external morphological characters previously defined for squamates. Whereas the overall morphological development follows the general trends observed in other squamates, our comparisons indicate major differences in the developmental sequence of several tissues, including early craniofacial characters. Detailed analysis of both embryonic skull development and adult skull shape, using a comparative approach integrating CT scans and gene expression studies in P. vitticeps as well as comparative embryology and 3D geometric morphometrics in a large dataset of lizards and snakes, highlights the extreme adult

  19. Building v/s Exploring Models: Comparing Learning of Evolutionary Processes through Agent-based Modeling

    NASA Astrophysics Data System (ADS)

    Wagh, Aditi

    Two strands of work motivate the three studies in this dissertation. First, evolutionary change can be viewed as a computational complex system in which a small set of rules operating at the individual level results in different population-level outcomes under different conditions. Extensive research has documented students' difficulties with learning about evolutionary change (Rosengren et al., 2012), particularly in terms of levels slippage (Wilensky & Resnick, 1999). Second, though building and using computational models is becoming increasingly common in K-12 science education, we know little about how these two modalities compare. This dissertation adopts agent-based modeling as a representational system to compare these modalities in the conceptual context of micro-evolutionary processes. Drawing on interviews, Study 1 examines middle-school students' productive ways of reasoning about micro-evolutionary processes and finds that the specific framing of traits plays a key role in whether slippage explanations are cued. Study 2, which was conducted in 2 schools with about 150 students, forms the crux of the dissertation. It compares learning processes and outcomes when students build their own models or explore a pre-built model. Analysis of Camtasia videos of student pairs reveals that builders' and explorers' ways of accessing rules, and their sense-making of observed trends, are of a different character. Builders notice rules through the available blocks-based primitives, often bypassing their enactment, while explorers attend to rules primarily through the enactment. Moreover, builders' sense-making of observed trends is more rule-driven while explorers' is more enactment-driven. Pre- and posttests reveal that builders manifest a greater facility with accessing rules, providing explanations manifesting targeted assembly, whereas explorers use rules to construct explanations manifesting non-targeted assembly. Interviews reveal varying degrees of shifts away from slippage in both

  20. Comparative molecular analysis of early and late cancer cachexia-induced muscle wasting in mouse models.

    PubMed

    Sun, Rulin; Zhang, Santao; Lu, Xing; Hu, Wenjun; Lou, Ning; Zhao, Yan; Zhou, Jia; Zhang, Xiaoping; Yang, Hongmei

    2016-12-01

    Cancer-induced muscle wasting, which commonly occurs in cancer cachexia, is characterized by impaired quality of life and poor patient survival. To identify an appropriate treatment, research on the mechanism underlying muscle wasting is essential. Thus far, studies on muscle wasting using cancer cachectic models have generally focused on early cancer cachexia (ECC), before severe body weight loss occurs. In the present study, we established models of ECC and late cancer cachexia (LCC) and compared different stages of cancer cachexia using two cancer cachectic mouse models induced by colon-26 (C26) adenocarcinoma or Lewis lung carcinoma (LLC). In each model, tumor-bearing (TB) and control (CN) mice were injected with cancer cells or PBS, respectively. The TB and CN mice, which were euthanized on the 24th day or the 36th day after injection, were defined as the ECC and ECC-CN mice or the LCC and LCC-CN mice, respectively. The tissues were then harvested and analyzed. We found that both the ECC and LCC mice developed cancer cachexia. The amounts of muscle loss differed between the ECC and LCC mice. Moreover, the expression of some molecules was altered in the muscles from the LCC mice but not in those from the ECC mice compared with their CN mice. In conclusion, the molecules with altered expression in the muscles from the ECC and LCC mice were not exactly the same. These findings may provide some clues for therapies that could prevent the muscle wasting in cancer cachexia from progressing to the late stage.

  1. Comparative Analysis of Four Manpower Nursing Requirements Models. Health Manpower References. [Nurse Planning Information Series, No. 6].

    ERIC Educational Resources Information Center

    Deane, Robert T.; Ro, Kong-Kyun

    The analysis and description of four manpower nursing requirements models--the Pugh-Roberts, the Vector, the Community Systems Foundation (CSF), and the Western Interstate Commission of Higher Education (WICHE)--are presented in this report. The introduction provides an overview of the project, which was designed to analyze these different models.…

  2. A comparative study of theoretical graph models for characterizing structural networks of human brain.

    PubMed

    Li, Xiaojin; Hu, Xintao; Jin, Changfeng; Han, Junwei; Liu, Tianming; Guo, Lei; Hao, Wei; Li, Lingjiang

    2013-01-01

    Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of the human brain? In this paper, we perform a comparative study to address this problem. First, large-scale cortical regions of interest (ROIs) are localized by a recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL), to address the limitations in the identification of brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using state-of-the-art graph analysis algorithms and tools and are further compared with seven popular theoretical graph models. In addition, we compare the topological properties of the two graph models, namely the stickiness-index-based model (STICKY) and the scale-free gene duplication model (SF-GD), that have the highest similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that among the seven theoretical graph models compared in this study, the STICKY and SF-GD models perform better in characterizing the structural human brain network.
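
    Such comparisons boil down to computing the same global graph metrics on the empirical network and on candidate generative models. A toy sketch with networkx, where a small-world graph stands in for the empirical DTI-derived network (STICKY and SF-GD would need dedicated generators):

    ```python
    import networkx as nx

    def summarize(name, g):
        """Print two global metrics; use the largest component if disconnected."""
        if not nx.is_connected(g):
            g = g.subgraph(max(nx.connected_components(g), key=len)).copy()
        print(f"{name:12s} clustering={nx.average_clustering(g):.3f} "
              f"path_length={nx.average_shortest_path_length(g):.3f}")

    N, K = 200, 8  # nodes and mean degree of the toy "structural network"

    # Stand-in for an empirical structural brain network: a small-world graph
    summarize("empirical", nx.connected_watts_strogatz_graph(N, K, 0.1, seed=0))

    # Two common theoretical graph models for comparison
    summarize("random-ER", nx.erdos_renyi_graph(N, K / (N - 1), seed=0))
    summarize("scale-free", nx.barabasi_albert_graph(N, K // 2, seed=0))
    ```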

  4. A general numerical model for wave rotor analysis

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel W.

    1992-01-01

    Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.

  5. Comparing ESC and iPSC-Based Models for Human Genetic Disorders.

    PubMed

    Halevy, Tomer; Urbach, Achia

    2014-10-24

    Traditionally, human disorders were studied using animal models or somatic cells taken from patients. Such studies enabled the analysis of the molecular mechanisms of numerous disorders, and led to the discovery of new treatments. Yet, these systems are limited or even irrelevant in modeling multiple genetic diseases. The isolation of human embryonic stem cells (ESCs) from diseased blastocysts, the derivation of induced pluripotent stem cells (iPSCs) from patients' somatic cells, and the new technologies for genome editing of pluripotent stem cells have opened a new window of opportunities in the field of disease modeling, and enabled studying diseases that couldn't be modeled in the past. Importantly, despite the high similarity between ESCs and iPSCs, there are several fundamental differences between these cells, which have important implications regarding disease modeling. In this review we compare ESC-based models to iPSC-based models, and highlight the advantages and disadvantages of each system. We further suggest a roadmap for how to choose the optimal strategy to model each specific disorder.

  6. A comparative analysis of prognostic factor models for follicular lymphoma based on a phase III trial of CHOP-rituximab versus CHOP + ¹³¹iodine-tositumomab.

    PubMed

    Press, Oliver W; Unger, Joseph M; Rimsza, Lisa M; Friedberg, Jonathan W; LeBlanc, Michael; Czuczman, Myron S; Kaminski, Mark; Braziel, Rita M; Spier, Catherine; Gopal, Ajay K; Maloney, David G; Cheson, Bruce D; Dakhil, Shaker R; Miller, Thomas P; Fisher, Richard I

    2013-12-01

    There is currently no consensus on optimal frontline therapy for patients with follicular lymphoma. We analyzed a phase III randomized intergroup trial comparing six cycles of CHOP-R (cyclophosphamide, Adriamycin, vincristine (Oncovin), prednisone, plus rituximab) with six cycles of CHOP followed by iodine-131 tositumomab radioimmunotherapy (RIT) to assess whether any subsets benefited more from one treatment or the other, and to compare three prognostic models. We conducted univariate and multivariate Cox regression analyses of 532 patients enrolled on this trial and compared the prognostic value of the FLIPI (follicular lymphoma international prognostic index), FLIPI2, and LDH + β2M (lactate dehydrogenase + β2-microglobulin) models. Outcomes were excellent, but not statistically different between the two study arms [5-year progression-free survival (PFS) of 60% with CHOP-R and 66% with CHOP-RIT (P = 0.11); 5-year overall survival (OS) of 92% with CHOP-R and 86% with CHOP-RIT (P = 0.08); overall response rate of 84% for both arms]. The only factor found to potentially predict the impact of treatment was serum β2M; among patients with normal β2M, CHOP-RIT patients had better PFS compared with CHOP-R patients, whereas among patients with high serum β2M, PFS by arm was similar (interaction P value = 0.02). All three prognostic models (FLIPI, FLIPI2, and LDH + β2M) predicted both PFS and OS well, though the LDH + β2M model is easiest to apply and identified an especially poor risk subset. In an exploratory analysis using the latter model, there was a statistically significant trend suggesting that low-risk patients had superior observed PFS if treated with CHOP-RIT, whereas high-risk patients had a better PFS with CHOP-R. ©2013 AACR.

  7. MetaComp: comprehensive analysis software for comparative meta-omics including comparative metagenomics.

    PubMed

    Zhai, Peng; Yang, Longshu; Guo, Xiao; Wang, Zhe; Guo, Jiangtao; Wang, Xiaoqi; Zhu, Huaiqiu

    2017-10-02

    During the past decade, the development of high-throughput nucleic acid sequencing and mass spectrometry analysis techniques has enabled the characterization of microbial communities through metagenomics, metatranscriptomics, metaproteomics and metabolomics data. To reveal the diversity of microbial communities and the interactions between living conditions and microbes, it is necessary to introduce comparative analysis based upon the integration of all four types of data mentioned above. Comparative meta-omics, especially comparative metagenomics, has been established as a routine process to highlight significant differences in taxon composition and functional gene abundance among microbiota samples. Meanwhile, biologists are increasingly concerned with the correlations between meta-omics features and environmental factors, which may further decipher the adaptation strategy of a microbial community. We developed a graphical comprehensive analysis software package named MetaComp, comprising a series of statistical analysis approaches with visualized results for metagenomics and other meta-omics data comparison. The software can read files generated by a variety of upstream programs. After data loading, it offers analyses such as multivariate statistics; hypothesis testing for two-sample, multi-sample and two-group designs; and a novel regression analysis of environmental factors. Here, the regression analysis regards meta-omic features as independent variables and environmental factors as dependent variables. Moreover, MetaComp automatically chooses an appropriate two-group sample test based upon the traits of the input abundance profiles. We further evaluate the performance of this choice, and exhibit applications for metagenomics, metaproteomics and metabolomics samples. MetaComp, an integrative software package applicable to all meta-omics data, originally distills the influence of the living environment on the microbial community by regression analysis

  8. Bayesian analysis of CCDM models

    NASA Astrophysics Data System (ADS)

    Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and Bayesian Evidence (BE). These criteria allow us to compare models considering goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded from the current analysis. Three other scenarios are discarded either because of poor fit or because of an excess of free parameters. A method of increasing Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
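
    A minimal sketch of the first two criteria, assuming Gaussian errors so that -2 ln L_max reduces to the minimum chi-squared up to a model-independent constant; the fit values below are invented placeholders, not the paper's results:

      # AIC = 2k - 2 ln L_max;  BIC = k ln n - 2 ln L_max.
      import numpy as np

      def aic(chi2_min, k):
          return 2 * k + chi2_min

      def bic(chi2_min, k, n):
          return k * np.log(n) + chi2_min

      n = 580                                  # number of SNe Ia data points
      fits = {                                 # model: (chi2_min, free params)
          "JO": (562.2, 2),
          "LJO/LCDM": (562.5, 1),
          "Gamma=3aH0": (565.0, 1),
      }
      for name, (chi2, k) in fits.items():
          print(f"{name:11s} AIC={aic(chi2, k):7.1f}  BIC={bic(chi2, k, n):7.1f}")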

  9. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    PubMed

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would

  10. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    PubMed Central

    McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets

  11. Using multiple group modeling to test moderators in meta-analysis.

    PubMed

    Schoemann, Alexander M

    2016-12-01

    Meta-analysis is a popular and flexible analysis that can be fit in many modeling frameworks. Two methods of fitting meta-analyses that are growing in popularity are structural equation modeling (SEM) and multilevel modeling (MLM). By using SEM or MLM to fit a meta-analysis, researchers have access to powerful techniques associated with SEM and MLM. This paper details how to use one such technique, multiple group analysis, to test categorical moderators in meta-analysis. In a multiple group meta-analysis a model is fit to each level of the moderator simultaneously. By constraining parameters across groups, any model parameter can be tested for equality. Using multiple groups to test for moderators is especially relevant in random-effects meta-analysis, where both the mean and the between-studies variance of the effect size may be compared across groups. A simulation study and the analysis of a real data set are used to illustrate multiple group modeling with both SEM and MLM. Issues related to multiple group meta-analysis and future directions for research are discussed. Copyright © 2016 John Wiley & Sons, Ltd.
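
    A minimal sketch of the multiple-group idea outside any SEM or MLM package: fit a random-effects model in each level of the moderator and test the pooled effects for equality (the effect sizes and variances below are invented):

      import numpy as np
      from scipy import stats

      def dersimonian_laird(y, v):
          """Random-effects pooled effect with DerSimonian-Laird tau^2."""
          w = 1.0 / v
          y_fixed = np.sum(w * y) / np.sum(w)
          q = np.sum(w * (y - y_fixed) ** 2)
          c = np.sum(w) - np.sum(w**2) / np.sum(w)
          tau2 = max(0.0, (q - (len(y) - 1)) / c)
          w_star = 1.0 / (v + tau2)
          mu = np.sum(w_star * y) / np.sum(w_star)
          return mu, 1.0 / np.sum(w_star)        # pooled mean, its variance

      groups = {                                 # moderator level: (y_i, v_i)
          "A": (np.array([0.30, 0.45, 0.28, 0.52]),
                np.array([0.02, 0.03, 0.02, 0.04])),
          "B": (np.array([0.05, 0.12, -0.02]),
                np.array([0.03, 0.02, 0.05])),
      }
      muA, vA = dersimonian_laird(*groups["A"])
      muB, vB = dersimonian_laird(*groups["B"])
      z = (muA - muB) / np.sqrt(vA + vB)         # Wald test of equal means
      print(f"mu_A={muA:.3f} mu_B={muB:.3f} z={z:.2f} "
            f"p={2 * stats.norm.sf(abs(z)):.3f}")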

  12. Comparative analysis of remotely-sensed data products via ecological niche modeling of avian influenza case occurrences in Middle Eastern poultry.

    PubMed

    Bodbyl-Roels, Sarah; Peterson, A Townsend; Xiao, Xiangming

    2011-03-28

    Ecological niche modeling integrates known sites of occurrence of species or phenomena with data on environmental variation across landscapes to infer environmental spaces potentially inhabited (i.e., the ecological niche) and to generate predictive maps of potential distributions in geographic space. Key inputs to this process include raster data layers characterizing spatial variation in environmental parameters, such as vegetation indices from remotely sensed satellite imagery. The extent to which ecological niche models reflect real-world distributions depends on a number of factors, but an obvious concern is the quality and content of the environmental data layers. We assessed ecological niche model predictions of H5N1 avian flu presence quantitatively within and among four geographic regions, based on models incorporating two means of summarizing three vegetation indices derived from the MODIS satellite. We evaluated our models for predictive ability using partial ROC analysis and GLM ANOVA to compare performance among indices and regions. We found correlations between vegetation indices to be high, such that they contain information that overlaps broadly. Neither the type of vegetation index used nor the method of summary affected model performance significantly. However, the degree to which model predictions had to be transferred (i.e., projected onto landscapes and conditions not represented on the landscape of training) impacted predictive strength greatly (within-region model predictions far out-performed models projected among regions). Our results provide the first quantitative tests of the most appropriate uses of different remotely sensed data sets in ecological niche modeling applications. While our testing did not result in a decisive "best" index product or means of summarizing indices, it emphasizes the need for careful evaluation of products used in modeling (e.g. matching temporal dimensions and spatial resolution) for optimum performance, instead of

  13. Effective comparative analysis of protein-protein interaction networks by measuring the steady-state network flow using a Markov model.

    PubMed

    Jeong, Hyundoo; Qian, Xiaoning; Yoon, Byung-Jun

    2016-10-06

    Comparative analysis of protein-protein interaction (PPI) networks provides an effective means of detecting conserved functional network modules across different species. Such modules typically consist of orthologous proteins with conserved interactions, which can be exploited to computationally predict the modules through network comparison. In this work, we propose a novel probabilistic framework for comparing PPI networks and effectively predicting the correspondence between proteins, represented as network nodes, that belong to conserved functional modules across the given PPI networks. The basic idea is to estimate the steady-state network flow between nodes that belong to different PPI networks based on a Markov random walk model. The random walker is designed to make random moves to adjacent nodes within a PPI network as well as cross-network moves between potential orthologous nodes with high sequence similarity. Based on this Markov random walk model, we estimate the steady-state network flow - or the long-term relative frequency of the transitions that the random walker makes - between nodes in different PPI networks, which can be used as a probabilistic score measuring their potential correspondence. Subsequently, the estimated scores can be used for detecting orthologous proteins in conserved functional modules through network alignment. Through evaluations based on multiple real PPI networks, we demonstrate that the proposed scheme leads to improved alignment results that are biologically more meaningful at reduced computational cost, outperforming the current state-of-the-art algorithms. The source code and datasets can be downloaded from http://www.ece.tamu.edu/~bjyoon/CUFID .
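
    A minimal sketch of the steady-state flow idea on toy networks (the adjacency matrices, similarity links and scoring below are illustrative only; CUFID is the authors' full implementation):

      import numpy as np

      A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)  # PPI network 1
      A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # PPI network 2
      S = np.eye(3)                      # cross-network sequence similarity

      # combined walk: within-network moves plus cross-network jumps
      W = np.block([[A1, S], [S.T, A2]])
      P = W / W.sum(axis=1, keepdims=True)       # row-stochastic transitions

      pi = np.full(len(P), 1.0 / len(P))
      for _ in range(200):                       # power iteration
          pi = pi @ P
      pi /= pi.sum()

      # steady-state flow along a cross-network pair (u in net 1, v in net 2)
      u, v = 0, 3
      flow = pi[u] * P[u, v] + pi[v] * P[v, u]   # long-run transition frequency
      print("correspondence score for (u, v):", round(float(flow), 4))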

  14. Regression Model Optimization for the Analysis of Experimental Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2009-01-01

    A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
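
    A minimal sketch of the search metric: PRESS residuals obtained in closed form from the hat matrix as e_i/(1 - h_ii), with their standard deviation compared across candidate math models (the data are synthetic):

      import numpy as np

      def press_std(X, y):
          """Standard deviation of the PRESS (leave-one-out) residuals."""
          H = X @ np.linalg.pinv(X.T @ X) @ X.T    # hat matrix
          e = y - H @ y                            # ordinary residuals
          return (e / (1.0 - np.diag(H))).std(ddof=1)

      rng = np.random.default_rng(0)
      x = rng.uniform(-1, 1, 40)                   # regressor values
      y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.1, 40)

      for terms in (2, 3, 4):                      # linear, quadratic, cubic
          X = np.vander(x, terms, increasing=True) # columns 1, x, x^2, ...
          print(f"{terms}-term model: PRESS std = {press_std(X, y):.4f}")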

  15. High Altitude Long Endurance UAV Analysis Model Development and Application Study Comparing Solar Powered Airplane and Airship Station-Keeping Capabilities

    NASA Technical Reports Server (NTRS)

    Ozoroski, Thomas A.; Nickol, Craig L.; Guynn, Mark D.

    2015-01-01

    There have been ongoing efforts in the Aeronautics Systems Analysis Branch at NASA Langley Research Center to develop a suite of integrated physics-based computational utilities suitable for modeling and analyzing extended-duration missions carried out using solar powered aircraft. From these efforts, SolFlyte has emerged as a state-of-the-art vehicle analysis and mission simulation tool capable of modeling both heavier-than-air (HTA) and lighter-than-air (LTA) vehicle concepts. This study compares solar powered airplane and airship station-keeping capability during a variety of high altitude missions, using SolFlyte as the primary analysis component. Three Unmanned Aerial Vehicle (UAV) concepts were designed for this study: an airplane (Operating Empty Weight (OEW) = 3285 kilograms, span = 127 meters, array area = 450 square meters), a small airship (OEW = 3790 kilograms, length = 115 meters, array area = 570 square meters), and a large airship (OEW = 6250 kilograms, length = 135 meters, array area = 1080 square meters). All the vehicles were sized for payload weight and power requirements of 454 kilograms and 5 kilowatts, respectively. Seven mission sites distributed throughout the United States were selected to provide a basis for assessing the vehicle energy budgets and site-persistent operational availability. Seasonal, 30-day duration missions were simulated at each of the sites during March, June, September, and December; one-year duration missions were simulated at three of the sites. Atmospheric conditions during the simulated missions were correlated to National Climatic Data Center (NCDC) historical data measurements at each mission site, at four flight levels. Unique features of the SolFlyte model are described, including methods for calculating recoverable and energy-optimal flight trajectories and the effects of shadows on solar energy collection. Results of this study indicate that: 1) the airplane concept attained longer periods of on

  16. Comparing the Discrete and Continuous Logistic Models

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2008-01-01

    The solutions of the discrete logistic growth model based on a difference equation and the continuous logistic growth model based on a differential equation are compared and contrasted. The investigation is conducted using a dynamic interactive spreadsheet. (Contains 5 figures.)
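
    A minimal sketch of the comparison, iterating the discrete difference equation alongside the closed-form solution of the continuous differential equation (parameter values are arbitrary):

      import numpy as np

      r, K, p0, steps = 0.8, 100.0, 5.0, 15

      # discrete model: p_{n+1} = p_n + r p_n (1 - p_n / K)
      p = [p0]
      for _ in range(steps):
          p.append(p[-1] + r * p[-1] * (1 - p[-1] / K))

      # continuous model: p(t) = K / (1 + ((K - p0)/p0) e^{-rt})
      t = np.arange(steps + 1)
      p_cont = K / (1 + (K - p0) / p0 * np.exp(-r * t))

      for n in range(0, steps + 1, 5):
          print(f"n={n:2d}  discrete={p[n]:7.2f}  continuous={p_cont[n]:7.2f}")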

  17. Comparative Study Of Four Models Of Turbulence

    NASA Technical Reports Server (NTRS)

    Menter, Florian R.

    1996-01-01

    Report presents comparative study of four popular eddy-viscosity models of turbulence. Computations reported for three different adverse pressure-gradient flowfields. Detailed comparison of numerical results and experimental data given. Following models tested: Baldwin-Lomax, Johnson-King, Baldwin-Barth, and Wilcox.

  18. Comparative analysis on the probability of being a good payer

    NASA Astrophysics Data System (ADS)

    Mihova, V.; Pavlov, V.

    2017-10-01

    Credit risk assessment is crucial for the banking industry. Current practice uses various approaches for the calculation of credit risk, at the core of which are multiple regression models applied to assess the risk associated with approving applicants for certain products (loans, credit cards, etc.). Based on data from the past, these models try to predict what will happen in the future. Different data require different types of models. This work studies the causal link between the conduct of applicants while paying off a loan and the data they provided at the time of application. A database of 100 borrowers from a commercial bank is used for the purposes of the study. The available data include information from the time of application and the credit history while the loan was being paid off. Customers are divided into two groups based on credit history: Good and Bad payers. Linear and logistic regression are applied to the data in parallel in order to estimate the probability that a new borrower will be good. A variable taking the value 1 for Good borrowers and 0 for Bad ones is modeled as the dependent variable. To decide which of the variables listed in the database should be used as independent variables in the modelling process, a correlation analysis is performed. Based on its results, several combinations of independent variables are tested as initial models, with both linear and logistic regression. The best linear and logistic models are obtained after an initial transformation of the data and the application of a set of standard and robust statistical criteria. A comparative analysis between the two final models is made, and scorecards are obtained from both models to assess new customers at the time of application. A cut-off level of points, below which applications are rejected and above which they are accepted, has been suggested for both models, applying the strategy of keeping the same Accept Rate as
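
    A minimal sketch of the scoring step, assuming the independent variables have already been selected by the correlation analysis; the applicant data are invented and scikit-learn supplies the logistic fit:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(42)
      n = 100
      X = np.column_stack([
          rng.normal(40, 10, n),          # e.g. applicant age
          rng.normal(2000, 600, n),       # e.g. monthly income
      ])
      # synthetic labels: 1 = Good payer, 0 = Bad payer
      y = (0.03 * (X[:, 0] - 40) + 0.001 * (X[:, 1] - 2000)
           + rng.normal(0, 1, n)) > 0

      model = LogisticRegression().fit(X, y)
      prob_good = model.predict_proba(X)[:, 1]

      # scorecard: rescale probabilities to points, then choose a cut-off
      # that keeps a target accept rate (say 70%)
      points = np.round(prob_good * 1000).astype(int)
      cutoff = int(np.percentile(points, 30))
      print("cut-off:", cutoff, "| share accepted:", (points >= cutoff).mean())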

  19. A GIS-based approach for comparative analysis of potential fire risk assessment

    NASA Astrophysics Data System (ADS)

    Sun, Ying; Hu, Lieqiu; Liu, Huiping

    2007-06-01

    Urban fires are one of the most important sources of property loss and human casualties, and it is therefore necessary to assess potential fire risk with urban community safety in mind. Two evaluation models are proposed, both integrated with GIS. One is a single-factor model concerning the accessibility of fire passages; the other is a grey clustering approach based on a multifactor system. In the latter model, fourteen factors are introduced, divided into four categories: security management, evacuation facilities, construction resistance and fire-fighting capability. A case study on the campus of Beijing Normal University is presented to describe the potential risk assessment models in detail. A comparative analysis of the two models is carried out to validate their accuracy. The results are approximately consistent with each other. Moreover, modeling with GIS improves the efficiency of potential fire risk assessment.

  20. Comparative Numerical Analysis of Different Strengthening Systems of Historical Brick Arches

    NASA Astrophysics Data System (ADS)

    Zielińska, M.

    2017-05-01

    The article presents a comparative numerical analysis of various ways to strengthen historical brick arches. Five ways of strengthening brick arches with steel tie-rods are proposed. Two of these involve braces wrapped around the pillars supporting the arch, connected with a tie-rod; another two involve tie-rods with welded metal sheets of different sizes; the fifth involves a tie-rod glued in place with an epoxy adhesive. The collected data were compared with a reference model of the arch left without any intervention. The results make it possible to evaluate the effectiveness of the methods by comparing displacements in the vertical and horizontal directions and stresses. The article indicates the direction for proper planning and design of arch strengthening in brick structures in historical buildings.

  1. Comparing Models of Spontaneous Variations, Maneuvers and Indexes to Assess Dynamic Cerebral Autoregulation.

    PubMed

    Chacón, Max; Noh, Sun-Ho; Landerretche, Jean; Jara, José L

    2018-01-01

    We analyzed the performance of linear and nonlinear models to assess dynamic cerebral autoregulation (dCA) from spontaneous variations in healthy subjects and compared it with the use of two known maneuvers to abruptly change arterial blood pressure (BP): thigh cuffs and sit-to-stand. Cerebral blood flow velocity and BP were measured simultaneously at rest and while the maneuvers were performed in 20 healthy subjects. To analyze the spontaneous variations, we implemented two types of models using support vector machine (SVM): linear and nonlinear finite impulse response models. The classic autoregulation index (ARI) and the more recently proposed model-free ARI (mfARI) were used as measures of dCA. An ANOVA was applied to compare the different methods, and the coefficient of variation was calculated to evaluate their variability. There are differences between indexes, but not between models or maneuvers. The mfARI index with the sit-to-stand maneuver shows the least variability. Support vector machine modeling of spontaneous variation with the mfARI index could be used for the assessment of dCA as an alternative to maneuvers that introduce large BP fluctuations.
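
    A minimal sketch of the two model families: a finite impulse response design matrix of current and past BP samples fed to support vector regression, where a linear kernel yields the linear model and an RBF kernel the nonlinear one (the signals below are synthetic placeholders):

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      n, lags = 300, 5
      bp = rng.normal(90, 5, n)                  # mean ABP surrogate
      # synthetic CBFV: a short impulse response to BP plus noise
      cbfv = (60 + 0.4 * (bp - 90) - 0.2 * np.roll(bp - 90, 1)
              + rng.normal(0, 1, n))

      # FIR design matrix: columns are bp[t], bp[t-1], ..., bp[t-lags+1]
      X = np.column_stack([np.roll(bp, k) for k in range(lags)])[lags:]
      y = cbfv[lags:]

      for kernel in ("linear", "rbf"):           # linear vs nonlinear FIR
          model = SVR(kernel=kernel, C=10.0).fit(X, y)
          print(kernel, "in-sample R^2 =", round(model.score(X, y), 3))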

  2. Comparative shotgun proteomics using spectral count data and quasi-likelihood modeling.

    PubMed

    Li, Ming; Gray, William; Zhang, Haixia; Chung, Christine H; Billheimer, Dean; Yarbrough, Wendell G; Liebler, Daniel C; Shyr, Yu; Slebos, Robbert J C

    2010-08-06

    Shotgun proteomics provides the most powerful analytical platform for global inventory of complex proteomes using liquid chromatography-tandem mass spectrometry (LC-MS/MS) and allows a global analysis of protein changes. Nevertheless, sampling of complex proteomes by current shotgun proteomics platforms is incomplete, and this contributes to variability in assessment of peptide and protein inventories by spectral counting approaches. Thus, shotgun proteomics data pose challenges in comparing proteomes from different biological states. We developed an analysis strategy using quasi-likelihood Generalized Linear Modeling (GLM), included in a graphical interface software package (QuasiTel) that reads standard output from protein assemblies created by IDPicker, an HTML-based user interface to query shotgun proteomic data sets. This approach was compared to four other statistical analysis strategies: Student t test, Wilcoxon rank test, Fisher's Exact test, and Poisson-based GLM. We analyzed the performance of these tests to identify differences in protein levels based on spectral counts in a shotgun data set in which equimolar amounts of 48 human proteins were spiked at different levels into whole yeast lysates. Both GLM approaches and the Fisher Exact test performed adequately, each with their unique limitations. We subsequently compared the proteomes of normal tonsil epithelium and HNSCC using this approach and identified 86 proteins with differential spectral counts between normal tonsil epithelium and HNSCC. We selected 18 proteins from this comparison for verification of protein levels between the individual normal and tumor tissues using liquid chromatography-multiple reaction monitoring mass spectrometry (LC-MRM-MS). This analysis confirmed the magnitude and direction of the protein expression differences in all 6 proteins for which reliable data could be obtained. Our analysis demonstrates that shotgun proteomic data sets from different tissue phenotypes are
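
    A minimal sketch of the quasi-likelihood step for one protein's spectral counts across two groups, using statsmodels' Poisson GLM with a Pearson chi-squared dispersion estimate (i.e., a quasi-Poisson fit); the counts are invented:

      import numpy as np
      import statsmodels.api as sm

      counts = np.array([12, 15, 9, 14, 30, 26, 35, 28])  # spectral counts
      group = np.array([0, 0, 0, 0, 1, 1, 1, 1])          # normal vs tumor
      X = sm.add_constant(group)

      # scale="X2" estimates overdispersion from Pearson chi-squared,
      # widening the standard errors relative to a plain Poisson GLM
      fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit(scale="X2")
      print(fit.summary().tables[1])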

  3. Comparative Shotgun Proteomics Using Spectral Count Data and Quasi-Likelihood Modeling

    PubMed Central

    2010-01-01

    Shotgun proteomics provides the most powerful analytical platform for global inventory of complex proteomes using liquid chromatography−tandem mass spectrometry (LC−MS/MS) and allows a global analysis of protein changes. Nevertheless, sampling of complex proteomes by current shotgun proteomics platforms is incomplete, and this contributes to variability in assessment of peptide and protein inventories by spectral counting approaches. Thus, shotgun proteomics data pose challenges in comparing proteomes from different biological states. We developed an analysis strategy using quasi-likelihood Generalized Linear Modeling (GLM), included in a graphical interface software package (QuasiTel) that reads standard output from protein assemblies created by IDPicker, an HTML-based user interface to query shotgun proteomic data sets. This approach was compared to four other statistical analysis strategies: Student t test, Wilcoxon rank test, Fisher’s Exact test, and Poisson-based GLM. We analyzed the performance of these tests to identify differences in protein levels based on spectral counts in a shotgun data set in which equimolar amounts of 48 human proteins were spiked at different levels into whole yeast lysates. Both GLM approaches and the Fisher Exact test performed adequately, each with their unique limitations. We subsequently compared the proteomes of normal tonsil epithelium and HNSCC using this approach and identified 86 proteins with differential spectral counts between normal tonsil epithelium and HNSCC. We selected 18 proteins from this comparison for verification of protein levels between the individual normal and tumor tissues using liquid chromatography−multiple reaction monitoring mass spectrometry (LC−MRM-MS). This analysis confirmed the magnitude and direction of the protein expression differences in all 6 proteins for which reliable data could be obtained. Our analysis demonstrates that shotgun proteomic data sets from different tissue

  4. CRITICA: coding region identification tool invoking comparative analysis

    NASA Technical Reports Server (NTRS)

    Badger, J. H.; Olsen, G. J.; Woese, C. R. (Principal Investigator)

    1999-01-01

    Gene recognition is essential to understanding existing and future DNA sequence data. CRITICA (Coding Region Identification Tool Invoking Comparative Analysis) is a suite of programs for identifying likely protein-coding sequences in DNA by combining comparative analysis of DNA sequences with more common noncomparative methods. In the comparative component of the analysis, regions of DNA are aligned with related sequences from the DNA databases; if the translation of the aligned sequences has greater amino acid identity than expected for the observed percentage nucleotide identity, this is interpreted as evidence for coding. CRITICA also incorporates noncomparative information derived from the relative frequencies of hexanucleotides in coding frames versus other contexts (i.e., dicodon bias). The dicodon usage information is derived by iterative analysis of the data, such that CRITICA is not dependent on the existence or accuracy of coding sequence annotations in the databases. This independence makes the method particularly well suited for the analysis of novel genomes. CRITICA was tested by analyzing the available Salmonella typhimurium DNA sequences. Its predictions were compared with the DNA sequence annotations and with the predictions of GenMark. CRITICA proved to be more accurate than GenMark, and moreover, many of its predictions that would seem to be errors instead reflect problems in the sequence databases. The source code of CRITICA is freely available by anonymous FTP (rdp.life.uiuc.edu, in /pub/critica) and on the World Wide Web (http://rdpwww.life.uiuc.edu).

  5. A comparative study of mixture cure models with covariate

    NASA Astrophysics Data System (ADS)

    Leng, Oh Yit; Khalid, Zarina Mohd

    2017-05-01

    In survival analysis, the survival time is assumed to follow a non-negative distribution, such as the exponential, Weibull, and log-normal distributions. In some cases, the survival time is influenced by observed factors, and the absence of these factors from the model may cause inaccurate estimation of the survival function. Therefore, a survival model which incorporates the influences of observed factors is more appropriate in such cases. These observed factors are included in the survival model as covariates. Besides that, there are cases where a group of individuals is cured, that is, never experiences the event of interest. Ignoring the cure fraction may lead to overestimation of the survival function. Thus, a mixture cure model is more suitable for modelling survival data in the presence of a cure fraction. In this study, three mixture cure survival models are used to analyse survival data with a covariate and a cure fraction. The first model includes the covariate in the parameterization of the susceptible individuals' survival function, the second model allows the cure fraction to depend on the covariate, and the third model incorporates the covariate in both the cure fraction and the survival function of susceptible individuals. This study aims to compare the performance of these models via a simulation approach. Therefore, survival data with varying sample sizes and cure fractions are simulated, with the survival time assumed to follow the Weibull distribution. The simulated data are then modelled using the three mixture cure survival models. The results show that the three mixture cure models are more appropriate for modelling survival data in the presence of a cure fraction and an observed factor.
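
    A minimal sketch of the simulation design: Weibull survival times for susceptible individuals, a cure fraction that depends on a binary covariate through a logistic link, and administrative censoring (all parameter values are illustrative):

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate(n, beta0=-0.5, beta1=1.0, shape=1.5, scale=2.0, censor=6.0):
          x = rng.binomial(1, 0.5, n)                       # binary covariate
          pi_cure = 1 / (1 + np.exp(-(beta0 + beta1 * x)))  # cure probability
          cured = rng.uniform(size=n) < pi_cure
          t = scale * rng.weibull(shape, n)                 # susceptible times
          t[cured] = np.inf                                 # cured never fail
          obs = np.minimum(t, censor)                       # censor at study end
          return x, obs, (t <= censor).astype(int)

      x, obs, event = simulate(500)
      print("overall event rate:", event.mean())
      print("event rate by covariate:",
            [round(event[x == v].mean(), 3) for v in (0, 1)])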

  6. Modeling corrosion inhibition efficacy of small organic molecules as non-toxic chromate alternatives using comparative molecular surface analysis (CoMSA).

    PubMed

    Fernandez, Michael; Breedon, Michael; Cole, Ivan S; Barnard, Amanda S

    2016-10-01

    Traditionally, many structural alloys are protected by primer coatings loaded with corrosion-inhibiting additives. Strontium chromate (like other chromates) has been shown to be an extremely effective inhibitor and finds extensive use in protective primer formulations. Unfortunately, the hexavalent chromium which imbues these coatings with their corrosion-inhibiting properties is also highly toxic, and its use is being increasingly restricted by legislation. In this work we explore a novel three-dimensional quantitative structure-property relationship (3D-QSPR) approach, comparative molecular surface analysis (CoMSA), which was developed to recognize "high-performing" corrosion inhibitor candidates from the distributions of electronegativity, polarizability and van der Waals volume on the molecular surfaces of 28 small organic molecules. Multivariate statistical analysis identified five prototype molecules, which are capable of explaining 71% of the variance within the inhibitor data set, while a further five molecules were identified as archetypes, describing 75% of the data variance. All active corrosion inhibitors, at an 80% threshold, were successfully recognized by the CoMSA model, with specificity and precision higher than 70% and 60%, respectively. The model was also capable of identifying structural patterns that reveal reasonable starting points for where structural changes may augment corrosion inhibition efficacy. The presented methodology can be applied to other functional molecules and extended to cover structure-activity studies in a diverse range of areas such as drug design and novel material discovery. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Development and application of a comparative fatty acid analysis method to investigate voriconazole-induced hepatotoxicity.

    PubMed

    Chen, Guan-yuan; Chiu, Huai-hsuan; Lin, Shu-wen; Tseng, Yufeng Jane; Tsai, Sung-jeng; Kuo, Ching-hua

    2015-01-01

    As fatty acids play an important role in biological regulation, the profiling of fatty acid expression has been used to discover various disease markers and to understand disease mechanisms. This study developed an effective and accurate comparative fatty acid analysis method using differential labeling to speed up the metabolic profiling of fatty acids. Fatty acids were derivatized with unlabeled (D0) or deuterated (D3) methanol, followed by GC-MS analysis. The comparative fatty acid analysis method was validated using a series of samples with different ratios of D0/D3-labeled fatty acid standards and with mouse liver extracts. Using a lipopolysaccharide (LPS)-treated mouse model, we found that the fatty acid profiles after LPS treatment were similar between the conventional single-sample analysis approach and the proposed comparative approach, with a Pearson's correlation coefficient of approximately 0.96. We applied the comparative method to investigate voriconazole-induced hepatotoxicity and revealed the toxicity mechanism as well as the potential of using fatty acids as toxicity markers. In conclusion, the comparative fatty acid profiling technique was determined to be fast and accurate and allowed the discovery of potential fatty acid biomarkers in a more economical and efficient manner. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Comparing models of Red Knot population dynamics

    USGS Publications Warehouse

    McGowan, Conor P.

    2015-01-01

    Predictive population modeling contributes to our basic scientific understanding of population dynamics, but can also inform management decisions by evaluating alternative actions in virtual environments. Quantitative models mathematically reflect scientific hypotheses about how a system functions. In Delaware Bay, mid-Atlantic Coast, USA, to more effectively manage horseshoe crab (Limulus polyphemus) harvests and protect Red Knot (Calidris canutus rufa) populations, models are used to compare harvest actions and predict the impacts on crab and knot populations. Management has been chiefly driven by the core hypothesis that horseshoe crab egg abundance governs the survival and reproduction of migrating Red Knots that stopover in the Bay during spring migration. However, recently, hypotheses proposing that knot dynamics are governed by cyclical lemming dynamics garnered some support in data analyses. In this paper, I present alternative models of Red Knot population dynamics to reflect alternative hypotheses. Using 2 models with different lemming population cycle lengths and 2 models with different horseshoe crab effects, I project the knot population into the future under environmental stochasticity and parametric uncertainty with each model. I then compare each model's predictions to 10 yr of population monitoring from Delaware Bay. Using Bayes' theorem and model weight updating, models can accrue weight or support for one or another hypothesis of population dynamics. With 4 models of Red Knot population dynamics and only 10 yr of data, no hypothesis clearly predicted population count data better than another. The collapsed lemming cycle model performed best, accruing ~35% of the model weight, followed closely by the horseshoe crab egg abundance model, which accrued ~30% of the weight. The models that predicted no decline or stable populations (i.e. the 4-yr lemming cycle model and the weak horseshoe crab effect model) were the most weakly supported.
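
    A minimal sketch of the weight-updating step via Bayes' theorem, with invented model predictions and observed counts (in thousands of birds) and Poisson observation error standing in for the paper's monitoring data:

      import numpy as np
      from scipy import stats

      models = ["4-yr lemming", "collapsed lemming", "strong crab", "weak crab"]
      log_w = np.log(np.full(4, 0.25))           # equal prior model weights

      predicted = np.array([                     # predicted mean counts by year
          [45, 44, 46],                          # 4-yr lemming cycle: stable
          [42, 39, 36],                          # collapsed cycle: decline
          [43, 40, 38],                          # strong crab effect
          [46, 45, 46],                          # weak crab effect: stable
      ])
      observed = np.array([41, 39, 37])          # monitoring counts by year

      for yr in range(3):                        # weight x likelihood, each year
          log_w += stats.poisson.logpmf(observed[yr], predicted[:, yr])
          log_w -= log_w.max()                   # keep sums numerically stable
      weights = np.exp(log_w) / np.exp(log_w).sum()
      print(dict(zip(models, np.round(weights, 3))))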

  9. Comparative promoter analysis allows de novo identification of specialized cell junction-associated proteins.

    PubMed

    Cohen, Clemens D; Klingenhoff, Andreas; Boucherot, Anissa; Nitsche, Almut; Henger, Anna; Brunner, Bodo; Schmid, Holger; Merkle, Monika; Saleem, Moin A; Koller, Klaus-Peter; Werner, Thomas; Gröne, Hermann-Josef; Nelson, Peter J; Kretzler, Matthias

    2006-04-11

    Shared transcription factor binding sites that are conserved in distance and orientation help control the expression of gene products that act together in the same biological context. New bioinformatics approaches allow the rapid characterization of shared promoter structures and can be used to find novel interacting molecules. Here, these principles are demonstrated by using molecules linked to the unique functional unit of the glomerular slit diaphragm. An evolutionarily conserved promoter model was generated by comparative genomics in the proximal promoter regions of the slit diaphragm-associated molecule nephrin. Phylogenetic promoter fingerprints of known elements of the slit diaphragm complex identified the nephrin model in the promoter region of zonula occludens-1 (ZO-1). Genome-wide scans using this promoter model effectively predicted a previously unrecognized slit diaphragm molecule, cadherin-5. Nephrin, ZO-1, and cadherin-5 mRNA showed stringent coexpression across a diverse set of human glomerular diseases. Comparative promoter analysis can identify regulatory pathways at work in tissue homeostasis and disease processes.

  10. Comparative biology of cystic fibrosis animal models.

    PubMed

    Fisher, John T; Zhang, Yulong; Engelhardt, John F

    2011-01-01

    Animal models of human diseases are critical for dissecting mechanisms of pathophysiology and developing therapies. In the context of cystic fibrosis (CF), mouse models have been the dominant species by which to study CF disease processes in vivo for the past two decades. Although much has been learned through these CF mouse models, limitations in the ability of this species to recapitulate spontaneous lung disease and several other organ abnormalities seen in CF humans have created a need for additional species on which to study CF. To this end, pig and ferret CF models have been generated by somatic cell nuclear transfer and are currently being characterized. These new larger animal models have phenotypes that appear to closely resemble human CF disease seen in newborns, and efforts to characterize their adult phenotypes are ongoing. This chapter will review current knowledge about comparative lung cell biology and cystic fibrosis transmembrane conductance regulator (CFTR) biology among mice, pigs, and ferrets that has implications for CF disease modeling in these species. We will focus on methods used to compare the biology and function of CFTR between these species and their relevance to phenotypes seen in the animal models. These cross-species comparisons and the development of both the pig and the ferret CF models may help elucidate pathophysiologic mechanisms of CF lung disease and lead to new therapeutic approaches.

  11. Howard University: A Comparative Fiscal Analysis.

    ERIC Educational Resources Information Center

    Inman, Deborah; And Others

    This report presents a fiscal analysis of Howard University (District of Columbia) including: (1) general education revenues; (2) education and general expenditures; and (3) faculty salaries. The study compared Howard University to four different groups of higher education institutions: similar private institutions with hospitals; public…

  12. Comparative assessment of turbulence model in predicting airflow over a NACA 0010 airfoil

    NASA Astrophysics Data System (ADS)

    Panday, Shoyon; Khan, Nafiz Ahmed; Rasel, Md; Faisal, Kh. Md.; Salam, Md. Abdus

    2017-06-01

    Nowadays, the role of computational fluid dynamics in predicting the flow behavior over an airfoil is quite prominent. Most often, a 2-D subsonic flow simulation is carried out over an airfoil at a certain Reynolds number and various angles of attack using different turbulence models, which are based on the governing equations. Commonly used turbulence models include k-epsilon, k-omega and Spalart-Allmaras. The choice of turbulence model strongly influences the results of the analysis. Here, a comparative study is presented to show the effect of different turbulence models in a 2-D flow analysis over a National Advisory Committee for Aeronautics (NACA) 0010 airfoil. The airfoil was analysed at a Reynolds number of 200,000 at 10 different angles of attack and a constant speed of 21.6 m/s. A number of two-dimensional flow simulations were run for each angle of attack, changing the turbulence model each time. Besides documenting the variation of the results across turbulence models, the study identifies which model's results come closest to experimental measurements from an AF100 low-subsonic wind tunnel. The paper also documents the effect of high and low angles of attack on the flow behaviour over the airfoil.

  13. Beyond the scope of Free-Wilson analysis: building interpretable QSAR models with machine learning algorithms.

    PubMed

    Chen, Hongming; Carlsson, Lars; Eriksson, Mats; Varkonyi, Peter; Norinder, Ulf; Nilsson, Ingemar

    2013-06-24

    A novel methodology was developed to build Free-Wilson-like local QSAR models by combining R-group signatures and the SVM algorithm. Unlike Free-Wilson analysis, this method is able to make predictions for compounds with R-groups not present in a training set. Eleven public data sets were chosen as test cases for comparing the performance of our new method with several other traditional modeling strategies, including Free-Wilson analysis. Our results show that the R-group signature SVM models achieve better prediction accuracy compared with Free-Wilson analysis in general. Moreover, the predictions of R-group signature models are also comparable to those of models using ECFP6 fingerprints and signatures for the whole compound. Most importantly, R-group contributions to the SVM model can be obtained by calculating the gradient for R-group signatures. For most of the studied data sets, a significant correlation between these contributions and those of a corresponding Free-Wilson analysis is shown. These results suggest that the R-group contribution can be used to interpret bioactivity data and highlight that the R-group signature-based SVM modeling method is as interpretable as Free-Wilson analysis. Hence the signature SVM model can be a useful modeling tool for any drug discovery project.

  14. Comparing the cognitive differences resulting from modeling instruction: Using computer microworld and physical object instruction to model real world problems

    NASA Astrophysics Data System (ADS)

    Oursland, Mark David

    This study compared the modeling achievement of students receiving mathematical modeling instruction using the computer microworld, Interactive Physics, and students receiving instruction using physical objects. Modeling instruction included activities where students applied the (a) linear model to a variety of situations, (b) linear model to two-rate situations with a constant rate, (c) quadratic model to familiar geometric figures. Both quantitative and qualitative methods were used to analyze achievement differences between students (a) receiving different methods of modeling instruction, (b) with different levels of beginning modeling ability, or (c) with different levels of computer literacy. Student achievement was analyzed quantitatively through a three-factor analysis of variance where modeling instruction, beginning modeling ability, and computer literacy were used as the three independent factors. The SOLO (Structure of the Observed Learning Outcome) assessment framework was used to design written modeling assessment instruments to measure the students' modeling achievement. The same three independent factors were used to collect and analyze the interviews and observations of student behaviors. Both methods of modeling instruction used the data analysis approach to mathematical modeling. The instructional lessons presented problem situations where students were asked to collect data, analyze the data, write a symbolic mathematical equation, and use the equation to solve the problem. The researcher recommends the following practice for modeling instruction based on the conclusions of this study. A variety of activities with a common structure are needed to make explicit the modeling process of applying a standard mathematical model. The modeling process is influenced strongly by prior knowledge of the problem context and previous modeling experiences. The conclusions of this study imply that knowledge of the properties of squares improved the students

  15. Model-Based Safety Analysis

    NASA Technical Reports Server (NTRS)

    Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.

    2006-01-01

    System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.

  16. Highly comparative time-series analysis: the empirical structure of time series and their methods.

    PubMed

    Fulcher, Ben D; Little, Max A; Jones, Nick S

    2013-06-06

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
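
    A minimal sketch of the reduced-representation idea: summarize each series by a few interpretable features, then let those feature vectors organize the collection (here via hierarchical clustering of synthetic series):

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage

      def features(ts):
          d = np.diff(ts)
          return np.array([
              d.std(),                                # step-to-step volatility
              np.corrcoef(ts[:-1], ts[1:])[0, 1],     # lag-1 autocorrelation
              (d[:-1] * d[1:] < 0).mean(),            # turning-point rate
          ])

      rng = np.random.default_rng(0)
      collection = (
          [np.cumsum(rng.normal(size=500)) for _ in range(5)]   # random walks
          + [np.sin(0.1 * np.arange(500)) + 0.1 * rng.normal(size=500)
             for _ in range(5)]                                 # noisy sines
      )
      F = np.array([features(ts) for ts in collection])
      labels = fcluster(linkage(F, method="ward"), t=2, criterion="maxclust")
      print(labels)    # the two dynamical classes fall into two clusters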

  17. Highly comparative time-series analysis: the empirical structure of time series and their methods

    PubMed Central

    Fulcher, Ben D.; Little, Max A.; Jones, Nick S.

    2013-01-01

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines. PMID:23554344

  18. An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol

    2016-01-01

    The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…

  19. National Launch System comparative economic analysis

    NASA Technical Reports Server (NTRS)

    Prince, A.

    1992-01-01

    Results are presented from an analysis of economic benefits (or losses), in the form of the life cycle cost savings, resulting from the development of the National Launch System (NLS) family of launch vehicles. The analysis was carried out by comparing various NLS-based architectures with the current Shuttle/Titan IV fleet. The basic methodology behind this NLS analysis was to develop a set of annual payload requirements for the Space Station Freedom and LEO, to design launch vehicle architectures around these requirements, and to perform life-cycle cost analyses on all of the architectures. A SEI requirement was included. Launch failure costs were estimated and combined with the relative reliability assumptions to measure the effects of losses. Based on the analysis, a Shuttle/NLS architecture evolving into a pressurized-logistics-carrier/NLS architecture appears to offer the best long-term cost benefit.

  20. A Statistical Test for Comparing Nonnested Covariance Structure Models.

    ERIC Educational Resources Information Center

    Levy, Roy; Hancock, Gregory R.

    While statistical procedures are well known for comparing hierarchically related (nested) covariance structure models, statistical tests for comparing nonhierarchically related (nonnested) models have proven more elusive. While isolated attempts have been made, none exists within the commonly used maximum likelihood estimation framework, thereby…

  1. Mental Models about Seismic Effects: Students' Profile Based Comparative Analysis

    ERIC Educational Resources Information Center

    Moutinho, Sara; Moura, Rui; Vasconcelos, Clara

    2016-01-01

    Nowadays, meaningful learning takes a central role in science education and is based on mental models that allow individuals to represent the real world. Thus, it is essential to analyse students' mental models, promoting an easier reconstruction of scientific knowledge and allowing them to become consistent with the curricular…

  2. Small-molecule ligand docking into comparative models with Rosetta

    PubMed Central

    Combs, Steven A; DeLuca, Samuel L; DeLuca, Stephanie H; Lemmon, Gordon H; Nannemann, David P; Nguyen, Elizabeth D; Willis, Jordan R; Sheehan, Jonathan H; Meiler, Jens

    2017-01-01

    Structure-based drug design is frequently used to accelerate the development of small-molecule therapeutics. Although substantial progress has been made in X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy, the availability of high-resolution structures is limited owing to the frequent inability to crystallize or obtain sufficient NMR restraints for large or flexible proteins. Computational methods can be used to both predict unknown protein structures and model ligand interactions when experimental data are unavailable. This paper describes a comprehensive and detailed protocol using the Rosetta modeling suite to dock small-molecule ligands into comparative models. In the protocol presented here, we review the comparative modeling process, including sequence alignment, threading and loop building. Next, we cover docking a small-molecule ligand into the protein comparative model. In addition, we discuss criteria that can improve ligand docking into comparative models. Finally, and importantly, we present a strategy for assessing model quality. The entire protocol is presented on a single example selected solely for didactic purposes. The results are therefore not representative and do not replace benchmarks published elsewhere. We also provide an additional tutorial so that the user can gain hands-on experience in using Rosetta. The protocol should take 5–7 h, with additional time allocated for computer generation of models. PMID:23744289

  3. A Comparative Analysis of Spatial Visualization Ability and Drafting Models for Industrial and Technology Education Students

    ERIC Educational Resources Information Center

    Katsioloudis, Petros; Jovanovic, Vukica; Jones, Mildred

    2014-01-01

    The main purpose of this study was to determine whether the use of three different types of drafting models had significant positive effects, and to identify whether any differences exist in the promotion of spatial visualization ability for students in Industrial Technology and Technology Education courses. In particular, the study compared the use of…

  4. Biomechanical analysis comparing natural and alloplastic temporomandibular joint replacement using a finite element model.

    PubMed

    Mesnard, Michel; Ramos, Antonio; Ballu, Alex; Morlier, Julien; Cid, M; Simoes, J A

    2011-04-01

    Prosthetic materials and bone present quite different mechanical properties. Consequently, mandible reconstruction with metallic materials (or a mandible condyle implant) modifies the physiologic behavior of the mandible (stress, strain patterns, and condyle displacements). The change in bone strain distribution results in an adaptation of the temporomandibular joint, including articular contacts. Using a validated finite element model, the natural mandible strains and condyle displacements were evaluated. Modifications of strains and displacements were then assessed for 2 different temporomandibular joint implants. Because materials and geometry play key roles, the mechanical properties of cortical bone were taken into account in the models used in the finite element analysis. The finite element model allowed verification of the worst loading configuration of the mandibular condyle. Replacing the natural condyle by 1 of the 2 tested implants, the results also show the importance of implant geometry for biomechanical mandibular behavior. The implant geometry and stiffness mainly influenced the strain distribution. The different forces applied to the mandible by the elevator muscles, teeth, and joint loads indicate that the finite element model is a relevant tool to optimize implant geometry or, in a subsequent study, to choose a more suitable distribution of the screws. Bone screws (number and position) have a significant influence on mandibular behavior and on the implant stress pattern. Stress concentration and implant fracture must be avoided. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  5. Comparative analysis of ventricular assist devices (POLVAD and POLVAD_EXT) based on multiscale FEM model.

    PubMed

    Milenin, Andrzej; Kopernik, Magdalena

    2011-01-01

    The prosthesis, a pulsatory ventricular assist device (VAD), is made of polyurethane (PU) and biocompatible TiN deposited by the pulsed laser deposition (PLD) method. The paper discusses the numerical modelling and computer-aided design of such an artificial organ. Two types of VAD, POLVAD and POLVAD_EXT, are investigated. The main tasks and assumptions of the computer program developed are presented. The multiscale model of the VAD based on the finite element method (FEM) is introduced, the analysis of the stress-strain state in macroscale for the blood chamber in both versions of the VAD is shown, and the results are verified against those calculated with ABAQUS, a commercial FEM code. The FEM code developed is based on a new approach to the simulation of multilayer materials obtained by the PLD method. The model in microscale includes two components, i.e., a model of the initial (residual) stresses caused by the deposition process and a simulation of the active loadings observed in the blood chamber of the POLVAD and POLVAD_EXT. The computed distributions of stresses and strains in macro- and microscales are helpful in precisely identifying the regions of the blood chamber that can be regarded as failure-source areas.

  6. Support vector regression and artificial neural network models for stability indicating analysis of mebeverine hydrochloride and sulpiride mixtures in pharmaceutical preparation: A comparative study

    NASA Astrophysics Data System (ADS)

    Naguib, Ibrahim A.; Darwish, Hany W.

    2012-02-01

    A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, outlining the underlying algorithm of each and comparing them to indicate their inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (genetic algorithm (GA)). To project the comparison in a sensible way, the methods are used for the stability-indicating quantitative analysis of mixtures of mebeverine hydrochloride and sulpiride in binary mixtures as a case study, in the presence of their reported impurities and degradation products (summing up to 6 components) in raw materials and pharmaceutical dosage form, via handling the UV spectral data. For proper analysis, a 6-factor 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR (without GA) and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride mixtures. The results manifest the problem of nonlinearity and how models like SVR and ANN can handle it. The methods indicate the ability of the mentioned multivariate calibration models to deconvolute the highly overlapped UV spectra of the 6-component mixtures, while using inexpensive and easy-to-handle instruments like the UV spectrophotometer.
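
    A minimal Python sketch of the comparison on synthetic data (random "spectra", not the paper's UV measurements), reusing the same 25-sample training and 5-sample test set sizes with scikit-learn's SVR and an ANN regressor.

    ```python
    # Synthetic stand-in for multivariate calibration; all data are invented.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(25, 40))           # 25 training "spectra", 40 wavelengths
    w = rng.normal(size=40)
    y = X @ w + 0.05 * rng.normal(size=25)   # "concentration" of one component
    X_test = rng.uniform(size=(5, 40))       # independent 5-mixture test set
    y_test = X_test @ w

    for model in (SVR(kernel="linear", C=10.0),
                  MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                               random_state=0)):
        model.fit(X, y)
        mse = mean_squared_error(y_test, model.predict(X_test))
        print(f"{type(model).__name__}: test MSE = {mse:.4f}")
    ```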

  7. Comparative Genome Analysis of Enterobacter cloacae

    PubMed Central

    Liu, Wing-Yee; Wong, Chi-Fat; Chung, Karl Ming-Kar; Jiang, Jing-Wei; Leung, Frederick Chi-Ching

    2013-01-01

    The Enterobacter cloacae species includes an extremely diverse group of bacteria that are associated with plants, soil and humans. Publication of the complete genome sequence of the plant growth-promoting endophytic E. cloacae subsp. cloacae ENHKU01 provided an opportunity to perform the first comparative genome analysis between strains of this dynamic species. Examination of the pan-genome of E. cloacae showed that the conserved core genome retains the general physiological and survival genes of the species, while genomic factors in plasmids and variable regions determine the virulence of the human pathogenic E. cloacae strain; additionally, the diversity of fimbriae contributes to variation in colonization and host determination of different E. cloacae strains. Comparative genome analysis further illustrated that E. cloacae strains possess multiple mechanisms for antagonistic action against other microorganisms, which involve the production of siderophores and various antimicrobial compounds, such as bacteriocins, chitinases and antibiotic resistance proteins. The presence of Type VI secretion systems is expected to provide further fitness advantages for E. cloacae in microbial competition, thus allowing it to survive in different environments. Competition assays were performed to support our observations in genomic analysis, where E. cloacae subsp. cloacae ENHKU01 demonstrated antagonistic activities against a wide range of plant pathogenic fungal and bacterial species. PMID:24069314

  8. Modified optimal control pilot model for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1992-01-01

    This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm designed for ease of implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably to the measured experimental results than the other previously proposed simplified models evaluated.

  9. An efficient current-based logic cell model for crosstalk delay analysis

    NASA Astrophysics Data System (ADS)

    Nazarian, Shahin; Das, Debasish

    2013-04-01

    Logic cell modelling is an important component in the analysis and design of CMOS integrated circuits, mostly due to the nonlinear behaviour of CMOS cells with respect to the voltage signals at their input and output pins. A current-based model for CMOS logic cells is presented, which can be used for effective crosstalk noise and delta-delay analysis in CMOS VLSI circuits. Existing current source models are expensive and need a new set of Spice-based characterisation, which is not compatible with typical EDA tools. In this article we present Imodel, a simple nonlinear logic cell model that can be derived from typical cell libraries such as NLDM, with accuracy much higher than that of NLDM-based cell delay models. In fact, our experiments show an average error of 3% compared to Spice. This level of accuracy comes with a maximum runtime penalty of 19% compared to NLDM-based cell delay models on medium-sized industrial designs.
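
    For contrast with a current-based model, the sketch below shows what an NLDM-style lookup amounts to: a delay table indexed by input slew and output load, interpolated between characterized points. The table axes and values are invented for illustration.

    ```python
    # NLDM-style cell delay lookup via bilinear interpolation (values invented).
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    slews = np.array([0.01, 0.05, 0.2])           # input transition times, ns
    loads = np.array([0.001, 0.01, 0.05])         # output capacitances, pF
    delay_table = np.array([[0.02, 0.04, 0.09],   # rows: slew, cols: load (ns)
                            [0.03, 0.05, 0.11],
                            [0.06, 0.09, 0.16]])

    delay = RegularGridInterpolator((slews, loads), delay_table)
    print(delay([[0.1, 0.02]]))     # delay at slew = 0.1 ns, load = 0.02 pF
    ```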

  10. General quantitative genetic methods for comparative biology: phylogenies, taxonomies and multi-trait models for continuous and categorical characters.

    PubMed

    Hadfield, J D; Nakagawa, S

    2010-03-01

    Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.

  11. Entrance and exit region friction factor models for annular seal analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Elrod, David Alan

    1988-01-01

    The Mach number definition and boundary conditions in Nelson's nominally-centered, annular gas seal analysis are revised. A method is described for determining the wall shear stress characteristics of an annular gas seal experimentally. Two friction factor models are developed for annular seal analysis; one model is based on flat-plate flow theory; the other uses empirical entrance and exit region friction factors. The friction factor predictions of the models are compared to experimental results. Each friction model is used in an annular gas seal analysis. The seal characteristics predicted by the two seal analyses are compared to experimental results and to the predictions of Nelson's analysis. The comparisons are for smooth-rotor seals with smooth and honeycomb stators. The comparisons show that the analysis which uses empirical entrance and exit region shear stress models predicts the static and stability characteristics of annular gas seals better than the other analyses. The analyses predict direct stiffness poorly.

  12. Categorical Data Analysis Using a Skewed Weibull Regression Model

    NASA Astrophysics Data System (ADS)

    Caron, Renault; Sinha, Debajyoti; Dey, Dipak; Polpo, Adriano

    2018-03-01

    In this paper, we present a Weibull link (skewed) model for categorical response data arising from binomial as well as multinomial models. We show that, for such types of categorical data, the most commonly used models (logit, probit and complementary log-log) can be obtained as limiting cases. We further compare the proposed model with some other asymmetrical models. The Bayesian as well as frequentist estimation procedures for binomial and multinomial data responses are presented in detail. Two data sets are analysed to show the efficiency of the proposed model.
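
    A small numerical illustration of the links being compared, under an assumed parameterization: the symmetric logit and probit, the asymmetric complementary log-log, and a Weibull CDF whose shape parameter controls the skew of the response curve.

    ```python
    # Response probabilities under four link functions (parameters assumed).
    import numpy as np
    from scipy.stats import norm, weibull_min

    eta = np.linspace(0.05, 3.0, 6)                   # linear predictor grid
    logit   = 1.0 / (1.0 + np.exp(-(eta - 1.5)))      # symmetric
    probit  = norm.cdf(eta - 1.5)                     # symmetric
    cloglog = 1.0 - np.exp(-np.exp(eta - 1.5))        # asymmetric
    weib    = weibull_min.cdf(eta, c=1.5)             # shape c skews the curve

    print("eta    logit  probit cloglog weibull")
    for row in zip(eta, logit, probit, cloglog, weib):
        print("  ".join(f"{v:.3f}" for v in row))
    ```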

  13. Choice-Based Conjoint Analysis: Classification vs. Discrete Choice Models

    NASA Astrophysics Data System (ADS)

    Giesen, Joachim; Mueller, Klaus; Taneva, Bilyana; Zolliker, Peter

    Conjoint analysis is a family of techniques that originated in psychology and later became popular in market research. The main objective of conjoint analysis is to measure an individual's or a population's preferences on a class of options that can be described by parameters and their levels. We consider preference data obtained in choice-based conjoint analysis studies, where one observes test persons' choices on small subsets of the options. There are many ways to analyze choice-based conjoint analysis data. Here we discuss the intuition behind a classification based approach, and compare this approach to one based on statistical assumptions (discrete choice models) and to a regression approach. Our comparison on real and synthetic data indicates that the classification approach outperforms the discrete choice models.

  14. Bayesian multivariate hierarchical transformation models for ROC analysis.

    PubMed

    O'Malley, A James; Zou, Kelly H

    2006-02-15

    A Bayesian multivariate hierarchical transformation model (BMHTM) is developed for receiver operating characteristic (ROC) curve analysis based on clustered continuous diagnostic outcome data with covariates. Two special features of this model are that it incorporates non-linear monotone transformations of the outcomes and that multiple correlated outcomes may be analysed. The mean, variance, and transformation components are all modelled parametrically, enabling a wide range of inferences. The general framework is illustrated by focusing on two problems: (1) analysis of the diagnostic accuracy of a covariate-dependent univariate test outcome requiring a Box-Cox transformation within each cluster to map the test outcomes to a common family of distributions; (2) development of an optimal composite diagnostic test using multivariate clustered outcome data. In the second problem, the composite test is estimated using discriminant function analysis and compared to the test derived from logistic regression analysis where the gold standard is a binary outcome. The proposed methodology is illustrated on prostate cancer biopsy data from a multi-centre clinical trial.
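
    A minimal frequentist sketch of the first illustration, assuming log-normal test outcomes: Box-Cox transform the skewed scores toward normality, then summarize diagnostic accuracy with an AUC (a plain stand-in for the full Bayesian hierarchical machinery).

    ```python
    # Box-Cox transformation of a skewed diagnostic outcome, then ROC AUC.
    import numpy as np
    from scipy.stats import boxcox
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    healthy  = rng.lognormal(mean=0.0, sigma=0.6, size=200)   # simulated
    diseased = rng.lognormal(mean=0.8, sigma=0.6, size=200)   # simulated

    scores = np.concatenate([healthy, diseased])
    labels = np.concatenate([np.zeros(200), np.ones(200)])
    transformed, lam = boxcox(scores)        # maps outcomes toward normality
    print(f"lambda = {lam:.2f}, AUC = {roc_auc_score(labels, transformed):.3f}")
    ```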

  15. Bayesian multivariate hierarchical transformation models for ROC analysis

    PubMed Central

    O'Malley, A. James; Zou, Kelly H.

    2006-01-01

    A Bayesian multivariate hierarchical transformation model (BMHTM) is developed for receiver operating characteristic (ROC) curve analysis based on clustered continuous diagnostic outcome data with covariates. Two special features of this model are that it incorporates non-linear monotone transformations of the outcomes and that multiple correlated outcomes may be analysed. The mean, variance, and transformation components are all modelled parametrically, enabling a wide range of inferences. The general framework is illustrated by focusing on two problems: (1) analysis of the diagnostic accuracy of a covariate-dependent univariate test outcome requiring a Box–Cox transformation within each cluster to map the test outcomes to a common family of distributions; (2) development of an optimal composite diagnostic test using multivariate clustered outcome data. In the second problem, the composite test is estimated using discriminant function analysis and compared to the test derived from logistic regression analysis where the gold standard is a binary outcome. The proposed methodology is illustrated on prostate cancer biopsy data from a multi-centre clinical trial. PMID:16217836

  16. Sentiments Analysis of Reviews Based on ARCNN Model

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyu; Xu, Ming; Xu, Jian; Zheng, Ning; Yang, Tao

    2017-10-01

    The sentiment analysis of product reviews is designed to help customers understand the status of a product. Traditional methods of sentiment analysis rely on a fixed-length input feature vector, which is a performance bottleneck of the basic encoder-decoder (codec) architecture. In this paper, we propose an attention mechanism combined with a BRNN-CNN model, referred to as the ARCNN model. To capture the semantic relations between words and avoid the curse of dimensionality, we use the GloVe algorithm to train the vector representations of words. The ARCNN model then addresses the training of deep features: the BRNN component handles variable-length inputs and preserves time-series information, while the CNN component learns deeper semantic connections. Moreover, the attention mechanism learns automatically from the data and optimizes the allocation of weights. Finally, a softmax classifier completes the sentiment classification of the reviews. Experiments show that the proposed method improves the accuracy of sentiment classification compared with benchmark methods.

  17. The digital storytelling process: A comparative analysis from various experts

    NASA Astrophysics Data System (ADS)

    Hussain, Hashiroh; Shiratuddin, Norshuhada

    2016-08-01

    Digital Storytelling (DST) is a method of delivering information to an audience that combines narrative with digital media content infused with multimedia elements. To help educators (i.e. the designers) create a compelling digital story, experts have introduced sets of processes to guide them; however, the suggested processes vary, and some are redundant. The main aim of this study is to propose a single guiding process for the creation of DST. A comparative analysis is employed in which ten DST models from various experts are analysed. The resulting process can also be applied to other multimedia materials that use the concept of DST.

  18. Network Analysis in Comparative Social Sciences

    ERIC Educational Resources Information Center

    Vera, Eugenia Roldan; Schupp, Thomas

    2006-01-01

    This essay describes the pertinence of Social Network Analysis (SNA) for the social sciences in general, and discusses its methodological and conceptual implications for comparative research in particular. The authors first present a basic summary of the theoretical and methodological assumptions of SNA, followed by a succinct overview of its…

  19. Statistical Power of Alternative Structural Models for Comparative Effectiveness Research: Advantages of Modeling Unreliability.

    PubMed

    Coman, Emil N; Iordache, Eugen; Dierker, Lisa; Fifield, Judith; Schensul, Jean J; Suggs, Suzanne; Barbour, Russell

    2014-05-01

    The advantages of modeling the unreliability of outcomes when evaluating the comparative effectiveness of health interventions are illustrated. Adding an action-research intervention component to a regular summer job program for youth was expected to help in preventing risk behaviors. A series of simple two-group alternative structural equation models are compared to test the effect of the intervention on one key attitudinal outcome, in terms of model fit and statistical power, with Monte Carlo simulations. Some models presuming parameters equal across the intervention and comparison groups were underpowered to detect the intervention effect, yet modeling the unreliability of the outcome measure increased their statistical power and helped in the detection of the hypothesized effect. Comparative effectiveness research (CER) could benefit from flexible multi-group alternative structural models organized in decision trees, and modeling the unreliability of measures can be of tremendous help for both the fit of statistical models to the data and their statistical power.
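
    A Monte Carlo sketch of the core point, under assumed effect size and reliability: measurement error attenuates an observed group difference, and acknowledging the unreliability (here crudely, by averaging parallel indicators rather than fitting a latent-variable model) recovers statistical power.

    ```python
    # Power to detect a group difference with noisy single vs multiple indicators.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(3)

    def power(n_items, reps=2000, n=60, effect=0.4, error_sd=1.5):
        hits = 0
        for _ in range(reps):
            true_c = rng.normal(0.0, 1.0, n)          # control latent scores
            true_t = rng.normal(effect, 1.0, n)       # treated latent scores
            obs_c = true_c[:, None] + rng.normal(0, error_sd, (n, n_items))
            obs_t = true_t[:, None] + rng.normal(0, error_sd, (n, n_items))
            hits += ttest_ind(obs_t.mean(axis=1), obs_c.mean(axis=1)).pvalue < 0.05
        return hits / reps

    print("1 indicator :", power(1))   # attenuated, underpowered
    print("5 indicators:", power(5))   # unreliability acknowledged, power recovered
    ```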

  20. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems.

    ERIC Educational Resources Information Center

    Nee, John G.; Kare, Audhut P.

    1987-01-01

    Explores several concepts in computer-aided design/computer-aided manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)

  1. Initial implementation of a comparative data analysis ontology.

    PubMed

    Prosdocimi, Francisco; Chisham, Brandon; Pontelli, Enrico; Thompson, Julie D; Stoltzfus, Arlin

    2009-07-03

    Comparative analysis is used throughout biology. When entities under comparison (e.g. proteins, genomes, species) are related by descent, evolutionary theory provides a framework that, in principle, allows N-ary comparisons of entities, while controlling for non-independence due to relatedness. Powerful software tools exist for specialized applications of this approach, yet it remains under-utilized in the absence of a unifying informatics infrastructure. A key step in developing such an infrastructure is the definition of a formal ontology. The analysis of use cases and existing formalisms suggests that a significant component of evolutionary analysis involves a core problem of inferring a character history, relying on key concepts: "Operational Taxonomic Units" (OTUs), representing the entities to be compared; "character-state data" representing the observations compared among OTUs; "phylogenetic tree", representing the historical path of evolution among the entities; and "transitions", the inferred evolutionary changes in states of characters that account for observations. Using the Web Ontology Language (OWL), we have defined these and other fundamental concepts in a Comparative Data Analysis Ontology (CDAO). CDAO has been evaluated for its ability to represent token data sets and to support simple forms of reasoning. With further development, CDAO will provide a basis for tools (for semantic transformation, data retrieval, validation, integration, etc.) that make it easier for software developers and biomedical researchers to apply evolutionary methods of inference to diverse types of data, so as to integrate this powerful framework for reasoning into their research.
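
    A toy rendering in Python of what such ontology definitions look like; the namespace, class names and property below are placeholders echoing the concepts above, not the published CDAO vocabulary.

    ```python
    # Illustrative RDF triples for CDAO-like concepts, using rdflib.
    from rdflib import Graph, Namespace, RDF, RDFS, Literal

    CDAO = Namespace("http://example.org/cdao-sketch#")   # placeholder namespace
    g = Graph()
    for cls in ("OTU", "PhylogeneticTree", "CharacterStateDatum", "Transition"):
        g.add((CDAO[cls], RDF.type, RDFS.Class))

    g.add((CDAO.otu1, RDF.type, CDAO.OTU))
    g.add((CDAO.otu1, RDFS.label, Literal("Homo sapiens")))
    g.add((CDAO.obs1, RDF.type, CDAO.CharacterStateDatum))
    g.add((CDAO.obs1, CDAO.belongsToOTU, CDAO.otu1))      # property name invented

    print(g.serialize(format="turtle"))
    ```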

  2. Comparative Modeling Studies of Boreal Water and Carbon Balance

    NASA Technical Reports Server (NTRS)

    Coughlan, J.; Peterson, David L. (Technical Monitor)

    1997-01-01

    The coordination of the modeling and field efforts for an Intensive Field Campaign (IFC) may resemble the chicken-and-egg dilemma. This session's theme advocates that early and proactive involvement by modeling teams can produce a scientific and operational benefit for the IFC and Experiment. This talk will provide some examples and suggestions originating from the NASA-funded IFCs of the First ISLSCP (International Satellite Land Surface Climatology Project) Field Experiment (FIFE), the Oregon Transect Ecosystem Research (OTTER) project and, predominantly, the Boreal Ecosystem-Atmosphere Study (BOREAS). In February 1994, prior to the final selection of the BOREAS study sites, a group of funded BOREAS investigators agreed to run their models with data for five community types representing the proposed tower flux sites. All participating models were given identical initial values and boundary conditions and driven with identical climate data. The objectives of the intercomparison exercise were: 1) compare simulation results of participating terrestrial, hydrological, and atmospheric models over selected time frames; 2) learn about model behavior and sensitivity to estimated boreal site and vegetation definitions; 3) prioritize BOREAS field data collection efforts supporting modeling studies; 4) identify individual model deficiencies as early as possible. Out of these objectives evolved some important coordination and science issues for the BOREAS Experiment that can be generalized to IFCs and to long-term archiving of the data. Some problems are acceptable because they are endemic to maintaining fair and open competition prior to the peer review process. Others are logistical and addressable through application of planning, management, and information sciences. This investigator has identified one source of measurement and model incompatibility that is manifest in the IFC scaling approach. Although intuitively obvious, scaling problems are already more formally defined in

  3. A comparative study of multivariable robustness analysis methods as applied to integrated flight and propulsion control

    NASA Technical Reports Server (NTRS)

    Schierman, John D.; Lovell, T. A.; Schmidt, David K.

    1993-01-01

    Three multivariable robustness analysis methods are compared and contrasted. The focus of the analysis is on system stability and performance robustness to uncertainty in the coupling dynamics between two interacting subsystems. Of particular interest is interacting airframe and engine subsystems, and an example airframe/engine vehicle configuration is utilized in the demonstration of these approaches. The singular value (SV) and structured singular value (SSV) analysis methods are compared to a method especially well suited for analysis of robustness to uncertainties in subsystem interactions. This approach is referred to here as the interacting subsystem (IS) analysis method. This method has been used previously to analyze airframe/engine systems, emphasizing the study of stability robustness. However, performance robustness is also investigated here, and a new measure of allowable uncertainty for acceptable performance robustness is introduced. The IS methodology does not require plant uncertainty models to measure the robustness of the system, and is shown to yield valuable information regarding the effects of subsystem interactions. In contrast, the SV and SSV methods allow for the evaluation of the robustness of the system to particular models of uncertainty, and do not directly indicate how the airframe (engine) subsystem interacts with the engine (airframe) subsystem.

  4. Comparative genome analysis in the integrated microbial genomes (IMG) system.

    PubMed

    Markowitz, Victor M; Kyrpides, Nikos C

    2007-01-01

    Comparative genome analysis is critical for the effective exploration of a rapidly growing number of complete and draft sequences for microbial genomes. The Integrated Microbial Genomes (IMG) system (img.jgi.doe.gov) has been developed as a community resource that provides support for comparative analysis of microbial genomes in an integrated context. IMG allows users to navigate the multidimensional microbial genome data space and focus their analysis on a subset of genes, genomes, and functions of interest. IMG provides graphical viewers, summaries, and occurrence profile tools for comparing genes, pathways, and functions (terms) across specific genomes. Genes can be further examined using gene neighborhoods and compared with sequence alignment tools.

  5. Comparative lipidomic analysis of synovial fluid in human and canine osteoarthritis.

    PubMed

    Kosinska, M K; Mastbergen, S C; Liebisch, G; Wilhelm, J; Dettmeyer, R B; Ishaque, B; Rickert, M; Schmitz, G; Lafeber, F P; Steinmeyer, J

    2016-08-01

    The lipid profile of synovial fluid (SF) is related to the health status of joints. The early stages of human osteoarthritis (OA) are poorly understood; larger animals are expected to be able to model these stages closely. This study examined whether the canine groove model of OA represents early OA in humans, based on the changes in the lipid species profile in SF. Furthermore, the SF lipidomes of humans and dogs were compared to determine how closely canine lipid species profiles reflect the human lipidome. Lipids were extracted from cell- and cellular debris-free knee SF from nine donors with healthy joints, 17 patients with early and 13 patients with late osteoarthritic changes, and nine dogs with knee OA and healthy contralateral joints. Lipid species were quantified by electrospray ionization tandem mass spectrometry (ESI-MS/MS). Compared with control canine SF, most lipid species were elevated in canine OA SF. Moreover, the lipid species profiles in the canine OA model resembled early OA profiles in humans. The SF lipidomes of dog and human were generally similar, with differences in certain lipid species in the phosphatidylcholine (PC), lysophosphatidylcholine (LPC) and sphingomyelin (SM) classes. Our lipidomic analysis demonstrates that SF in the canine OA model closely mimics the early osteoarthritic changes that occur in humans. Further, the canine SF lipidome often reflects normal human lipid metabolism. Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  6. Comparative analysis on the selection of number of clusters in community detection

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro; Kabashima, Yoshiyuki

    2018-02-01

    We conduct a comparative analysis of various estimates of the number of clusters in community detection. An exhaustive comparison requires testing all possible combinations of frameworks, algorithms, and assessment criteria. In this paper we focus on the framework based on a stochastic block model, and investigate the performance of greedy algorithms, statistical inference, and spectral methods. For the assessment criteria, we consider modularity, the map equation, the Bethe free energy, prediction errors, and isolated eigenvalues. From the analysis, the tendencies of the assessment criteria and algorithms to overfit or underfit become apparent. In addition, we propose the alluvial diagram as a suitable tool to visualize statistical inference results, which can be useful for determining the number of clusters.
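
    One of the assessment criteria in miniature, as a hedged Python sketch: on a graph with two planted blocks, accept the community division that maximizes modularity (networkx's greedy optimizer stands in for the algorithms compared in the paper).

    ```python
    # Modularity-based community detection on a planted two-block graph.
    import networkx as nx
    from networkx.algorithms.community import (greedy_modularity_communities,
                                               modularity)

    G = nx.planted_partition_graph(2, 50, p_in=0.2, p_out=0.02, seed=0)
    communities = greedy_modularity_communities(G)
    print("detected clusters:", len(communities))
    print("modularity:", round(modularity(G, communities), 3))
    ```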

  7. A comparative analysis of biclustering algorithms for gene expression data

    PubMed Central

    Eren, Kemal; Deveci, Mehmet; Küçüktunç, Onur; Çatalyürek, Ümit V.

    2013-01-01

    The need to analyze high-dimension biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibit similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of algorithms. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem of evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance on data with varying conditions, such as different bicluster models, varying noise, varying numbers of biclusters and overlapping biclusters. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters. PMID:22772837
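
    A minimal sketch of the synthetic-benchmark idea using scikit-learn (one spectral algorithm on checkerboard data, rather than the twelve algorithms and the BiBench suite used in the article).

    ```python
    # Biclustering synthetic checkerboard data and recovering row/column labels.
    from sklearn.datasets import make_checkerboard
    from sklearn.cluster import SpectralBiclustering

    data, rows, cols = make_checkerboard(shape=(300, 300), n_clusters=(4, 3),
                                         noise=10, random_state=0)
    model = SpectralBiclustering(n_clusters=(4, 3), random_state=0).fit(data)
    print(model.row_labels_[:10])      # inferred row bicluster memberships
    print(model.column_labels_[:10])   # inferred column bicluster memberships
    ```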

  8. A Bayesian Framework for Analysis of Pseudo-Spatial Models of Comparable Engineered Systems with Application to Spacecraft Anomaly Prediction Based on Precedent Data

    NASA Astrophysics Data System (ADS)

    Ndu, Obibobi Kamtochukwu

    To ensure that estimates of risk and reliability inform design and resource allocation decisions in the development of complex engineering systems, early engagement in the design life cycle is necessary. An unfortunate constraint on the accuracy of such estimates at this stage of concept development is the limited amount of high fidelity design and failure information available on the actual system under development. Applying the human ability to learn from experience and augment our state of knowledge to evolve better solutions mitigates this limitation. However, the challenge lies in formalizing a methodology that takes this highly abstract, but fundamentally human cognitive, ability and extending it to the field of risk analysis while maintaining the tenets of generalization, Bayesian inference, and probabilistic risk analysis. We introduce an integrated framework for inferring the reliability, or other probabilistic measures of interest, of a new system or a conceptual variant of an existing system. Abstractly, our framework is based on learning from the performance of precedent designs and then applying the acquired knowledge, appropriately adjusted based on degree of relevance, to the inference process. This dissertation presents a method for inferring properties of the conceptual variant using a pseudo-spatial model that describes the spatial configuration of the family of systems to which the concept belongs. Through non-metric multidimensional scaling, we formulate the pseudo-spatial model based on rank-ordered subjective expert perception of design similarity between systems that elucidate the psychological space of the family. By a novel extension of Kriging methods for analysis of geospatial data to our "pseudo-space of comparable engineered systems", we develop a Bayesian inference model that allows prediction of the probabilistic measure of interest.
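
    A compressed Python sketch of the two pieces the framework couples, with invented inputs: non-metric MDS embeds expert dissimilarity judgments into a low-dimensional pseudo-space, and a Gaussian process (the statistical core of Kriging) predicts a reliability-like measure for a system left out of the training set.

    ```python
    # Pseudo-spatial embedding plus Gaussian-process inference (data invented).
    import numpy as np
    from sklearn.manifold import MDS
    from sklearn.gaussian_process import GaussianProcessRegressor

    D = np.array([[0, 1, 3, 4],        # rank-order dissimilarities between
                  [1, 0, 2, 4],        # four notional systems (assumed)
                  [3, 2, 0, 1],
                  [4, 4, 1, 0]], dtype=float)
    coords = MDS(n_components=2, metric=False, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)

    reliab = np.array([0.95, 0.93, 0.80])          # known systems (assumed)
    gp = GaussianProcessRegressor().fit(coords[:3], reliab)
    mean, std = gp.predict(coords[3:], return_std=True)
    print(f"predicted reliability of system 4: {mean[0]:.3f} +/- {std[0]:.3f}")
    ```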

  9. Comparing the Cognitive Process of Circular Causality in Two Patients with Strokes through Qualitative Analysis.

    PubMed

    Derakhshanrad, Seyed Alireza; Piven, Emily; Ghoochani, Bahareh Zeynalzadeh

    2017-10-01

    Walter J. Freeman pioneered the neurodynamic model of brain activity when he described the brain dynamics for cognitive information transfer as the process of circular causality at intention, meaning, and perception (IMP) levels. This view contributed substantially to establishment of the Intention, Meaning, and Perception Model of Neuro-occupation in occupational therapy. As described by the model, IMP levels are three components of the brain dynamics system, with nonlinear connections that enable cognitive function to be processed in a circular causality fashion, known as Cognitive Process of Circular Causality (CPCC). Although considerable research has been devoted to study the brain dynamics by sophisticated computerized imaging techniques, less attention has been paid to study it through investigating the adaptation process of thoughts and behaviors. To explore how CPCC manifested thinking and behavioral patterns, a qualitative case study was conducted on two matched female participants with strokes, who were of comparable ages, affected sides, and other characteristics, except for their resilience and motivational behaviors. CPCC was compared by matrix analysis between two participants, using content analysis with pre-determined categories. Different patterns of thinking and behavior may have happened, due to disparate regulation of CPCC between two participants.

  10. A Comparative Analysis of a Generalized Lanchester Equation Model and a Stochastic Computer Simulation Model.

    DTIC Science & Technology

    1987-03-01

    model is one in which words or numerical descriptions are used to represent an entity or process. An example of a symbolic model is a mathematical ... are the third type of model used in modeling combat attrition. Analytical models are symbolic models which use mathematical symbols and equations to ... simplicity and the ease of tracing through the mathematical computations. In this section I will discuss some of the shortcomings which have been
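
    For concreteness, a minimal numerical integration of the classic Lanchester square-law attrition equations, the kind of deterministic analytical model such studies compare against stochastic simulations; the coefficients and initial strengths below are invented.

    ```python
    # Lanchester square law: dx/dt = -a*y, dy/dt = -b*x (parameters invented).
    from scipy.integrate import solve_ivp

    a, b = 0.02, 0.03        # effectiveness of Y against X, and of X against Y

    def lanchester(t, s):
        x, y = s
        return [-a * y if x > 0 else 0.0,
                -b * x if y > 0 else 0.0]

    sol = solve_ivp(lanchester, (0, 200), [1000.0, 900.0], max_step=1.0)
    print("final strengths (X, Y):", sol.y[:, -1].round(1))
    ```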

  11. Subject-specific longitudinal shape analysis by coupling spatiotemporal shape modeling with medial analysis

    NASA Astrophysics Data System (ADS)

    Hong, Sungmin; Fishbaugh, James; Rezanejad, Morteza; Siddiqi, Kaleem; Johnson, Hans; Paulsen, Jane; Kim, Eun Young; Gerig, Guido

    2017-02-01

    Modeling subject-specific shape change is one of the most important challenges in longitudinal shape analysis of disease progression. Whereas anatomical change over time can be a function of normal aging, anatomy can also be impacted by disease related degeneration. Anatomical shape change may also be affected by structural changes from neighboring shapes, which may cause non-linear variations in pose. In this paper, we propose a framework to analyze disease related shape changes by coupling extrinsic modeling of the ambient anatomical space via spatiotemporal deformations with intrinsic shape properties from medial surface analysis. We compare intrinsic shape properties of a subject-specific shape trajectory to a normative 4D shape atlas representing normal aging to isolate shape changes related to disease. The spatiotemporal shape modeling establishes inter/intra subject anatomical correspondence, which in turn enables comparisons between subjects and the 4D shape atlas, and also quantitative analysis of disease related shape change. The medial surface analysis captures intrinsic shape properties related to local patterns of deformation. The proposed framework jointly models extrinsic longitudinal shape changes in the ambient anatomical space, as well as intrinsic shape properties to give localized measurements of degeneration. Six high risk subjects and six controls are randomly sampled from a Huntington's disease image database for qualitative and quantitative comparison.

  12. The Indecision Model of Psychophysical Performance in Dual-Presentation Tasks: Parameter Estimation and Comparative Analysis of Response Formats

    PubMed Central

    García-Pérez, Miguel A.; Alcalá-Quintana, Rocío

    2017-01-01

    Psychophysical data from dual-presentation tasks are often collected with the two-alternative forced-choice (2AFC) response format, asking observers to guess when uncertain. For an analytical description of performance, psychometric functions are then fitted to data aggregated across the two orders/positions in which stimuli were presented. Yet, order effects make aggregated data uninterpretable, and the bias with which observers guess when uncertain precludes separating sensory from decisional components of performance. A ternary response format in which observers are also allowed to report indecision should fix these problems, but a comparative analysis with the 2AFC format has never been conducted. In addition, fitting ternary data separated by presentation order poses serious challenges. To address these issues, we extended the indecision model of psychophysical performance to accommodate the ternary, 2AFC, and same–different response formats in detection and discrimination tasks. Relevant issues for parameter estimation are also discussed along with simulation results that document the superiority of the ternary format. These advantages are demonstrated by fitting the indecision model to published detection and discrimination data collected with the ternary, 2AFC, or same–different formats, which had been analyzed differently in the sources. These examples also show that 2AFC data are unsuitable for testing certain types of hypotheses. MATLAB and R routines written for our purposes are available as Supplementary Material, which should help spread the use of the ternary format for dependable collection and interpretation of psychophysical data. PMID:28747893

  13. Do Breast Implants Influence Breastfeeding? A Meta-Analysis of Comparative Studies.

    PubMed

    Cheng, Fengrui; Dai, Shuiping; Wang, Chiyi; Zeng, Shaoxue; Chen, Junjie; Cen, Ying

    2018-06-01

    Aesthetic breast implant augmentation surgery is the most popular plastic surgery worldwide. Many women choose to receive breast implants during their reproductive ages, although the long-term effects are still controversial. Research aim: We conducted a meta-analysis to assess the influence of aesthetic breast augmentation on breastfeeding. We also compared the exclusive breastfeeding rates of periareolar versus inframammary incision. A systematic search for comparative studies about breast implants and breastfeeding was performed in PubMed, MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, ScienceDirect, Scopus, and Web of Science through May 2018. Meta-analysis was conducted with a random-effects model (or fixed effects, if heterogeneity was absent). Four cohorts and one cross-sectional study were included. There was a significant reduction in the exclusive breastfeeding rate for women with breast implants compared with women without implants, pooled relative risk = 0.63, 95% confidence interval [0.46, 0.86], as well as the breastfeeding rate, pooled relative risk = 0.88, 95% confidence interval [0.81, 0.95]. There was no evidence that periareolar incision was associated with a reduction in the exclusive breastfeeding rate, pooled relative risk = 0.84, 95% confidence interval [0.45, 1.58]. Participants with breast implants are less likely to establish breastfeeding, especially exclusive breastfeeding. Periareolar incision does not appear to reduce the exclusive breastfeeding rate.
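
    A sketch of the pooling step behind such results, with invented study data: inverse-variance averaging of log relative risks, plus a DerSimonian-Laird between-study variance for the random-effects model.

    ```python
    # Random-effects meta-analysis of relative risks (study data invented).
    import numpy as np

    log_rr = np.log([0.55, 0.70, 0.58, 0.81, 0.62])   # per-study log RR, assumed
    var    = np.array([0.05, 0.04, 0.09, 0.06, 0.07]) # per-study variances, assumed

    w = 1.0 / var                                      # fixed-effect weights
    fixed_mean = np.sum(w * log_rr) / w.sum()
    q = np.sum(w * (log_rr - fixed_mean) ** 2)         # heterogeneity statistic
    tau2 = max(0.0, (q - (len(w) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

    w_re = 1.0 / (var + tau2)                          # random-effects weights
    pooled = np.sum(w_re * log_rr) / w_re.sum()
    se = np.sqrt(1.0 / w_re.sum())
    lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
    print(f"pooled RR = {np.exp(pooled):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    ```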

  14. Rural and Urban Crashes: A Comparative Analysis

    DOT National Transportation Integrated Search

    1996-08-01

    National Highway Traffic Safety Administration's National Center for Statistics and Analysis (NCSA) recently completed a study comparing the characteristics of crashes occurring in rural areas to the characteristics of crashes occurring in urba...

  15. Comparative analysis of tumor spheroid generation techniques for differential in vitro drug toxicity

    PubMed Central

    Raghavan, Shreya; Rowley, Katelyn R.; Mehta, Geeta

    2016-01-01

    Multicellular tumor spheroids are powerful in vitro models to perform preclinical chemosensitivity assays. We compare different methodologies to generate tumor spheroids in terms of resultant spheroid morphology, cellular arrangement and chemosensitivity. We used two cancer cell lines (MCF7 and OVCAR8) to generate spheroids using i) hanging drop array plates; ii) liquid overlay on ultra-low attachment plates; iii) liquid overlay on ultra-low attachment plates with rotating mixing (nutator plates). Analysis of spheroid morphometry indicated that cellular compaction was increased in spheroids generated on nutator and hanging drop array plates. Collagen staining also indicated higher compaction and remodeling in tumor spheroids on nutator and hanging drop arrays compared to conventional liquid overlay. Consequently, spheroids generated on nutator or hanging drop plates had increased chemoresistance to cisplatin treatment (20-60% viability) compared to spheroids on ultra-low attachment plates (10-20% viability). Lastly, we used a mathematical model to demonstrate minimal changes in oxygen and cisplatin diffusion within experimentally generated spheroids. Our results demonstrate that in vitro methods of tumor spheroid generation result in varied cellular arrangement and chemosensitivity. PMID:26918944

  16. On the applications of nanofluids to enhance the performance of solar collectors: A comparative analysis of Atangana-Baleanu and Caputo-Fabrizio fractional models

    NASA Astrophysics Data System (ADS)

    Sheikh, Nadeem Ahmad; Ali, Farhad; Khan, Ilyas; Gohar, Madeha; Saqib, Muhammad

    2017-12-01

    In the modern era, solar energy has gained considerable attention from researchers. The reasons are twofold: first, researchers are concerned with designing new devices such as solar collectors and solar water heaters; second, with using new approaches to improve the performance of solar energy equipment. The aim of this paper is to model the problem of enhancing the heat transfer rate of solar energy devices using nanoparticles, and to find exact solutions of the considered problem. The classical model is transformed to a generalized model using two different types of time-fractional derivatives, namely the Caputo-Fabrizio and Atangana-Baleanu derivatives, and their comparative analysis is presented. The solutions for the flow profile and heat transfer are presented using the Laplace transform method. The variation in the heat transfer rate has been observed for different nanoparticles and their different volume fractions. Theoretical results show that by adding aluminum oxide nanoparticles, the efficiency of solar collectors may be enhanced by 5.2%. Furthermore, the effect of the volume fraction of nanoparticles on the velocity distribution is discussed in graphical illustrations. The solutions reduce to those of the corresponding classical nanofluid model.
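
    A numerical sketch of the Caputo-Fabrizio operator used above, D^a f(t) = 1/(1-a) ∫_0^t f'(s) exp(-a(t-s)/(1-a)) ds, with the normalization M(a) taken as 1 and f(t) = t^2 as the test function.

    ```python
    # Discretized Caputo-Fabrizio fractional derivative (M(alpha) = 1 assumed).
    import numpy as np

    def cf_derivative(f_vals, t, alpha):
        df = np.gradient(f_vals, t)                # f'(s) on the grid
        out = np.zeros_like(t)
        for i, ti in enumerate(t):
            kern = np.exp(-alpha * (ti - t[:i + 1]) / (1.0 - alpha))
            out[i] = np.trapz(df[:i + 1] * kern, t[:i + 1]) / (1.0 - alpha)
        return out

    t = np.linspace(0.0, 2.0, 400)
    print(cf_derivative(t**2, t, alpha=0.7)[-1])   # CF derivative of t^2 at t = 2
    ```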

  17. Key Process Uncertainties in Soil Carbon Dynamics: Comparing Multiple Model Structures and Observational Meta-analysis

    NASA Astrophysics Data System (ADS)

    Sulman, B. N.; Moore, J.; Averill, C.; Abramoff, R. Z.; Bradford, M.; Classen, A. T.; Hartman, M. D.; Kivlin, S. N.; Luo, Y.; Mayes, M. A.; Morrison, E. W.; Riley, W. J.; Salazar, A.; Schimel, J.; Sridhar, B.; Tang, J.; Wang, G.; Wieder, W. R.

    2016-12-01

    Soil carbon (C) dynamics are crucial to understanding and predicting C cycle responses to global change and soil C modeling is a key tool for understanding these dynamics. While first order model structures have historically dominated this area, a recent proliferation of alternative model structures representing different assumptions about microbial activity and mineral protection is providing new opportunities to explore process uncertainties related to soil C dynamics. We conducted idealized simulations of soil C responses to warming and litter addition using models from five research groups that incorporated different sets of assumptions about processes governing soil C decomposition and stabilization. We conducted a meta-analysis of published warming and C addition experiments for comparison with simulations. Assumptions related to mineral protection and microbial dynamics drove strong differences among models. In response to C additions, some models predicted long-term C accumulation while others predicted transient increases that were counteracted by accelerating decomposition. In experimental manipulations, doubling litter addition did not change soil C stocks in studies spanning as long as two decades. This result agreed with simulations from models with strong microbial growth responses and limited mineral sorption capacity. In observations, warming initially drove soil C loss via increased CO2 production, but in some studies soil C rebounded and increased over decadal time scales. In contrast, all models predicted sustained C losses under warming. The disagreement with experimental results could be explained by physiological or community-level acclimation, or by warming-related changes in plant growth. In addition to the role of microbial activity, assumptions related to mineral sorption and protected C played a key role in driving long-term model responses. In general, simulations were similar in their initial responses to perturbations but diverged over
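
    The structural contrast at issue can be made concrete with two toy pools, as in the hedged sketch below (all parameters invented): a first-order model accumulates carbon under doubled litter input, while a simple microbial model lets decomposition accelerate with biomass and damp the gain.

    ```python
    # First-order vs microbial soil C pool under doubled litter input (invented).
    from scipy.integrate import solve_ivp

    I, k = 2.0, 0.02                      # litter input, first-order decay rate
    eps, vmax, km, d = 0.4, 1.0, 100.0, 0.2

    def first_order(t, y):                # dC/dt = 2I - kC (doubled input)
        return [2 * I - k * y[0]]

    def microbial(t, y):                  # C plus microbial biomass B
        c, b = y
        u = vmax * b * c / (km + c)       # biomass-dependent decomposition
        return [2 * I - u, eps * u - d * b]

    for f, y0 in ((first_order, [100.0]), (microbial, [100.0, 4.0])):
        sol = solve_ivp(f, (0, 400), y0, max_step=1.0)
        print(f.__name__, "C after 400 yr:", round(float(sol.y[0, -1]), 1))
    ```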

  18. rCAD: A Novel Database Schema for the Comparative Analysis of RNA.

    PubMed

    Ozer, Stuart; Doshi, Kishore J; Xu, Weijia; Gutell, Robin R

    2011-12-31

    Beyond its direct involvement in protein synthesis with mRNA, tRNA, and rRNA, RNA is now being appreciated for its significance in the overall metabolism and regulation of the cell. Comparative analysis has been very effective in the identification and characterization of RNA molecules, including the accurate prediction of their secondary structure. We are developing an integrative scalable data management and analysis system, the RNA Comparative Analysis Database (rCAD), implemented with SQL Server to support RNA comparative analysis. The platform-agnostic database schema of rCAD captures the essential relationships between the different dimensions of information for RNA comparative analysis datasets. The rCAD implementation enables a variety of comparative analysis manipulations with multiple integrated data dimensions for advanced RNA comparative analysis workflows. In this paper, we describe details of the rCAD schema design and illustrate its usefulness with two usage scenarios.
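
    A toy relational rendering of the idea in Python's sqlite3 (the table names and columns are illustrative, not the rCAD schema): sequences, alignment columns and tree edges live in joined tables so comparative queries can cut across dimensions.

    ```python
    # Minimal alignment-plus-phylogeny schema sketch with sqlite3.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE sequence (seq_id INTEGER PRIMARY KEY, taxon TEXT, residues TEXT);
    CREATE TABLE alignment_cell (
        seq_id INTEGER REFERENCES sequence(seq_id),
        column_no INTEGER, residue TEXT,
        PRIMARY KEY (seq_id, column_no));
    CREATE TABLE tree_edge (parent TEXT, child TEXT);
    """)
    con.execute("INSERT INTO sequence VALUES (1, 'E. coli', 'GGCUA')")
    con.executemany("INSERT INTO alignment_cell VALUES (1, ?, ?)",
                    list(enumerate("GGCUA", start=1)))
    print(con.execute("SELECT column_no, residue FROM alignment_cell "
                      "WHERE seq_id = 1 AND column_no <= 3").fetchall())
    ```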

  19. rCAD: A Novel Database Schema for the Comparative Analysis of RNA

    PubMed Central

    Ozer, Stuart; Doshi, Kishore J.; Xu, Weijia; Gutell, Robin R.

    2013-01-01

    Beyond its direct involvement in protein synthesis with mRNA, tRNA, and rRNA, RNA is now being appreciated for its significance in the overall metabolism and regulation of the cell. Comparative analysis has been very effective in the identification and characterization of RNA molecules, including the accurate prediction of their secondary structure. We are developing an integrative scalable data management and analysis system, the RNA Comparative Analysis Database (rCAD), implemented with SQL Server to support RNA comparative analysis. The platform-agnostic database schema of rCAD captures the essential relationships between the different dimensions of information for RNA comparative analysis datasets. The rCAD implementation enables a variety of comparative analysis manipulations with multiple integrated data dimensions for advanced RNA comparative analysis workflows. In this paper, we describe details of the rCAD schema design and illustrate its usefulness with two usage scenarios. PMID:24772454

  20. Value Frameworks in Oncology: Comparative Analysis and Implications to the Pharmaceutical Industry.

    PubMed

    Slomiany, Mark; Madhavan, Priya; Kuehn, Michael; Richardson, Sasha

    2017-07-01

    As the cost of oncology care continues to rise, composite value models that variably capture the diverse concerns of patients, physicians, payers, policymakers, and the pharmaceutical industry have begun to take shape. To review the capabilities and limitations of 5 of the most notable value frameworks in oncology that have emerged in recent years and to compare their relative value and application among the intended stakeholders. We compared the methodology of the American Society of Clinical Oncology (ASCO) Value Framework (version 2.0), the National Comprehensive Cancer Network Evidence Blocks, Memorial Sloan Kettering Cancer Center DrugAbacus, the Institute for Clinical and Economic Review Value Assessment Framework, and the European Society for Medical Oncology Magnitude of Clinical Benefit Scale, using a side-by-side comparative approach in terms of the input, scoring methodology, and output of each framework. In addition, we gleaned stakeholder insights about these frameworks and their potential real-world applications through dialogues with physicians and payers, as well as through secondary research and an aggregate analysis of previously published survey results. The analysis identified several framework-specific themes in their respective focus on clinical trial elements, breadth of evidence, evidence weighting, scoring methodology, and value to stakeholders. Our dialogues with physicians and our aggregate analysis of previous surveys revealed a varying level of awareness of, and use of, each of the value frameworks in clinical practice. For example, although the ASCO Value Framework appears nascent in clinical practice, physicians believe that the frameworks will be more useful in practice in the future as they become more established and as their outputs are more widely accepted. Along with patients and payers, who bear the burden of treatment costs, physicians and policymakers have waded into the discussion of defining value in oncology care, as well

  1. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to those of the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the outcome of the sensitivity analysis. The influence of the correlation strength among input variables on the sensitivity analysis is also assessed.
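
    A hedged Python sketch of the ingredients (FASTC itself is not reproduced here): draw rank-correlated inputs through a Gaussian copula, the role Iman's transform plays, push them through a toy stand-in for the JCA model, and estimate a CRM-style correlation-ratio index by binning.

    ```python
    # Correlated input sampling and a correlation-ratio sensitivity index.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    corr = np.array([[1.0, 0.7], [0.7, 1.0]])      # assumed input correlation
    z = rng.multivariate_normal([0.0, 0.0], corr, size=20000)
    u = norm.cdf(z)                                 # copula: correlated uniforms
    porosity   = 0.90 + 0.08 * u[:, 0]              # illustrative ranges
    tortuosity = 1.00 + 1.50 * u[:, 1]

    y = porosity**2 / tortuosity                    # toy stand-in for JCA output

    def correlation_ratio(x, y, bins=40):           # Var(E[Y|X]) / Var(Y)
        edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
        idx = np.digitize(x, edges)
        means = np.array([y[idx == b].mean() for b in range(bins)])
        counts = np.array([(idx == b).sum() for b in range(bins)])
        return float((counts * (means - y.mean())**2).sum() / counts.sum() / y.var())

    print("eta^2 porosity  :", round(correlation_ratio(porosity, y), 3))
    print("eta^2 tortuosity:", round(correlation_ratio(tortuosity, y), 3))
    ```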

  2. Comparative analysis of early ontogeny in Bursatella leachii and Aplysia californica

    PubMed Central

    Vue, Zer; Capo, Thomas R.; Bardales, Ana T.

    2014-01-01

    Opisthobranch molluscs exhibit fascinating body plans associated with the evolution of shell loss in multiple lineages. Sea hares in particular are interesting because Aplysia californica is a well-studied model organism that offers a large suite of genetic tools. Bursatella leachii is a related tropical sea hare that lacks a shell as an adult and therefore lends itself to comparative analysis with A. californica. We have established an enhanced culturing procedure for B. leachii in husbandry that enabled the study of shell formation and loss in this lineage with respect to A. californica life staging. PMID:25538871

  3. White matter degeneration in schizophrenia: a comparative diffusion tensor analysis

    NASA Astrophysics Data System (ADS)

    Ingalhalikar, Madhura A.; Andreasen, Nancy C.; Kim, Jinsuh; Alexander, Andrew L.; Magnotta, Vincent A.

    2010-03-01

    Schizophrenia is a serious and disabling mental disorder. Diffusion tensor imaging (DTI) studies performed on schizophrenia have demonstrated white matter degeneration either due to loss of myelination or deterioration of fiber tracts although the areas where the changes occur are variable across studies. Most of the population based studies analyze the changes in schizophrenia using scalar indices computed from the diffusion tensor such as fractional anisotropy (FA) and relative anisotropy (RA). The scalar measures may not capture the complete information from the diffusion tensor. In this paper we have applied the RADTI method on a group of 9 controls and 9 patients with schizophrenia. The RADTI method converts the tensors to log-Euclidean space where a linear regression model is applied and hypothesis testing is performed between the control and patient groups. Results show that there is a significant difference in the anisotropy between patients and controls especially in the parts of forceps minor, superior corona radiata, anterior limb of internal capsule and genu of corpus callosum. To check if the tensor analysis gives a better idea of the changes in anisotropy, we compared the results with voxelwise FA analysis as well as voxelwise geodesic anisotropy (GA) analysis.
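
    To make the log-Euclidean idea concrete, the sketch below maps each symmetric positive-definite diffusion tensor to log-Euclidean space and then tests groups voxel-wise. The array shapes, the synthetic tensors, and the plain two-sample t-test are illustrative assumptions; the paper applies a linear regression model with hypothesis testing in that space.

        # Minimal sketch: log-Euclidean mapping of diffusion tensors
        # followed by a voxel-wise group comparison.
        import numpy as np
        from scipy import stats

        def tensor_log(D):
            """Matrix logarithm of a symmetric positive-definite 3x3 tensor."""
            w, V = np.linalg.eigh(D)
            return (V * np.log(w)) @ V.T

        def log_vector(D):
            """6-vector of unique entries of log(D); off-diagonals weighted by
            sqrt(2) so the Euclidean norm matches the Frobenius norm."""
            L = tensor_log(D)
            s = np.sqrt(2.0)
            return np.array([L[0, 0], L[1, 1], L[2, 2],
                             s * L[0, 1], s * L[0, 2], s * L[1, 2]])

        rng = np.random.default_rng(1)
        def random_spd(n):  # crude SPD stand-in tensors, for demonstration only
            A = rng.standard_normal((n, 3, 3)) * 0.1
            return np.eye(3) * 1e-3 + 1e-3 * (A @ A.transpose(0, 2, 1))

        # Hypothetical data: 9 controls and 9 patients, 50 voxels each.
        controls = np.stack([[log_vector(D) for D in random_spd(50)] for _ in range(9)])
        patients = np.stack([[log_vector(D) for D in random_spd(50)] for _ in range(9)])

        # Voxel-wise two-sample test on each log-tensor component.
        t, p = stats.ttest_ind(controls, patients, axis=0)
        print(p.shape)  # (50, 6) p-values, to be corrected for multiple tests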

  4. Comparative study of Sperm Motility Analysis System and conventional microscopic semen analysis

    PubMed Central

    KOMORI, KAZUHIKO; ISHIJIMA, SUMIO; TANJAPATKUL, PHANU; FUJITA, KAZUTOSHI; MATSUOKA, YASUHIRO; TAKAO, TETSUYA; MIYAGAWA, YASUSHI; TAKADA, SHINGO; OKUYAMA, AKIHIKO

    2006-01-01

    Background and Aim:  Conventional manual sperm analysis still shows variations in structure, process and outcome, although World Health Organization (WHO) guidelines present an appropriate method for sperm analysis. In the present study a new system for sperm analysis, the Sperm Motility Analysis System (SMAS), was compared with manual semen analysis based on WHO guidelines. Materials and methods:  Samples from 30 infertility patients and 21 healthy volunteers were subjected to manual microscopic analysis and SMAS analysis simultaneously. We compared these two methods with respect to sperm concentration and percent motility. Results:  Sperm concentrations obtained by SMAS (Csmas) and by manual microscopic analyses based on WHO guidelines (Cwho) were strongly correlated (Cwho = 1.325 × Csmas; r = 0.95, P < 0.001). If we excluded subjects with Csmas values >30 × 10⁶ sperm/mL, the results were more similar (Cwho = 1.022 × Csmas; r = 0.81, P < 0.001). Percent motility obtained by SMAS (Msmas) and by manual analysis based on WHO guidelines (Mwho) were strongly correlated (Mwho = 1.214 × Msmas; r = 0.89, P < 0.001). Conclusions:  The data indicate that the results of SMAS and those of manual microscopic sperm analyses based on WHO guidelines are strongly correlated. SMAS is therefore a promising system for sperm analysis. (Reprod Med Biol 2006; 5: 195–200) PMID:29662398

  5. Comparing Internet Probing Methodologies Through an Analysis of Large Dynamic Graphs

    DTIC Science & Technology

    2014-06-01

    comparable Internet topologies in less time. We compare these by modeling the union of traceroute outputs as graphs, and study the graphs using standard graph-theoretical measurements such as vertex and edge count and average vertex degree.

  6. Modeller's attitude in catchment modelling: a comparative study

    NASA Astrophysics Data System (ADS)

    Battista Chirico, Giovanni

    2010-05-01

    Ten modellers were invited to predict, independently of each other, the discharge of the artificial Chicken Creek catchment in north-east Germany for a simulation period of three years, given only soil texture, terrain and meteorological data. No discharge data or other observations of state variables and fluxes within the catchment were provided. Modellers did, however, have the opportunity to visit the experimental catchment and to inspect aerial photos of the catchment from its initial development stage onwards. This has been a unique comparative study focussing on how different modellers deal with the key issues in predicting the discharge of ungauged catchments: 1) choice of the model structure; 2) identification of model parameters; 3) identification of model initial and boundary conditions. The first general lesson learned during this study was that the modeller is part of the entire modelling process and has a major bearing on the model results, particularly in ungauged catchments where there are more degrees of freedom in making modelling decisions. Modellers' attitudes during the stages of model implementation and parameterisation were deeply influenced by their experience from previous modelling studies. A common outcome was that modellers were mainly oriented towards process-based models able to exploit the available data on the physical properties of the catchment, which could therefore be more suitable to cope with the lack of data on state variables or fluxes. The second general lesson learned during this study concerned the role of dominant processes. We believed that the modelling task would be much easier in an artificial catchment, where heterogeneity was expected to be negligible and processes simpler, than in catchments that have evolved over a longer time period. The results of the models were expected to converge, and this would have been a good starting point to proceed for a model

  7. Comparison of composite rotor blade models: A coupled-beam analysis and an MSC/NASTRAN finite-element model

    NASA Technical Reports Server (NTRS)

    Hodges, Robert V.; Nixon, Mark W.; Rehfield, Lawrence W.

    1987-01-01

    A methodology was developed for the structural analysis of composite rotor blades. This coupled-beam analysis is relatively simple to use compared with alternative analysis techniques. The beam analysis was developed for thin-wall single-cell rotor structures and includes the effects of elastic coupling. This paper demonstrates the effectiveness of the new composite-beam analysis method through comparison of its results with those of an established baseline analysis technique. The baseline analysis is an MSC/NASTRAN finite-element model built up from anisotropic shell elements. Deformations are compared for three linear static load cases of centrifugal force at design rotor speed, applied torque, and lift for an ideal rotor in hover. A D-spar designed to twist under axial loading is the subject of the analysis. Results indicate the coupled-beam analysis is well within engineering accuracy.

  8. Using multi-criteria analysis of simulation models to understand complex biological systems

    Treesearch

    Maureen C. Kennedy; E. David Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
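
    The core operation in the Pareto-optimality approach referenced above is identifying the non-dominated set: candidate parameterizations for which no other candidate is at least as good on every criterion and strictly better on one. A minimal sketch under hypothetical data follows.

        # Minimal sketch: Pareto-front filter over multiple error criteria.
        import numpy as np

        def pareto_front(errors):
            """Boolean mask of non-dominated rows; errors[i, j] = criterion j
            for candidate i, lower is better."""
            n = errors.shape[0]
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                if not keep[i]:
                    continue
                dominated_by = ((errors <= errors[i]).all(axis=1)
                                & (errors < errors[i]).any(axis=1))
                if dominated_by.any():
                    keep[i] = False
            return keep

        rng = np.random.default_rng(2)
        errs = rng.random((200, 3))      # 200 candidate models, 3 output criteria
        front = pareto_front(errs)
        print(front.sum(), "non-dominated candidates")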

  9. Recent results on the spatiotemporal modelling and comparative analysis of Black Death and bubonic plague epidemics.

    PubMed

    Christakos, G; Olea, R A; Yu, H-L

    2007-09-01

    This work demonstrates the importance of spatiotemporal stochastic modelling in constructing maps of major epidemics from fragmentary information, assessing population impacts, searching for possible etiologies, and performing comparative analysis of epidemics. Based on the theory previously published by the authors and incorporating new knowledge bases, informative maps of the composite space-time distributions were generated for important characteristics of two major epidemics: Black Death (14th century Western Europe) and bubonic plague (19th-20th century Indian subcontinent). The comparative spatiotemporal analysis of the epidemics led to a number of interesting findings: (1) the two epidemics exhibited certain differences in their spatiotemporal characteristics (correlation structures, trends, occurrence patterns and propagation speeds) that need to be explained by means of an interdisciplinary effort; (2) geographical epidemic indicators confirmed in a rigorous quantitative manner the partial findings of isolated reports and time series that Black Death mortality was two orders of magnitude higher than that of bubonic plague; (3) modern bubonic plague is a rural disease hitting harder the small villages in the countryside whereas Black Death was a devastating epidemic that indiscriminately attacked large urban centres and the countryside, and while the epidemic in India lasted uninterruptedly for five decades, in Western Europe it lasted three and a half years; (4) the epidemics had reverse areal extension features in response to annual seasonal variations. Temperature increase at the end of winter led to an expansion of infected geographical area for Black Death and a reduction for bubonic plague, reaching a climax at the end of spring when the infected area in Western Europe was always larger than in India. Conversely, without exception, the infected area during winter was larger for the Indian bubonic plague; (5) during the Indian epidemic, the disease

  10. Recent results on the spatiotemporal modelling and comparative analysis of Black Death and bubonic plague epidemics

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.; Yu, H.-L.

    2007-01-01

    Background: This work demonstrates the importance of spatiotemporal stochastic modelling in constructing maps of major epidemics from fragmentary information, assessing population impacts, searching for possible etiologies, and performing comparative analysis of epidemics. Methods: Based on the theory previously published by the authors and incorporating new knowledge bases, informative maps of the composite space-time distributions were generated for important characteristics of two major epidemics: Black Death (14th century Western Europe) and bubonic plague (19th-20th century Indian subcontinent). Results: The comparative spatiotemporal analysis of the epidemics led to a number of interesting findings: (1) the two epidemics exhibited certain differences in their spatiotemporal characteristics (correlation structures, trends, occurrence patterns and propagation speeds) that need to be explained by means of an interdisciplinary effort; (2) geographical epidemic indicators confirmed in a rigorous quantitative manner the partial findings of isolated reports and time series that Black Death mortality was two orders of magnitude higher than that of bubonic plague; (3) modern bubonic plague is a rural disease hitting harder the small villages in the countryside whereas Black Death was a devastating epidemic that indiscriminately attacked large urban centres and the countryside, and while the epidemic in India lasted uninterruptedly for five decades, in Western Europe it lasted three and a half years; (4) the epidemics had reverse areal extension features in response to annual seasonal variations. Temperature increase at the end of winter led to an expansion of infected geographical area for Black Death and a reduction for bubonic plague, reaching a climax at the end of spring when the infected area in Western Europe was always larger than in India. Conversely, without exception, the infected area during winter was larger for the Indian bubonic plague; (5) during the

  11. CloVR-Comparative: automated, cloud-enabled comparative microbial genome sequence analysis pipeline.

    PubMed

    Agrawal, Sonia; Arze, Cesar; Adkins, Ricky S; Crabtree, Jonathan; Riley, David; Vangala, Mahesh; Galens, Kevin; Fraser, Claire M; Tettelin, Hervé; White, Owen; Angiuoli, Samuel V; Mahurkar, Anup; Fricke, W Florian

    2017-04-27

    The benefit of increasing genomic sequence data to the scientific community depends on easy-to-use, scalable bioinformatics support. CloVR-Comparative combines commonly used bioinformatics tools into an intuitive, automated, and cloud-enabled analysis pipeline for comparative microbial genomics. CloVR-Comparative runs on annotated complete or draft genome sequences that are uploaded by the user or selected via a taxonomic tree-based user interface and downloaded from NCBI. CloVR-Comparative runs reference-free multiple whole-genome alignments to determine unique, shared and core coding sequences (CDSs) and single nucleotide polymorphisms (SNPs). Output includes short summary reports and detailed text-based results files, graphical visualizations (phylogenetic trees, circular figures), and a database file linked to the Sybil comparative genome browser. Data upload and download, pipeline configuration and monitoring, and access to Sybil are managed through the CloVR-Comparative web interface. CloVR-Comparative and Sybil are distributed as part of the CloVR virtual appliance, which runs on local computers or the Amazon EC2 cloud. Representative datasets (e.g. 40 draft and complete Escherichia coli genomes) are processed in <36 h on a local desktop or at a cost of <$20 on EC2. CloVR-Comparative allows anybody with Internet access to run comparative genomics projects, while eliminating the need for on-site computational resources and expertise.

  12. Robust Linear Models for Cis-eQTL Analysis.

    PubMed

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
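
    The contrast the study draws can be sketched in a few lines: an ordinary linear model versus a robust linear model for a single cis-eQTL test. The simulated genotype dosages (0/1/2), the covariate, and the heavy-tailed expression noise are illustrative assumptions; the Huber M-estimator is one standard choice of robust norm.

        # Minimal sketch: OLS vs robust regression for one eQTL test.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 500
        dosage = rng.integers(0, 3, size=n).astype(float)   # allelic dosage model
        covar = rng.standard_normal(n)                      # e.g. a PC or batch term
        noise = rng.standard_t(df=3, size=n)                # heavy-tailed errors
        expr = 0.25 * dosage + 0.5 * covar + noise

        X = sm.add_constant(np.column_stack([dosage, covar]))

        ols = sm.OLS(expr, X).fit()
        rlm = sm.RLM(expr, X, M=sm.robust.norms.HuberT()).fit()

        # Compare the genotype effect estimate and its standard error.
        print("OLS:", ols.params[1], ols.bse[1])
        print("RLM:", rlm.params[1], rlm.bse[1])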

  13. [Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].

    PubMed

    Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping

    2014-06-01

    The stability and adaptability of qualitative analysis models based on near-infrared spectra were studied. Separate modeling can significantly improve the stability and adaptability of a model, but its ability to improve adaptability is limited. Joint modeling can improve not only the adaptability of the model but also its stability; at the same time, compared to separate modeling, it shortens the modeling time, reduces the modeling workload, extends the term of validity of the model, and improves modeling efficiency. The model-adaptability experiment shows that the correct recognition rate of the separate modeling method is relatively low and cannot meet the requirements of application, whereas the joint modeling method reaches a correct recognition rate of 90% and significantly enhances the recognition effect. The model-stability experiment shows that the identification results of the model built by joint modeling are better than those of the model built by separate modeling, and that the method has good application value.

  14. Comparative analysis of the modified enclosed energy metric for self-focusing holograms from digital lensless holographic microscopy.

    PubMed

    Trujillo, Carlos; Garcia-Sucerquia, Jorge

    2015-06-01

    A comparative analysis of the performance of the modified enclosed energy (MEE) method for self-focusing holograms recorded with digital lensless holographic microscopy is presented. Although the MEE method has been published previously, no extended analysis of its performance has been reported. We have tested the MEE in terms of the minimum axial distance allowed between the reconstructed holograms when searching for the focal plane, and the elapsed time to obtain the focused image. These parameters have been compared with those of some methods already reported in the literature. The MEE achieves better results in terms of self-focusing quality, but at a higher computational cost. Despite its longer processing time, the method remains fast enough to be technologically attractive. Modeled and experimental holograms have been used in this work to perform the comparative study.
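
    For orientation, the sketch below shows the generic self-focusing loop that metrics such as the MEE plug into: reconstruct the hologram over a range of axial distances, score each reconstruction with a focus metric, and keep the best plane. The angular-spectrum propagator and the normalized-variance metric are common generic choices standing in for the MEE metric itself, whose definition is in the cited paper; wavelength, pixel pitch, and search range are assumed values.

        # Minimal sketch: axial search for the focal plane of a hologram.
        import numpy as np

        def angular_spectrum(field, wavelength, dx, z):
            """Propagate a complex field by distance z (square grid, spacing dx)."""
            n = field.shape[0]
            fx = np.fft.fftfreq(n, d=dx)
            FX, FY = np.meshgrid(fx, fx)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

        def focus_metric(field):
            amp = np.abs(field)
            return amp.var() / amp.mean() ** 2   # normalized variance (placeholder)

        hologram = np.random.default_rng(4).random((256, 256))  # stand-in data
        zs = np.linspace(2e-3, 8e-3, 61)                        # 0.1 mm steps
        scores = [focus_metric(angular_spectrum(hologram, 405e-9, 1.12e-6, z))
                  for z in zs]
        print("best focus at z =", zs[int(np.argmax(scores))])

    The axial step between reconstructions and the per-plane metric cost are exactly the two quantities the paper benchmarks.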

  15. Comparing estimates of genetic variance across different relationship models.

    PubMed

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
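
    The Dk statistic described above is a one-line computation once a relationship matrix is in hand: the average self-relationship minus the average (self- and across-) relationship. The sketch below computes it for a VanRaden-style genomic relationship matrix; the marker simulation and the variance-component value are illustrative assumptions.

        # Minimal sketch: Dk = mean(diag(K)) - mean(K), used to scale an
        # estimated variance component to a chosen reference population.
        import numpy as np

        rng = np.random.default_rng(5)
        m, n = 5_000, 200                      # markers, individuals
        freq = rng.uniform(0.05, 0.95, size=m)
        geno = rng.binomial(2, freq, size=(n, m)).astype(float)

        # VanRaden-style genomic relationship matrix (one common choice).
        Z = geno - 2 * freq
        K = Z @ Z.T / (2 * (freq * (1 - freq)).sum())

        Dk = np.mean(np.diag(K)) - np.mean(K)
        sigma2_hat = 0.40                      # assumed estimated variance component
        print("Dk =", Dk, "-> genetic variance in reference pop:", Dk * sigma2_hat)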

  16. Parmodel: a web server for automated comparative modeling of proteins.

    PubMed

    Uchôa, Hugo Brandão; Jorge, Guilherme Eberhart; Freitas Da Silveira, Nelson José; Camera, João Carlos; Canduri, Fernanda; De Azevedo, Walter Filgueira

    2004-12-24

    Parmodel is a web server for automated comparative modeling and evaluation of protein structures. The aim of this tool is to help inexperienced users to perform modeling, assessment, visualization, and optimization of protein models, as well as to help crystallographers evaluate structures solved experimentally. It is subdivided into four modules: Parmodel Modeling, Parmodel Assessment, Parmodel Visualization, and Parmodel Optimization. The main module is Parmodel Modeling, which allows the building of several models for the same protein in a reduced time through the distribution of modeling processes on a Beowulf cluster. Parmodel automates and integrates the main software packages used in comparative modeling, such as MODELLER, Whatcheck, Procheck, Raster3D, Molscript, and Gromacs. This web server is freely accessible at .

  17. Comparative study of turbulence models in predicting hypersonic inlet flows

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1992-01-01

    A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.

  18. Comparative study of turbulence models in predicting hypersonic inlet flows

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1992-01-01

    A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.

  19. In vitro comparative optical bench analysis of a spherical and aspheric optic design of the same IOL model.

    PubMed

    Tandogan, Tamer; Auffarth, Gerd U; Choi, Chul Y; Liebing, Stephanie; Mayer, Christian; Khoramnia, Ramin

    2017-02-08

    To analyse objective optical properties of the spherical and aspheric design of the same intraocular lens (IOL) model using optical bench analysis. This study entailed a comparative analysis of 10 spherical C-flex 570 C and 10 aspheric C-flex 970 C IOLs (Rayner Intraocular Lenses Ltd., Hove, UK) of 26 diopters [D] using an optical bench (OptiSpheric, Trioptics, Germany). In all lenses, we evaluated the modulation transfer function (MTF) at 50 lp/mm and 100 lp/mm and the Strehl Ratio using a 3-mm (photopic) and 4.5-mm (mesopic) aperture. At 50 lp/mm, the MTF values were 0.713/0.805 (C-flex 570 C/C-flex 970 C) for a 3-mm aperture and 0.294/0.591 for a 4.5-mm aperture. At 100 lp/mm, the MTF values were 0.524/0.634 for a 3-mm aperture and 0.198/0.344 for a 4.5-mm aperture. The Strehl Ratio was 0.806/0.925 and 0.237/0.479 for a 3-mm and 4.5-mm aperture respectively. A Mann-Whitney U test revealed all intergroup differences to be statistically significant (p < 0.01). The aspheric IOL design achieved higher MTF values than the spherical design of the same IOL for both apertures. Moreover, the differences between the two designs of the IOL were more prominent for larger apertures. This suggests that the evaluated IOL provides enhanced optical quality to patients with larger pupils or working under mesopic conditions.
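
    The study's intergroup comparison is a standard nonparametric test between two small samples. A minimal sketch follows; the per-lens MTF values are invented for illustration (only the group means echo the reported 50 lp/mm, 4.5 mm aperture figures), not the study's raw data.

        # Minimal sketch: Mann-Whitney U test between the two lens groups.
        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(6)
        mtf_spherical = rng.normal(0.294, 0.02, size=10)  # 10 C-flex 570C lenses
        mtf_aspheric  = rng.normal(0.591, 0.02, size=10)  # 10 C-flex 970C lenses

        stat, p = mannwhitneyu(mtf_spherical, mtf_aspheric, alternative="two-sided")
        print(f"U = {stat:.1f}, p = {p:.4f}")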

  20. Online Statistical Modeling (Regression Analysis) for Independent Responses

    NASA Astrophysics Data System (ADS)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical analmodelling) are among statistical methods which are frequently needed in analyzing quantitative data, especially to model relationship between response and explanatory variables. Nowadays, statistical models have been developed into various directions to model various type and complex relationship of data. Rich varieties of advanced and recent statistical modelling are mostly available on open source software (one of them is R). However, these advanced statistical modelling, are not very friendly to novice R users, since they are based on programming script or command line interface. Our research aims to developed web interface (based on R and shiny), so that most recent and advanced statistical modelling are readily available, accessible and applicable on web. We have previously made interface in the form of e-tutorial for several modern and advanced statistical modelling on R especially for independent responses (including linear models/LM, generalized linier models/GLM, generalized additive model/GAM and generalized additive model for location scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including model using Computer Intensive Statistics (Bootstrap and Markov Chain Monte Carlo/ MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web (interface) make the statistical modeling becomes easier to apply and easier to compare them in order to find the most appropriate model for the data.

  1. Comparative proteomics analysis of oral cancer cell lines: identification of cancer associated proteins

    PubMed Central

    2014-01-01

    Background A limiting factor in performing proteomics analysis on cancerous cells is the difficulty in obtaining sufficient amounts of starting material. Cell lines can be used as a simplified model system for studying changes that accompany tumorigenesis. This study used two-dimensional gel electrophoresis (2DE) to compare the whole cell proteome of oral cancer cell lines vs normal cells in an attempt to identify cancer associated proteins. Results Three primary cell cultures of normal cells with a limited lifespan without hTERT immortalization have been successfully established. 2DE was used to compare the whole cell proteome of these cells with that of three oral cancer cell lines. Twenty four protein spots were found to have changed in abundance. MALDI TOF/TOF was then used to determine the identity of these proteins. Identified proteins were classified into seven functional categories – structural proteins, enzymes, regulatory proteins, chaperones and others. IPA core analysis predicted that 18 proteins were related to cancer with involvements in hyperplasia, metastasis, invasion, growth and tumorigenesis. The mRNA expressions of two proteins – 14-3-3 protein sigma and Stress-induced-phosphoprotein 1 – were found to correlate with the corresponding proteins’ abundance. Conclusions The outcome of this analysis demonstrated that a comparative study of whole cell proteome of cancer versus normal cell lines can be used to identify cancer associated proteins. PMID:24422745

  2. Model parameter uncertainty analysis for an annual field-scale P loss model

    NASA Astrophysics Data System (ADS)

    Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie

    2016-08-01

    Phosphorous (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there will exist an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainties. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model. Such insight can then be used to guide future data collection and model
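
    The propagation step described above can be sketched as a plain Monte Carlo exercise: sample the model inputs and the regression parameters from their assumed distributions, run the model for each draw, and summarize the spread of predicted P loss. The toy loss equation and every distribution below are illustrative assumptions, not the APLE equations or its fitted parameters.

        # Minimal sketch: Monte Carlo propagation of input and parameter
        # uncertainty through a toy annual P-loss expression.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 20_000

        # Uncertain inputs (assumed lognormal, literature-style ranges).
        soil_p  = rng.lognormal(mean=np.log(50),  sigma=0.3, size=n)  # mg/kg
        runoff  = rng.lognormal(mean=np.log(100), sigma=0.4, size=n)  # mm/yr
        erosion = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)  # t/ha/yr

        # Uncertain regression parameters (assumed means and standard errors).
        b_dissolved = rng.normal(0.005, 0.001, size=n)
        b_sediment  = rng.normal(1.20, 0.15, size=n)

        p_loss = b_dissolved * soil_p * runoff / 100 + b_sediment * erosion

        lo, med, hi = np.percentile(p_loss, [2.5, 50, 97.5])
        print(f"P loss: median {med:.2f}, 95% interval ({lo:.2f}, {hi:.2f}) kg/ha")

    Comparing the interval width with the parameter draws frozen at their means against the full simulation separates the input contribution from the parameter contribution, which is the decomposition the study reports.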

  3. Mental health network governance: comparative analysis across Canadian regions

    PubMed Central

    Wiktorowicz, Mary E; Fleury, Marie-Josée; Adair, Carol E; Lesage, Alain; Goldner, Elliot; Peters, Suzanne

    2010-01-01

    Objective Modes of governance were compared in ten local mental health networks in diverse contexts (rural/urban and regionalized/non-regionalized) to clarify the governance processes that foster inter-organizational collaboration and the conditions that support them. Methods Case studies of ten local mental health networks were developed using qualitative methods of document review, semi-structured interviews and focus groups that incorporated provincial policy, network and organizational levels of analysis. Results Mental health networks adopted either a corporate structure, mutual adjustment or an alliance governance model. A corporate structure supported by regionalization offered the most direct means for local governance to attain inter-organizational collaboration. The likelihood that networks with an alliance model developed coordination processes depended on the presence of the following conditions: a moderate number of organizations, goal consensus and trust among the organizations, and network-level competencies. In the small and mid-sized urban networks where these conditions were met their alliance realized the inter-organizational collaboration sought. In the large urban and rural networks where these conditions were not met, externally brokered forms of network governance were required to support alliance based models. Discussion In metropolitan and rural networks with such shared forms of network governance as an alliance or voluntary mutual adjustment, external mediation by a regional or provincial authority was an important lever to foster inter-organizational collaboration. PMID:21289999

  4. Mental health network governance: comparative analysis across Canadian regions.

    PubMed

    Wiktorowicz, Mary E; Fleury, Marie-Josée; Adair, Carol E; Lesage, Alain; Goldner, Elliot; Peters, Suzanne

    2010-10-26

    Modes of governance were compared in ten local mental health networks in diverse contexts (rural/urban and regionalized/non-regionalized) to clarify the governance processes that foster inter-organizational collaboration and the conditions that support them. Case studies of ten local mental health networks were developed using qualitative methods of document review, semi-structured interviews and focus groups that incorporated provincial policy, network and organizational levels of analysis. Mental health networks adopted either a corporate structure, mutual adjustment or an alliance governance model. A corporate structure supported by regionalization offered the most direct means for local governance to attain inter-organizational collaboration. The likelihood that networks with an alliance model developed coordination processes depended on the presence of the following conditions: a moderate number of organizations, goal consensus and trust among the organizations, and network-level competencies. In the small and mid-sized urban networks where these conditions were met their alliance realized the inter-organizational collaboration sought. In the large urban and rural networks where these conditions were not met, externally brokered forms of network governance were required to support alliance based models. In metropolitan and rural networks with such shared forms of network governance as an alliance or voluntary mutual adjustment, external mediation by a regional or provincial authority was an important lever to foster inter-organizational collaboration.

  5. Structural modeling and docking studies of ribose 5-phosphate isomerase from Leishmania major and Homo sapiens: a comparative analysis for Leishmaniasis treatment.

    PubMed

    Capriles, Priscila V S Z; Baptista, Luiz Phillippe R; Guedes, Isabella A; Guimarães, Ana Carolina R; Custódio, Fabio L; Alves-Ferreira, Marcelo; Dardenne, Laurent E

    2015-02-01

    Leishmaniases are caused by protozoa of the genus Leishmania and are considered the second-highest cause of death worldwide by parasitic infection. The drugs available for treatment in humans are becoming ineffective mainly due to parasite resistance; therefore, it is extremely important to develop a new chemotherapy against these parasites. A crucial aspect of drug design development is the identification and characterization of novel molecular targets. In this work, through an in silico comparative analysis between the genomes of Leishmania major and Homo sapiens, the enzyme ribose 5-phosphate isomerase (R5PI) was indicated as a promising molecular target. R5PI is an important enzyme that acts in the pentose phosphate pathway and catalyzes the interconversion of d-ribose-5-phosphate (R5P) and d-ribulose-5-phosphate (5RP). R5PI activity is found in two analogous groups of enzymes called RpiA (found in H. sapiens) and RpiB (found in L. major). Here, we present the first report of the three-dimensional (3D) structures and active sites of RpiB from L. major (LmRpiB) and RpiA from H. sapiens (HsRpiA). Three-dimensional models were constructed by applying a hybrid methodology that combines comparative and ab initio modeling techniques, and the active site was characterized based on docking studies of the substrates R5P (furanose and ring-opened forms) and 5RP. Our comparative analyses show that these proteins are structural analogs and that distinct residues participate in the interconversion of R5P and 5RP. We propose two distinct reaction mechanisms for the reversible isomerization of R5P to 5RP, which is catalyzed by LmRpiB and HsRpiA. We expect that the present results will be important in guiding future molecular modeling studies to develop new drugs that are specially designed to inhibit the parasitic form of the enzyme without significant effects on the human analog. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Comparison of the occlusal contact area of virtual models and actual models: a comparative in vitro study on Class I and Class II malocclusion models.

    PubMed

    Lee, Hyemin; Cha, Jooly; Chun, Youn-Sic; Kim, Minji

    2018-06-19

    The occlusal registration of virtual models taken by intraoral scanners sometimes shows patterns which seem much different from the patients' occlusion. Therefore, this study aims to evaluate the accuracy of virtual occlusion by comparing virtual occlusal contact area with actual occlusal contact area using plaster models in vitro. Plaster dental models, 24 sets of Class I models and 20 sets of Class II models, were divided into Molar, Premolar, and Anterior groups. The occlusal contact areas calculated by the Prescale method and by the virtual occlusion from the scanning method were compared, and the ratios of the molar and incisor areas were compared in order to find any particular tendencies. There was no significant difference between the Prescale results and the scanner results in both the molar and premolar groups (p = 0.083 and 0.053, respectively). On the other hand, there was a significant difference between the Prescale and the scanner results in the anterior group, with the scanner results overestimating the occlusal contact points (p < 0.05). In the Molar group, regression analysis showed a linear correlation between the two variables, with a linear equation of slope 0.917 and R² = 0.930. The Premolar and Anterior groups had a weak linear relationship and greater dispersion. The difference between the actual and virtual occlusion appeared in the anterior portion, where overestimation was observed in the virtual model obtained from the scanning method. Nevertheless, the molar and premolar areas showed relatively accurate occlusal contact areas in the virtual model.

  7. Comparative Study of Lectin Domains in Model Species: New Insights into Evolutionary Dynamics

    PubMed Central

    Van Holle, Sofie; De Schutter, Kristof; Eggermont, Lore; Tsaneva, Mariya; Dang, Liuyi; Van Damme, Els J. M.

    2017-01-01

    Lectins are present throughout the plant kingdom and are reported to be involved in diverse biological processes. In this study, we provide a comparative analysis of the lectin families from model species in a phylogenetic framework. The analysis focuses on the different plant lectin domains identified in five representative core angiosperm genomes (Arabidopsis thaliana, Glycine max, Cucumis sativus, Oryza sativa ssp. japonica and Oryza sativa ssp. indica). The genomes were screened for genes encoding lectin domains using a combination of Basic Local Alignment Search Tool (BLAST), hidden Markov models, and InterProScan analysis. Additionally, phylogenetic relationships were investigated by constructing maximum likelihood phylogenetic trees. The results demonstrate that the majority of the lectin families are present in each of the species under study. Domain organization analysis showed that most identified proteins are multi-domain proteins, owing to the modular rearrangement of protein domains during evolution. Most of these multi-domain proteins are widespread, while others display a lineage-specific distribution. Furthermore, the phylogenetic analyses reveal that some lectin families evolved to be similar to the phylogeny of the plant species, while others share a closer evolutionary history based on the corresponding protein domain architecture. Our results yield insights into the evolutionary relationships and functional divergence of plant lectins. PMID:28587095

  8. Modeling energy/economy interactions for conservation and renewable energy-policy analysis

    NASA Astrophysics Data System (ADS)

    Groncki, P. J.

    Energy policy and the implications for policy analysis and the methodological tools are discussed. The evolution of one methodological approach and the combined modeling system of the component models, their evolution in response to changing analytic needs, and the development of the integrated framework are reported. The analyses performed over the past several years are summarized. The current philosophy behind energy policy is discussed and compared to recent history. Implications for current policy analysis and methodological approaches are drawn.

  9. Gene Ontology-Based Analysis of Zebrafish Omics Data Using the Web Tool Comparative Gene Ontology.

    PubMed

    Ebrahimie, Esmaeil; Fruzangohar, Mario; Moussavi Nik, Seyyed Hani; Newman, Morgan

    2017-10-01

    Gene Ontology (GO) analysis is a powerful tool in systems biology, which uses a defined nomenclature to annotate genes/proteins within three categories: "Molecular Function," "Biological Process," and "Cellular Component." GO analysis can assist in revealing functional mechanisms underlying observed patterns in transcriptomic, genomic, and proteomic data. The already extensive and increasing use of zebrafish for modeling genetic and other diseases highlights the need to develop a GO analytical tool for this organism. The web tool Comparative GO was originally developed for GO analysis of bacterial data in 2013 (www.comparativego.com). We have now upgraded and elaborated this web tool for analysis of zebrafish genetic data using GOs and annotations from the Gene Ontology Consortium.
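
    The statistic underlying most GO term enrichment analyses of this kind is a hypergeometric test: is a GO term over-represented in a gene list relative to the genome background? A minimal sketch follows; all four counts are invented for illustration.

        # Minimal sketch: GO-term over-representation via a hypergeometric test.
        from scipy.stats import hypergeom

        N = 25_000   # genes in the background (assumed zebrafish-scale figure)
        K = 300      # background genes annotated with the GO term
        n = 150      # genes in the differentially expressed list
        k = 12       # list genes annotated with the term

        # P(X >= k) under sampling without replacement.
        p = hypergeom.sf(k - 1, N, K, n)
        print(f"enrichment p-value = {p:.3g}")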

  10. Modeling time-to-event (survival) data using classification tree analysis.

    PubMed

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (ie, easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.
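
    As a point of reference for the comparison above, the sketch below fits the standard Cox regression baseline with the lifelines package and reports its concordance. The simulated data set (exponential event times whose hazard depends on one covariate, with random censoring) is an illustrative assumption.

        # Minimal sketch: Cox proportional-hazards fit on censored data.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(8)
        n = 400
        x = rng.standard_normal(n)
        t_event = rng.exponential(scale=np.exp(-0.7 * x))   # hazard rises with x
        t_censor = rng.exponential(scale=1.5, size=n)
        df = pd.DataFrame({
            "x": x,
            "T": np.minimum(t_event, t_censor),
            "E": (t_event <= t_censor).astype(int),         # 1 = event observed
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="T", event_col="E")
        print(cph.summary[["coef", "p"]])
        print("concordance:", cph.concordance_index_)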

  11. Data warehouse model design technology analysis and research

    NASA Astrophysics Data System (ADS)

    Jiang, Wenhua; Li, Qingshui

    2012-01-01

    Existing data storage format can not meet the needs of information analysis, data warehouse onto the historical stage, the data warehouse is to support business decision making and the creation of specially designed data collection. With the data warehouse, the companies will all collected information is stored in the data warehouse. The data warehouse is organized according to some, making information easy to access and has value. This paper focuses on the establishment of data warehouse and analysis, design, data warehouse, two barrier models, and compares them.

  12. The multiple complex exponential model and its application to EEG analysis

    NASA Astrophysics Data System (ADS)

    Chen, Dao-Mu; Petzold, J.

    The paper presents a novel approach to the analysis of the EEG signal, which is based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and the results, estimated on the basis of simulated data, are presented and compared with those obtained by the conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are closer to the recent supposition of the nonlinear character of the brain's dynamic behavior.
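
    To show what estimating MCE parameters involves, the sketch below uses Prony's method, one classical route to fitting a sum of complex exponentials: linear prediction, polynomial rooting, then a least-squares amplitude fit. The paper itself uses a nonharmonic Fourier expansion algorithm, so this is a stand-in, and the two-component "EEG-like" test signal is invented.

        # Minimal sketch: Prony's method for a multiple complex exponential fit.
        import numpy as np

        def prony(x, p):
            """Fit x[n] ~ sum_k c_k * z_k**n with p complex exponentials."""
            N = len(x)
            # Linear prediction: x[n] = -a_1 x[n-1] - ... - a_p x[n-p]
            A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
            a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
            z = np.roots(np.concatenate(([1.0], a)))        # poles z_k
            V = np.vander(z, N, increasing=True).T          # z_k**n matrix
            c, *_ = np.linalg.lstsq(V, x, rcond=None)       # amplitudes c_k
            return z, c

        # Synthetic test: two damped oscillations (4 complex poles).
        fs, N = 100.0, 200
        n = np.arange(N)
        x = (np.exp(-0.01 * n) * np.cos(2 * np.pi * 10 * n / fs)
             + 0.5 * np.exp(-0.02 * n) * np.cos(2 * np.pi * 23 * n / fs))

        z, c = prony(x.astype(complex), p=4)
        freqs = np.abs(np.angle(z)) * fs / (2 * np.pi)
        print(np.sort(freqs))   # should recover ~10 Hz and ~23 Hz (each twice)

    The pole magnitudes |z_k| carry the damping information, which is the kind of parameter the paper argues reflects the nonlinear dynamics better than conventional spectral estimates.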

  13. Comparative transcriptome analysis between planarian Dugesia japonica and other platyhelminth species.

    PubMed

    Nishimura, Osamu; Hirao, Yukako; Tarui, Hiroshi; Agata, Kiyokazu

    2012-06-29

    Planarians are considered to be among the extant animals close to one of the earliest groups of organisms that acquired a central nervous system (CNS) during evolution. Planarians have a bilobed brain with nine lateral branches from which a variety of external signals are projected into different portions of the main lobes. Various interneurons process different signals to regulate behavior and learning/memory. Furthermore, planarians have robust regenerative ability and are attracting attention as a new model organism for the study of regeneration. Here we conducted large-scale EST analysis of the head region of the planarian Dugesia japonica to construct a database of the head-region transcriptome, and then performed comparative analyses among related species. A total of 54,752 high-quality EST reads were obtained from a head library of the planarian Dugesia japonica, and 13,167 unigene sequences were produced by de novo assembly. A new method devised here revealed that proteins related to metabolism and defense mechanisms have high flexibility of amino-acid substitutions within the planarian family. Eighty-two CNS-development genes were found in the planarian (cf. C. elegans 3; chicken 129). Comparative analysis revealed that 91% of the planarian CNS-development genes could be mapped onto the schistosome genome, but one-third of these shared genes were not expressed in the schistosome. We constructed a database that is a useful resource for comparative planarian transcriptome studies. Analysis comparing homologous genes between two planarian species showed that the potential of genes is important for accumulation of amino-acid substitutions. The presence of many CNS-development genes in our database supports the notion that the planarian has a fundamental brain with regard to evolution and development at not only the morphological/functional, but also the genomic, level. In addition, our results indicate that the planarian CNS-development genes already existed

  14. Comparing models for growth and management of forest tracts

    Treesearch

    J.J. Colbert; Michael Schuckers; Desta Fekedulegn

    2003-01-01

    The Stand Damage Model (SDM) is a PC-based model that is easily installed, calibrated and initialized for use in exploring the future growth and management of forest stands or small wood lots. We compare the basic individual tree growth model incorporated in this model with alternative models that predict the basal area growth of trees. The SDM is a gap-type simulator...

  15. Comparability of river suspended-sediment sampling and laboratory analysis methods

    USGS Publications Warehouse

    Groten, Joel T.; Johnson, Gregory D.

    2018-03-06

    Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low. The difference attributable to laboratory analysis methods was slightly greater than that attributable to field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream. The results indicate a smaller difference between samples collected with grab field sampling and analyzed for TSS and the concentration of fines in SSC. Even though differences are present, the strong correlations between SSC and TSS concentrations provide the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.
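
    The site-specific relation suggested above amounts to an ordinary least-squares fit predicting SSC from TSS at a given site. A minimal sketch follows; the paired concentrations are invented for illustration, and in practice one fit per site, with attention to the sand fraction, would be needed.

        # Minimal sketch: a site-specific TSS -> SSC regression.
        import numpy as np
        from scipy.stats import linregress

        tss = np.array([12, 25, 40, 61, 80, 105, 150, 190])   # mg/L (assumed)
        ssc = np.array([18, 37, 55, 90, 118, 160, 225, 290])  # mg/L (assumed)

        fit = linregress(tss, ssc)
        print(f"SSC = {fit.slope:.2f} * TSS + {fit.intercept:.1f}  (r = {fit.rvalue:.3f})")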

  16. Factor Analysis of Drawings: Application to college student models of the greenhouse effect

    NASA Astrophysics Data System (ADS)

    Libarkin, Julie C.; Thomas, Stephen R.; Ording, Gabriel

    2015-09-01

    Exploratory factor analysis was used to identify models underlying drawings of the greenhouse effect made by over 200 entering university freshmen. Initial content analysis allowed deconstruction of drawings into salient features, with grouping of these features via factor analysis. A resulting 4-factor solution explains 62% of the data variance, suggesting that 4 archetype models of the greenhouse effect dominate thinking within this population. Factor scores, indicating the extent to which each student's drawing aligned with representative models, were compared to performance on conceptual understanding and attitudes measures, demographics, and non-cognitive features of drawings. Student drawings were also compared to drawings made by scientists to ascertain the extent to which models reflect more sophisticated and accurate models. Results indicate that student and scientist drawings share some similarities, most notably the presence of some features of the most sophisticated non-scientific model held among the study population. Prior knowledge, prior attitudes, gender, and non-cognitive components are also predictive of an individual student's model. This work presents a new technique for analyzing drawings, with general implications for the use of drawings in investigating student conceptions.
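
    The drawing-analysis pipeline described above reduces a binary feature matrix (drawing × salient feature) with exploratory factor analysis, and the factor scores indicate how strongly each drawing aligns with each archetype model. The sketch below uses scikit-learn's FactorAnalysis, one convenient maximum-likelihood (unrotated) implementation; the data are simulated, and only the choice of 4 factors follows the text.

        # Minimal sketch: factor analysis of binary drawing features.
        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(9)
        n_drawings, n_features = 200, 20
        features = (rng.random((n_drawings, n_features)) < 0.3).astype(float)

        fa = FactorAnalysis(n_components=4, random_state=0)
        scores = fa.fit_transform(features)   # (200, 4) factor scores per drawing
        loadings = fa.components_             # (4, 20) feature loadings per factor

        # Rough proxy for each factor's share of the feature variance.
        print((loadings ** 2).sum(axis=1))
        print("first drawing's alignment with the 4 archetypes:", scores[0].round(2))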

  17. YersiniaBase: a genomic resource and analysis platform for comparative analysis of Yersinia.

    PubMed

    Tan, Shi Yang; Dutta, Avirup; Jakubovics, Nicholas S; Ang, Mia Yang; Siow, Cheuk Chuen; Mutha, Naresh Vr; Heydari, Hamed; Wee, Wei Yee; Wong, Guat Jah; Choo, Siew Woh

    2015-01-16

    Yersinia is a genus of Gram-negative bacteria that includes serious pathogens such as Yersinia pestis, which causes plague, Yersinia pseudotuberculosis, and Yersinia enterocolitica. The remaining species are generally considered non-pathogenic to humans, although there is evidence that at least some of these species can cause occasional infections using mechanisms distinct from those of the more pathogenic species. With the advances in sequencing technologies, many genomes of Yersinia have been sequenced. However, there is currently no specialized platform to hold the rapidly-growing Yersinia genomic data and to provide analysis tools, particularly for comparative analyses, which are required to provide improved insights into their biology, evolution and pathogenicity. To facilitate the ongoing and future research of Yersinia, especially of those species generally considered non-pathogenic, a well-defined repository and analysis platform is needed to hold the Yersinia genomic data and analysis tools for the Yersinia research community. Hence, we have developed YersiniaBase, a robust and user-friendly Yersinia resource and analysis platform for the analysis of Yersinia genomic data. YersiniaBase holds a total of twelve species and 232 genome sequences, of which the majority are Yersinia pestis. In order to smooth the process of searching genomic data in a large database, we implemented an Asynchronous JavaScript and XML (AJAX)-based real-time searching system in YersiniaBase. Besides incorporating existing tools, which include the JavaScript-based genome browser (JBrowse) and the Basic Local Alignment Search Tool (BLAST), YersiniaBase also has in-house developed tools: (1) Pairwise Genome Comparison tool (PGC) for comparing two user-selected genomes; (2) Pathogenomics Profiling Tool (PathoProT) for comparative pathogenomics analysis of Yersinia genomes; (3) YersiniaTree for constructing phylogenetic trees of Yersinia. We ran analyses based on the tools and genomic data in YersiniaBase and the

  18. Anatomical knowledge gain through a clay-modeling exercise compared to live and video observations.

    PubMed

    Kooloos, Jan G M; Schepens-Franke, Annelieke N; Bergman, Esther M; Donders, Rogier A R T; Vorstenbosch, Marc A T M

    2014-01-01

    Clay modeling is increasingly used as a teaching method other than dissection. The haptic experience during clay modeling is supposed to correspond to the learning effect of manipulations during exercises in the dissection room involving tissues and organs. We questioned this assumption in two pretest-post-test experiments. In these experiments, the learning effects of clay modeling were compared to either live observations (Experiment I) or video observations (Experiment II) of the clay-modeling exercise. The effects of learning were measured with multiple choice questions, extended matching questions, and recognition of structures on illustrations of cross-sections. Analysis of covariance with pretest scores as the covariate was used to elaborate the results. Experiment I showed a significantly higher post-test score for the observers, whereas Experiment II showed a significantly higher post-test score for the clay modelers. This study shows that (1) students who perform clay-modeling exercises show less gain in anatomical knowledge than students who attentively observe the same exercise being carried out and (2) performing a clay-modeling exercise is better in anatomical knowledge gain compared to the study of a video of the recorded exercise. The most important learning effect seems to be the engagement in the exercise, focusing attention and stimulating time on task. © 2014 American Association of Anatomists.

  19. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  20. Population Pharmacokinetic and Pharmacodynamic Model-Based Comparability Assessment of a Recombinant Human Epoetin Alfa and the Biosimilar HX575

    PubMed Central

    Yan, Xiaoyu; Lowe, Philip J.; Fink, Martin; Berghout, Alexander; Balser, Sigrid; Krzyzanski, Wojciech

    2012-01-01

    The aim of this study was to develop an integrated pharmacokinetic and pharmacodynamic (PK/PD) model and assess the comparability between epoetin alfa HEXAL/Binocrit (HX575) and a comparator epoetin alfa by a model-based approach. PK/PD data—including serum drug concentrations, reticulocyte counts, red blood cells, and hemoglobin levels—were obtained from 2 clinical studies. In total, 149 healthy men received multiple intravenous or subcutaneous doses of HX575 (100 IU/kg) and the comparator 3 times a week for 4 weeks. A population model based on pharmacodynamics-mediated drug disposition and cell maturation processes was used to characterize the PK/PD data for the 2 drugs. Simulations showed that, due to changes in target amount, total clearance may increase up to 2.4-fold compared with baseline. Further simulations suggested that once-weekly and thrice-weekly subcutaneous dosing regimens would result in similar efficacy. The findings from the model-based analysis were consistent with previous results using the standard noncompartmental approach, demonstrating PK/PD comparability between HX575 and the comparator. However, due to the complexity of the PK/PD model, control of random effects was not straightforward. Whereas population PK/PD model-based analyses are well suited to studying complex biological systems, such models have statistical limitations, and their comparability results should be interpreted carefully. PMID:22162538

  1. Canis familiaris As a Model for Non-Invasive Comparative Neuroscience.

    PubMed

    Bunford, Nóra; Andics, Attila; Kis, Anna; Miklósi, Ádám; Gácsi, Márta

    2017-07-01

    There is an ongoing need to improve animal models for investigating human behavior and its biological underpinnings. The domestic dog (Canis familiaris) is a promising model in cognitive neuroscience. However, before it can contribute to advances in this field in a comparative, reliable, and valid manner, several methodological issues warrant attention. We review recent non-invasive canine neuroscience studies, primarily focusing on (i) variability among dogs and between dogs and humans in cranial characteristics, and (ii) generalizability across dog and dog-human studies. We argue not for methodological uniformity but for functional comparability between methods, experimental designs, and neural responses. We conclude that the dog may become an innovative and unique model in comparative neuroscience, complementing more traditional models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. MODBASE, a database of annotated comparative protein structure models

    PubMed Central

    Pieper, Ursula; Eswar, Narayanan; Stuart, Ashley C.; Ilyin, Valentin A.; Sali, Andrej

    2002-01-01

    MODBASE (http://guitar.rockefeller.edu/modbase) is a relational database of annotated comparative protein structure models for all available protein sequences matched to at least one known protein structure. The models are calculated by MODPIPE, an automated modeling pipeline that relies on PSI-BLAST, IMPALA and MODELLER. MODBASE uses the MySQL relational database management system for flexible and efficient querying, and the MODVIEW Netscape plugin for viewing and manipulating multiple sequences and structures. It is updated regularly to reflect the growth of the protein sequence and structure databases, as well as improvements in the software for calculating the models. For ease of access, MODBASE is organized into different datasets. The largest dataset contains models for domains in 304 517 out of 539 171 unique protein sequences in the complete TrEMBL database (23 March 2001); only models based on significant alignments (PSI-BLAST E-value < 10^-4) and models assessed to have the correct fold are included. Other datasets include models for target selection and structure-based annotation by the New York Structural Genomics Research Consortium, models for prediction of genes in the Drosophila melanogaster genome, models for structure determination of several ribosomal particles and models calculated by the MODWEB comparative modeling web server. PMID:11752309

  3. A Lumped Computational Model for Sodium Sulfur Battery Analysis

    NASA Astrophysics Data System (ADS)

    Wu, Fan

    Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model, designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy and charge throughout the battery has been developed. The computation processes are coupled through Faraday's law, and solutions for the species concentrations, electrical potential and current are produced in a time-marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational and electrochemical models used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
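
    The abstract describes a time-marching scheme in which Faraday's law couples the applied current to species consumption, with properties updated from the evolving composition of each control volume. Below is a minimal Python sketch of that idea; the single tracked species, the parameter values, and the linear open-circuit-voltage relation are illustrative assumptions, not the model from the study.

```python
import numpy as np

F = 96485.0  # Faraday constant, C/mol

def simulate_cell(current_a, n_na0=2.0, z=1, dt=1.0, t_end=3600.0):
    """Explicit time march of a lumped cell; Faraday's law gives dn/dt = -I/(zF)."""
    n_na = n_na0                           # mol of sodium remaining (illustrative)
    t, history = 0.0, []
    while t < t_end and n_na > 0.0:
        n_na -= current_a / (z * F) * dt   # Faraday's law update
        soc = n_na / n_na0                 # state of charge
        ocv = 1.8 + 0.3 * soc              # assumed linear OCV relation (V)
        history.append((t, soc, ocv))
        t += dt
    return np.array(history)

hist = simulate_cell(current_a=10.0)
print(f"final SOC: {hist[-1, 1]:.3f}, final OCV: {hist[-1, 2]:.3f} V")
```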

  4. Video analysis of the flight of a model aircraft

    NASA Astrophysics Data System (ADS)

    Tarantino, Giovanni; Fazio, Claudio

    2011-11-01

    A video-analysis software tool has been employed in order to measure the steady-state values of the kinematics variables describing the longitudinal behaviour of a radio-controlled model aircraft during take-off, climbing and gliding. These experimental results have been compared with the theoretical steady-state configurations predicted by the phugoid model for longitudinal flight. A comparison with the parameters and performance of the full-size aircraft has also been outlined.
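
    For readers unfamiliar with the phugoid model mentioned above: the classical Lanchester approximation predicts the period of the longitudinal phugoid oscillation from airspeed alone, T ≈ √2·π·V/g. A short sketch (the airspeed value is an arbitrary example, not data from the study):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def phugoid_period(airspeed_ms):
    """Lanchester approximation of the phugoid period, T = sqrt(2)*pi*V/g."""
    return math.sqrt(2.0) * math.pi * airspeed_ms / G

# e.g., a model aircraft gliding at 12 m/s (illustrative airspeed)
print(f"predicted phugoid period: {phugoid_period(12.0):.1f} s")
```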

  5. Evaluation of an experimental rat model for comparative studies of bleaching agents.

    PubMed

    Cintra, Luciano Tavares Angelo; Benetti, Francine; Ferreira, Luciana Louzada; Rahal, Vanessa; Ervolino, Edilson; Jacinto, Rogério de Castilho; Gomes Filho, João Eduardo; Briso, André Luiz Fraga

    2016-04-01

    Dental materials in general are tested in different animal models prior to their clinical use in humans, except for bleaching agents. Objectives To evaluate an experimental rat model for comparative studies of bleaching agents, by investigating the influence of different concentrations and application times of H2O2 gel in the pulp tissue during in-office bleaching of rats' vital teeth. Material and Methods The right and left maxillary molars of 50 Wistar rats were bleached with 20% and 35% H2O2 gels, respectively, for 5, 10, 15, 30, or 45 min (n=10 rats/group). Ten animals were untreated (control). The rats were killed after 2 or 30 days, and the maxillae were examined by light microscopy. Inflammation was evaluated through histomorphometric analysis with inflammatory cell counts in the coronal and radicular thirds of the pulp. Fibroblasts were also counted. Scores were attributed to odontoblastic layer and vascular changes. Tertiary dentin area and pulp chamber central area were measured histomorphometrically. Data were compared by analysis of variance and the Kruskal-Wallis test (p<0.05). Results After 2 days, the amount of inflammatory cells increased in the occlusal third of the coronal pulp up to the 15-min application groups of each bleaching gel. In the groups exposed to each concentration for 30 and 45 min, the number of inflammatory cells decreased along with the appearance of necrotic areas. After 30 days, a reduction in the pulp chamber central area and an enlargement of the tertiary dentin area were observed, without the detection of inflammation areas. Conclusion The rat model of extracoronal bleaching proved adequate for studies of bleaching protocols, as it was possible to observe alterations in the pulp tissues and tooth structure caused by different concentrations and application periods of bleaching agents.

  6. Evaluation of an experimental rat model for comparative studies of bleaching agents

    PubMed Central

    Cintra, Luciano Tavares Angelo; Benetti, Francine; Ferreira, Luciana Lousada; Rahal, Vanessa; Ervolino, Edilson; Jacinto, Rogério de Castilho; Gomes, João Eduardo; Briso, André Luiz Fraga

    2016-01-01

    ABSTRACT Dental materials, in general, are tested in different animal models prior to their clinical use in humans, except for bleaching agents. Objectives To evaluate an experimental rat model for comparative studies of bleaching agents by investigating the influence of different concentrations and application times of H2O2 gel in the pulp tissue during in-office bleaching of rats' vital teeth. Material and methods The right and left maxillary molars of 50 Wistar rats were bleached with 20% and 35% H2O2 gels, respectively, for 5, 10, 15, 30, or 45 min (n=10 rats/group). Ten animals (control) were untreated. The rats were killed after 2 or 30 days, and the maxillae were examined by light microscopy. Inflammation was evaluated by histomorphometric analysis with inflammatory cell counting in the coronal and radicular thirds of the pulp. Fibroblasts were also counted. Scores were attributed to the odontoblastic layer and to vascular changes. The tertiary dentin area and the pulp chamber central area were histomorphometrically measured. Data were compared by the analysis of variance and the Kruskal-Wallis test (p<0.05). Results After 2 days, the amount of inflammatory cells increased in the occlusal third of the coronal pulp up to the 15-min application time for both concentrations of bleaching gel. In the 30- and 45-min groups of each concentration, the number of inflammatory cells decreased along with the appearance of necrotic areas. After 30 days, a reduction in the pulp chamber central area and an enlargement of the tertiary dentin area were observed without the detection of inflammation areas. Conclusion The rat model of extracoronal bleaching proved adequate for studies of bleaching protocols, as it was possible to observe alterations in the pulp tissues and in the tooth structure caused by different concentrations and periods of application of bleaching agents. PMID:27008262

  7. Evaluation of an experimental rat model for comparative studies of bleaching agents

    PubMed Central

    CINTRA, Luciano Tavares Angelo; BENETTI, Francine; FERREIRA, Luciana Louzada; RAHAL, Vanessa; ERVOLINO, Edilson; JACINTO, Rogério de Castilho; GOMES, João Eduardo; BRISO, André Luiz Fraga

    2016-01-01

    ABSTRACT Dental materials in general are tested in different animal models prior to their clinical use in humans, except for bleaching agents. Objectives To evaluate an experimental rat model for comparative studies of bleaching agents, by investigating the influence of different concentrations and application times of H2O2 gel in the pulp tissue during in-office bleaching of rats' vital teeth. Material and Methods The right and left maxillary molars of 50 Wistar rats were bleached with 20% and 35% H2O2 gels, respectively, for 5, 10, 15, 30, or 45 min (n=10 rats/group). Ten animals were untreated (control). The rats were killed after 2 or 30 days, and the maxillae were examined by light microscopy. Inflammation was evaluated through histomorphometric analysis with inflammatory cell counts in the coronal and radicular thirds of the pulp. Fibroblasts were also counted. Scores were attributed to odontoblastic layer and vascular changes. Tertiary dentin area and pulp chamber central area were measured histomorphometrically. Data were compared by analysis of variance and the Kruskal-Wallis test (p<0.05). Results After 2 days, the amount of inflammatory cells increased in the occlusal third of the coronal pulp up to the 15-min application groups of each bleaching gel. In the groups exposed to each concentration for 30 and 45 min, the number of inflammatory cells decreased along with the appearance of necrotic areas. After 30 days, a reduction in the pulp chamber central area and an enlargement of the tertiary dentin area were observed, without the detection of inflammation areas. Conclusion The rat model of extracoronal bleaching proved adequate for studies of bleaching protocols, as it was possible to observe alterations in the pulp tissues and tooth structure caused by different concentrations and application periods of bleaching agents. PMID:27119766

  8. Evaluation of an experimental rat model for comparative studies of bleaching agents.

    PubMed

    Cintra, Luciano Tavares Angelo; Benetti, Francine; Ferreira, Luciana Lousada; Rahal, Vanessa; Ervolino, Edilson; Jacinto, Rogério de Castilho; Gomes Filho, João Eduardo; Briso, André Luiz Fraga

    2016-01-01

    Dental materials, in general, are tested in different animal models prior to their clinical use in humans, except for bleaching agents. To evaluate an experimental rat model for comparative studies of bleaching agents by investigating the influence of different concentrations and application times of H2O2 gel in the pulp tissue during in-office bleaching of rats' vital teeth. The right and left maxillary molars of 50 Wistar rats were bleached with 20% and 35% H2O2 gels, respectively, for 5, 10, 15, 30, or 45 min (n=10 rats/group). Ten animals (control) were untreated. The rats were killed after 2 or 30 days, and the maxillae were examined by light microscopy. Inflammation was evaluated by histomorphometric analysis with inflammatory cell counting in the coronal and radicular thirds of the pulp. Fibroblasts were also counted. Scores were attributed to the odontoblastic layer and to vascular changes. The tertiary dentin area and the pulp chamber central area were histomorphometrically measured. Data were compared by the analysis of variance and the Kruskal-Wallis test (p<0.05). After 2 days, the amount of inflammatory cells increased in the occlusal third of the coronal pulp up to the 15-min application time for both concentrations of bleaching gel. In the 30- and 45-min groups of each concentration, the number of inflammatory cells decreased along with the appearance of necrotic areas. After 30 days, a reduction in the pulp chamber central area and an enlargement of the tertiary dentin area were observed without the detection of inflammation areas. The rat model of extracoronal bleaching proved adequate for studies of bleaching protocols, as it was possible to observe alterations in the pulp tissues and in the tooth structure caused by different concentrations and periods of application of bleaching agents.

  9. Comparing the line broadened quasilinear model to Vlasov code

    NASA Astrophysics Data System (ADS)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-03-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with the predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009); M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve better agreement with the results of the Vlasov solver, both with regard to a mode amplitude's time evolution to a saturated state and its final steady-state amplitude, within the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and by BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  10. Hierarchical model analysis of the Atlantic Flyway Breeding Waterfowl Survey

    USGS Publications Warehouse

    Sauer, John R.; Zimmerman, Guthrie S.; Klimstra, Jon D.; Link, William A.

    2014-01-01

    We used log-linear hierarchical models to analyze data from the Atlantic Flyway Breeding Waterfowl Survey. The survey has been conducted by state biologists each year since 1989 in the northeastern United States, from Virginia north to New Hampshire and Vermont. Although yearly population estimates from the survey are used by the United States Fish and Wildlife Service for estimating regional waterfowl population status for mallards (Anas platyrhynchos), black ducks (Anas rubripes), wood ducks (Aix sponsa), and Canada geese (Branta canadensis), they are not routinely adjusted to control for time-of-day effects and other survey design issues. The hierarchical model analysis permits estimation of year effects and population change while accommodating the repeated sampling of plots and controlling for time-of-day effects in counting. We compared population estimates from the current stratified random sample analysis to population estimates from hierarchical models with alternative model structures that describe year-to-year changes as random year effects, a trend with random year effects, or year effects modeled as 1-year differences. Patterns of population change from the hierarchical model results generally were similar to the patterns described by stratified random sample estimates, but significant visibility differences occurred between twilight and midday counts in all species. Controlling for the effects of time of day resulted in larger population estimates for all species in the hierarchical model analysis relative to the stratified random sample analysis. The hierarchical models also provided a convenient means of estimating population trend as derived statistics from the analysis. We detected significant declines in mallards and American black ducks and significant increases in wood ducks and Canada geese, a trend that had not been significant for 3 of these 4 species in the prior analysis. We recommend using hierarchical models for analysis of the Atlantic
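
    As a rough illustration of the kind of count model described above, the sketch below fits a fixed-effects Poisson log-linear model with year and time-of-day terms using statsmodels. A full hierarchical treatment with random year effects and repeated-plot structure would need a mixed-model or Bayesian framework; the column names and synthetic data here are assumptions for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic plot-level counts: an illustrative stand-in for the survey data
df = pd.DataFrame({
    "birds": rng.poisson(8, size=200),
    "year": rng.choice(range(1989, 1999), size=200),
    "midday": rng.integers(0, 2, size=200),  # 0 = twilight, 1 = midday count
})

# Log-linear Poisson model with year and time-of-day effects
fit = smf.glm("birds ~ C(year) + midday", data=df,
              family=sm.families.Poisson()).fit()
print(fit.summary().tables[1])
```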

  11. A comparative Thermal Analysis of conventional parabolic receiver tube and Cavity model tube in a Solar Parabolic Concentrator

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Ramakrishna, P.; Sangavi, S.

    2018-02-01

    Improvements in heating technology with solar energy are gaining focus, especially solar parabolic collectors. Solar heating in conventional parabolic collectors is achieved by concentrating radiation on receiver tubes. Conventional receiver tubes are open to the atmosphere and lose heat to ambient air currents. In order to reduce convection losses and improve the aperture area, we designed a tube with a cavity. This study compares the performance of the conventional tube and the cavity-model tube. The performance formulae were derived for the cavity model based on the conventional model. A reduction in the overall heat loss coefficient was observed for the cavity model, though the collector heat removal factor and collector efficiency were nearly the same for both models. An improvement in efficiency was also observed in the cavity model's performance. The approach of using a cavity-model tube as the receiver tube in solar parabolic collectors gave improved results and proved to be a worthwhile design consideration.

  12. Comparative analysis of European bat lyssavirus 1 pathogenicity in the mouse model.

    PubMed

    Eggerbauer, Elisa; Pfaff, Florian; Finke, Stefan; Höper, Dirk; Beer, Martin; Mettenleiter, Thomas C; Nolden, Tobias; Teifke, Jens-Peter; Müller, Thomas; Freuling, Conrad M

    2017-06-01

    European bat lyssavirus 1 is responsible for most bat rabies cases in Europe. Although EBLV-1 isolates display a high degree of sequence identity, different sublineages exist. In individual isolates various insertions and deletions have been identified, with unknown impact on viral replication and pathogenicity. In order to assess whether different genetic features of EBLV-1 isolates correlate with phenotypic changes, different EBLV-1 variants were compared for pathogenicity in the mouse model. Groups of three mice were infected intracranially (i.c.) with 10^2 TCID50/ml, and groups of six mice were infected intramuscularly (i.m.) with 10^5 TCID50/ml and 10^2 TCID50/ml as well as intranasally (i.n.) with 10^2 TCID50/ml. Significant differences in survival following i.m. inoculation with low doses as well as i.n. inoculation were observed. Also, striking variations in incubation periods following i.c. inoculation and i.m. inoculation with high doses were seen. The clinical picture also differed, ranging from general symptoms to spasms and aggressiveness, depending on the inoculation route. Immunohistochemistry of mouse brains showed that the virus distribution in the brain depended on the inoculation route. In conclusion, different EBLV-1 isolates differ in pathogenicity, indicating variation which is not reflected in studies of single isolates.

  13. Comparative analysis of European bat lyssavirus 1 pathogenicity in the mouse model

    PubMed Central

    Eggerbauer, Elisa; Pfaff, Florian; Finke, Stefan; Höper, Dirk; Beer, Martin; Mettenleiter, Thomas C.; Nolden, Tobias; Teifke, Jens-Peter; Müller, Thomas

    2017-01-01

    European bat lyssavirus 1 is responsible for most bat rabies cases in Europe. Although EBLV-1 isolates display a high degree of sequence identity, different sublineages exist. In individual isolates various insertions and deletions have been identified, with unknown impact on viral replication and pathogenicity. In order to assess whether different genetic features of EBLV-1 isolates correlate with phenotypic changes, different EBLV-1 variants were compared for pathogenicity in the mouse model. Groups of three mice were infected intracranially (i.c.) with 10^2 TCID50/ml, and groups of six mice were infected intramuscularly (i.m.) with 10^5 TCID50/ml and 10^2 TCID50/ml as well as intranasally (i.n.) with 10^2 TCID50/ml. Significant differences in survival following i.m. inoculation with low doses as well as i.n. inoculation were observed. Also, striking variations in incubation periods following i.c. inoculation and i.m. inoculation with high doses were seen. The clinical picture also differed, ranging from general symptoms to spasms and aggressiveness, depending on the inoculation route. Immunohistochemistry of mouse brains showed that the virus distribution in the brain depended on the inoculation route. In conclusion, different EBLV-1 isolates differ in pathogenicity, indicating variation which is not reflected in studies of single isolates. PMID:28628617

  14. Comparing five modelling techniques for predicting forest characteristics

    Treesearch

    Gretchen G. Moisen; Tracey S. Frescino

    2002-01-01

    Broad-scale maps of forest characteristics are needed throughout the United States for a wide variety of forest land management applications. Inexpensive maps can be produced by modelling forest class and structure variables collected in nationwide forest inventories as functions of satellite-based information. But little work has been directed at comparing modelling...

  15. Comparison of 3D quantitative structure-activity relationship methods: Analysis of the in vitro antimalarial activity of 154 artemisinin analogues by hypothetical active-site lattice and comparative molecular field analysis

    NASA Astrophysics Data System (ADS)

    Woolfrey, John R.; Avery, Mitchell A.; Doweyko, Arthur M.

    1998-03-01

    Two three-dimensional quantitative structure-activity relationship (3D-QSAR) methods, comparative molecular field analysis (CoMFA) and hypothetical active site lattice (HASL), were compared with respect to the analysis of a training set of 154 artemisinin analogues. Five models were created, including a complete HASL and two trimmed versions, as well as two CoMFA models (leave-one-out standard CoMFA and the guided-region selection protocol). Similar r2 and q2 values were obtained by each method, although some striking differences existed between CoMFA contour maps and the HASL output. Each of the four predictive models exhibited a similar ability to predict the activity of a test set of 23 artemisinin analogues, although some differences were noted as to which compounds were described well by either model.
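
    The q2 statistic reported in such studies is the leave-one-out cross-validated analogue of r2: q2 = 1 - PRESS/SS, where PRESS is the sum of squared leave-one-out prediction errors. A short scikit-learn sketch (the ridge regressor and random data are placeholders, not the CoMFA or HASL pipelines):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(154, 20))             # stand-in descriptor matrix
y = 2.0 * X[:, 0] + rng.normal(size=154)   # stand-in activity values

# Leave-one-out predictions, then q2 = 1 - PRESS / SS
pred = cross_val_predict(Ridge(alpha=1.0), X, y, cv=LeaveOneOut())
press = np.sum((y - pred) ** 2)
ss = np.sum((y - y.mean()) ** 2)
print(f"leave-one-out q2 = {1.0 - press / ss:.3f}")
```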

  16. Advancing team-based primary health care: a comparative analysis of policies in western Canada.

    PubMed

    Suter, Esther; Mallinson, Sara; Misfeldt, Renee; Boakye, Omenaa; Nasmith, Louise; Wong, Sabrina T

    2017-07-17

    We analyzed and compared primary health care (PHC) policies in British Columbia, Alberta and Saskatchewan to understand how they inform the design and implementation of team-based primary health care service delivery. The goal was to develop policy imperatives that can advance team-based PHC in Canada. We conducted comparative case studies (n = 3). The policy analysis included: Context review: We reviewed relevant information (2007 to 2014) from databases and websites. Policy review and comparative analysis: We compared and contrasted publicly available PHC policies. Key informant interviews: Key informants (n = 30) validated narratives prepared from the comparative analysis by offering contextual information on potential policy imperatives. Advisory group and roundtable: An expert advisory group guided this work and a key stakeholder roundtable event guided prioritization of policy imperatives. The concept of team-based PHC varies widely across and within the three provinces. We noted policy gaps related to team configuration, leadership, scope of practice, role clarity and financing of team-based care; few policies speak explicitly to monitoring and evaluation of team-based PHC. We prioritized four policy imperatives: (1) alignment of goals and policies at different system levels; (2) investment of resources for system change; (3) compensation models for all members of the team; and (4) accountability through collaborative practice metrics. Policies supporting team-based PHC have been slow to emerge, lacking a systematic and coordinated approach. Greater alignment with specific consideration of financing, reimbursement, implementation mechanisms and performance monitoring could accelerate systemic transformation by removing some well-known barriers to team-based care.

  17. PSAMM: A Portable System for the Analysis of Metabolic Models

    PubMed Central

    Steffensen, Jon Lund; Dufault-Thompson, Keith; Zhang, Ying

    2016-01-01

    The genome-scale models of metabolic networks have been broadly applied in phenotype prediction, evolutionary reconstruction, community functional analysis, and metabolic engineering. Despite the development of tools that support individual steps along the modeling procedure, it is still difficult to associate mathematical simulation results with the annotation and biological interpretation of metabolic models. In order to solve this problem, here we developed a Portable System for the Analysis of Metabolic Models (PSAMM), a new open-source software package that supports the integration of heterogeneous metadata in model annotations and provides a user-friendly interface for the analysis of metabolic models. PSAMM is independent of paid software environments like MATLAB, and all its dependencies are freely available for academic users. Compared to existing tools, PSAMM significantly reduced the running time of constraint-based analysis and enabled flexible settings of simulation parameters using simple one-line commands. The integration of heterogeneous, model-specific annotation information in PSAMM is achieved with a novel format of YAML-based model representation, which has several advantages, such as providing a modular organization of model components and simulation settings, enabling model version tracking, and permitting the integration of multiple simulation problems. PSAMM also includes a number of quality checking procedures to examine stoichiometric balance and to identify blocked reactions. Applying PSAMM to 57 models collected from current literature, we demonstrated how the software can be used for managing and simulating metabolic models. We identified a number of common inconsistencies in existing models and constructed an updated model repository to document the resolution of these inconsistencies. PMID:26828591
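
    To illustrate what a YAML-based model representation can look like, the sketch below defines a toy two-reaction model and loads it with PyYAML. The schema here is a generic illustration, not PSAMM's actual format.

```python
import yaml

# Hypothetical YAML model definition; PSAMM's real schema differs.
MODEL_YAML = """
name: toy_model
reactions:
  - id: R1
    equation: A + B => C
    lower: 0
    upper: 1000
  - id: R2
    equation: C => D
    lower: 0
    upper: 1000
"""

model = yaml.safe_load(MODEL_YAML)
print(model["name"], "with", len(model["reactions"]), "reactions")
for rxn in model["reactions"]:
    print(" ", rxn["id"], ":", rxn["equation"])
```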

  18. Methods of international health technology assessment agencies for economic evaluations--a comparative analysis.

    PubMed

    Mathes, Tim; Jacobs, Esther; Morfeld, Jana-Carina; Pieper, Dawid

    2013-09-30

    The number of Health Technology Assessment (HTA) agencies is increasing. One component of HTAs is the assessment of economic aspects, which is commonly done through economic evaluations. A convergence of recommendations for methods of health economic evaluations between international HTA agencies would facilitate the adaptation of results to different settings and avoid unnecessary expense. A first step in this direction is a detailed analysis of existing similarities and differences in recommendations to identify potential for harmonization. The objective is to provide an overview and comparison of the methodological recommendations of international HTA agencies for economic evaluations. The webpages of 127 international HTA agencies were searched for guidelines containing recommendations on methods for the preparation of economic evaluations. Additionally, the HTA agencies were asked to provide information on their methods for economic evaluations. Recommendations of the included guidelines were extracted into standardized tables covering 13 methodological aspects. All process steps were performed independently by two reviewers. Finally, 25 publications from 14 HTA agencies were included in the analysis. Methods for economic evaluations vary widely. The greatest accordance was found for the type of analysis and the comparator: cost-utility analyses or cost-effectiveness analyses are recommended, and the comparator should consistently be usual care. The greatest differences were found in the recommendations on the measurement/sources of effects, discounting, and sensitivity analysis. The main difference regarding effects is the focus either on efficacy or effectiveness. Recommended discounting rates range from 1.5%-5% for effects and 3%-5% for costs, whereby it is mostly recommended to use the same rate for costs and effects. With respect to sensitivity analysis, the main difference is that oftentimes either the probabilistic or the deterministic approach is recommended.
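
    Discounting, one of the aspects compared above, reduces future costs and effects to present values, PV = sum over t of x_t/(1+r)^t. The helper below shows how the range of recommended rates changes the present value of a cost stream (the cash-flow numbers are arbitrary examples):

```python
def present_value(flows, rate):
    """Discount a yearly stream (year 0 undiscounted) at the given annual rate."""
    return sum(x / (1.0 + rate) ** t for t, x in enumerate(flows))

costs = [1000.0] * 10  # illustrative: 1000 per year for 10 years

for rate in (0.015, 0.03, 0.05):  # span of rates recommended by agencies
    print(f"rate {rate:.1%}: present value = {present_value(costs, rate):,.0f}")
```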

  19. [Comparative analysis of Andean and Caribbean region health systems].

    PubMed

    Gómez-Camelo, Diana

    2005-01-01

    To carry out a comparative analysis of Andean and Caribbean health systems, contributing to the general panorama of health care system experience in the Andean and Caribbean region. This study was aimed at comparatively analyzing the health systems of Bolivia, Colombia, Ecuador, Peru, Venezuela, the Dominican Republic and Cuba between 1990 and 2004. Documentary information from secondary sources was used. Reforms and changes during this period were compared, as well as the systems' current configurations. Previously described typologies were used to study the health systems. Different organisational designs were found: a national health system (NHS), segmented systems, and systems based on mandatory insurance. The reforms introduced in the 1990s and the current proposals in almost all systems are directed towards adopting mandatory insurance via a basic package of services and strengthening competition in service provision through a public-private mix. The organisation and structure of most systems studied have introduced, and continue to introduce, changes in line with international guidelines. The generality of these structures means that efforts must still be made to adopt designs that strengthen them as instruments for improving populations' quality of life. Comparative analysis is a tool for studying health systems and producing information that can inform debate regarding current sector reform. This work took shape during the first approach to a comparative study of Andean region and Caribbean health systems.

  20. Probabilistic bias analysis in pharmacoepidemiology and comparative effectiveness research: a systematic review.

    PubMed

    Hunnicutt, Jacob N; Ulbricht, Christine M; Chrysanthopoulou, Stavroula A; Lapane, Kate L

    2016-12-01

    We systematically reviewed pharmacoepidemiologic and comparative effectiveness studies that use probabilistic bias analysis to quantify the effects of systematic error, including confounding, misclassification, and selection bias, on study results. We found articles published between 2010 and October 2015 through a citation search using Web of Science and Google Scholar and a keyword search using PubMed and Scopus. Eligibility of studies was assessed by one reviewer. Three reviewers independently abstracted data from eligible studies. Fifteen studies used probabilistic bias analysis and were eligible for data abstraction: nine simulated an unmeasured confounder and six simulated misclassification. The majority of studies simulating an unmeasured confounder did not specify the range of plausible estimates for the bias parameters. Studies simulating misclassification were in general clearer when reporting the plausible distribution of bias parameters. Regardless of the bias simulated, the probability distributions assigned to bias parameters, number of simulated iterations, sensitivity analyses, and diagnostics were not discussed in the majority of studies. Despite the prevalence of and concern about bias in pharmacoepidemiologic and comparative effectiveness studies, probabilistic bias analysis to quantitatively model the effect of bias was not widely used. The quality of reporting and use of this technique varied and was often unclear. Further discussion and dissemination of the technique are warranted. Copyright © 2016 John Wiley & Sons, Ltd.
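
    As a sketch of the technique the review examines, the code below runs a simple probabilistic bias analysis for an unmeasured binary confounder: bias parameters (confounder prevalence among the exposed and unexposed, and the confounder-outcome risk ratio) are drawn from assumed plausible distributions and used to divide the observed risk ratio by the implied bias factor (the external-adjustment formula). The observed estimate and the parameter distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

rr_obs = 1.8  # observed risk ratio (invented for illustration)

# Bias parameters drawn from assumed plausible distributions
p1 = rng.uniform(0.4, 0.6, n_sim)             # confounder prevalence, exposed
p0 = rng.uniform(0.1, 0.3, n_sim)             # confounder prevalence, unexposed
rr_cd = rng.triangular(1.5, 2.0, 3.0, n_sim)  # confounder-outcome risk ratio

# External-adjustment bias factor and bias-adjusted risk ratio
bias = (p1 * (rr_cd - 1.0) + 1.0) / (p0 * (rr_cd - 1.0) + 1.0)
rr_adj = rr_obs / bias

lo, med, hi = np.percentile(rr_adj, [2.5, 50.0, 97.5])
print(f"bias-adjusted RR: {med:.2f} (95% simulation interval {lo:.2f}-{hi:.2f})")
```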

  1. Comparative analysis of predictive models for nongenotoxic hepatocarcinogenicity using both toxicogenomics and quantitative structure-activity relationships.

    PubMed

    Liu, Zhichao; Kelly, Reagan; Fang, Hong; Ding, Don; Tong, Weida

    2011-07-18

    The primary testing strategy to identify nongenotoxic carcinogens largely relies on the 2-year rodent bioassay, which is time-consuming and labor-intensive. There is an increasing effort to develop alternative approaches to prioritize chemicals for, supplement, or even replace the cancer bioassay. In silico approaches based on quantitative structure-activity relationships (QSAR) are rapid and inexpensive and thus have been investigated for such purposes. A slightly more expensive approach based on short-term animal studies with toxicogenomics (TGx) represents another attractive option for this application. Thus, the primary questions are how much better predictive performance short-term TGx models can achieve compared with QSAR models, and what length of exposure is sufficient for high-quality prediction based on TGx. In this study, we developed predictive models for rodent liver carcinogenicity using gene expression data generated from short-term animal models at different time points and QSAR. The study focused on the prediction of nongenotoxic carcinogenicity, since genotoxic chemicals can be inexpensively removed from further development using various in vitro assays, individually or in combination. We identified 62 chemicals whose hepatocarcinogenic potential was available from the National Center for Toxicological Research liver cancer database (NCTRlcdb). The gene expression profiles of liver tissue obtained from rats treated with these chemicals at different time points (1 day, 3 days, and 5 days) are available from the Gene Expression Omnibus (GEO) database. Both TGx and QSAR models were developed on the basis of the same set of chemicals using the same modeling approach, a nearest-centroid method with minimum-redundancy maximum-relevance feature selection, with performance assessed using compound-based 5-fold cross-validation. We found that the TGx models outperformed QSAR in every aspect of modeling. For example, the
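
    The classifier named above, nearest centroid with compound-based cross-validation, is easy to sketch with scikit-learn. In this sketch a simple univariate filter stands in for mRMR feature selection, and the data are synthetic placeholders rather than the NCTRlcdb expression profiles.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.normal(size=(62, 500))    # 62 chemicals x 500 genes (synthetic)
y = rng.integers(0, 2, size=62)   # carcinogen / non-carcinogen labels

# Univariate filter standing in for mRMR, then nearest-centroid classification
clf = make_pipeline(SelectKBest(f_classif, k=20), NearestCentroid())
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```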

  2. Cost-Effectiveness Analysis of Stereotactic Body Radiation Therapy Compared With Radiofrequency Ablation for Inoperable Colorectal Liver Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hayeon, E-mail: kimh2@upmc.edu; Gill, Beant; Beriwal, Sushil

    Purpose: To conduct a cost-effectiveness analysis to determine whether stereotactic body radiation therapy (SBRT) is a cost-effective therapy compared with radiofrequency ablation (RFA) for patients with unresectable colorectal cancer (CRC) liver metastases. Methods and Materials: A cost-effectiveness analysis was conducted using a Markov model and 1-month cycle over a lifetime horizon. Transition probabilities, quality-of-life utilities, and costs associated with SBRT and RFA were captured in the model on the basis of a comprehensive literature review and Medicare reimbursements in 2014. Strategies were compared using the incremental cost-effectiveness ratio, with effectiveness measured in quality-adjusted life years (QALYs). To account for model uncertainty, 1-way and probabilistic sensitivity analyses were performed. Strategies were evaluated with a willingness-to-pay threshold of $100,000 per QALY gained. Results: In base case analysis, treatment costs for 3 fractions of SBRT and 1 RFA procedure were $13,000 and $4397, respectively. Median survival was assumed the same for both strategies (25 months). The SBRT costs $8202 more than RFA while gaining 0.05 QALYs, resulting in an incremental cost-effectiveness ratio of $164,660 per QALY gained. In 1-way sensitivity analyses, results were most sensitive to variation of median survival from both treatments. Stereotactic body radiation therapy was economically reasonable if better survival was presumed (>1 month gain) or if used for large tumors (>4 cm). Conclusions: If equal survival is assumed, SBRT is not cost-effective compared with RFA for inoperable colorectal liver metastases. However, if better local control leads to small survival gains with SBRT, this strategy becomes cost-effective. Ideally, these results should be confirmed with prospective comparative data.
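
    A Markov cost-effectiveness model of the kind described runs a cohort through health states in fixed cycles, accumulating discounted costs and QALYs per strategy, and then forms the incremental cost-effectiveness ratio ICER = ΔC/ΔQALY. The sketch below is a deliberately tiny two-state (alive/dead) illustration; the up-front costs are taken from the abstract, while the monthly cost, utility, and death probabilities are invented inputs, not the published model.

```python
def cohort_run(monthly_cost, utility, p_death, up_front,
               months=120, annual_dr=0.03):
    """Discounted cost and QALY totals for one strategy (alive/dead model)."""
    alive, cost, qaly = 1.0, up_front, 0.0
    dr = (1.0 + annual_dr) ** (1.0 / 12.0) - 1.0  # monthly discount rate
    for m in range(months):
        disc = 1.0 / (1.0 + dr) ** m
        cost += alive * monthly_cost * disc
        qaly += alive * (utility / 12.0) * disc
        alive *= 1.0 - p_death                    # monthly transition to death
    return cost, qaly

# Up-front costs from the abstract; all other inputs are invented
c_rfa, q_rfa = cohort_run(200.0, 0.78, 0.028, 4397.0)
c_sbrt, q_sbrt = cohort_run(200.0, 0.78, 0.026, 13000.0)
print(f"ICER: ${(c_sbrt - c_rfa) / (q_sbrt - q_rfa):,.0f} per QALY gained")
```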

  3. Clinical and multiple gene expression variables in survival analysis of breast cancer: Analysis with the hypertabastic survival model

    PubMed Central

    2012-01-01

    Background We explore the benefits of applying a new proportional hazard model to analyze survival of breast cancer patients. As a parametric model, the hypertabastic survival model offers a closer fit to experimental data than Cox regression, and furthermore provides explicit survival and hazard functions which can be used as additional tools in the survival analysis. In addition, one of our main concerns is utilization of multiple gene expression variables. Our analysis treats the important issue of interaction of different gene signatures in the survival analysis. Methods The hypertabastic proportional hazards model was applied in survival analysis of breast cancer patients. This model was compared, using statistical measures of goodness of fit, with models based on the semi-parametric Cox proportional hazards model and the parametric log-logistic and Weibull models. The explicit functions for hazard and survival were then used to analyze the dynamic behavior of hazard and survival functions. Results The hypertabastic model provided the best fit among all the models considered. Use of multiple gene expression variables also provided a considerable improvement in the goodness of fit of the model, as compared to use of only one. By utilizing the explicit survival and hazard functions provided by the model, we were able to determine the magnitude of the maximum rate of increase in hazard, and the maximum rate of decrease in survival, as well as the times when these occurred. We explore the influence of each gene expression variable on these extrema. Furthermore, in the cases of continuous gene expression variables, represented by a measure of correlation, we were able to investigate the dynamics with respect to changes in gene expression. Conclusions We observed that use of three different gene signatures in the model provided a greater combined effect and allowed us to assess the relative importance of each in determination of outcome in this data set. These
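
    The model comparison described here, parametric survival fits judged by goodness-of-fit statistics, can be sketched with the lifelines library. The hypertabastic distribution itself is not available in lifelines, so Weibull and log-logistic fitters stand in, and the data are synthetic:

```python
import numpy as np
from lifelines import LogLogisticFitter, WeibullFitter

rng = np.random.default_rng(3)
T = rng.weibull(1.5, size=300) * 24.0   # synthetic survival times (months)
E = rng.uniform(size=300) < 0.8         # ~20% right-censored, for illustration

# Lower AIC indicates the better-fitting parametric model
for fitter in (WeibullFitter(), LogLogisticFitter()):
    fitter.fit(T, event_observed=E)
    print(f"{type(fitter).__name__}: AIC = {fitter.AIC_:.1f}")
```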

  4. Flows of the Tycho Crater type, comparative analysis

    NASA Astrophysics Data System (ADS)

    Bratkov, Yury

    Some embeddings of the Tycho Crater type flow, or more generally of the Tycho Butterfly type flow, are demonstrated, and a comparative analysis is given. Additionally, the identity of the Earth's World Ocean and the Moon's Global Ocean is demonstrated. Supersonic flows (jets, shock waves, Mach stems) are comparatively studied [1]. References: [1] Bratkov Yu.N., Caspian Seas, http://viXra.org/abs/1211.0067, 12 Nov 2012

  5. Analysis of Magnitude Correlations in a Self-Similar model of Seismicity

    NASA Astrophysics Data System (ADS)

    Zambrano, A.; Joern, D.

    2017-12-01

    A recent model of seismicity that incorporates a self-similar Omori-Utsu relation, used to describe the temporal evolution of earthquake triggering, has been shown to provide a more accurate description of seismicity in Southern California than epidemic-type aftershock sequence models. Earthquake forecasting is an active research area in which one debated point is whether magnitude correlations of earthquakes exist within real-world seismic data. Prior to this work, the analysis of magnitude correlations in this self-similar model had not been addressed. Here we present statistical properties of the magnitude correlations for the self-similar model, along with an analytical analysis of the branching-ratio and criticality parameters.

  6. Neutral model analysis of landscape patterns from mathematical morphology

    Treesearch

    Kurt H. Riitters; Peter Vogt; Pierre Soille; Jacek Kozak; Christine Estreguil

    2007-01-01

    Mathematical morphology encompasses methods for characterizing land-cover patterns in ecological research and biodiversity assessments. This paper reports a neutral model analysis of patterns in the absence of a structuring ecological process, to help set standards for comparing and interpreting patterns identified by mathematical morphology on real land-cover maps. We...

  7. A comparative analysis of 7.0-Tesla magnetic resonance imaging and histology measurements of knee articular cartilage in a canine posterolateral knee injury model: a preliminary analysis.

    PubMed

    Pepin, Scott R; Griffith, Chad J; Wijdicks, Coen A; Goerke, Ute; McNulty, Margaret A; Parker, Josh B; Carlson, Cathy S; Ellermann, Jutta; LaPrade, Robert F

    2009-11-01

    There has recently been increased interest in the use of 7.0-T magnetic resonance imaging for evaluating articular cartilage degeneration and quantifying the progression of osteoarthritis. The purpose of this study was to evaluate articular cartilage cross-sectional area and maximum thickness in the medial compartment of intact and destabilized canine knees using 7.0-T magnetic resonance images and compare these results with those obtained from the corresponding histologic sections. Controlled laboratory study. Five canines had a surgically created unilateral grade III posterolateral knee injury that was followed for 6 months before euthanasia. The opposite, noninjured knee was used as a control. At necropsy, 3-dimensional gradient echo images of the medial tibial plateau of both knees were obtained using a 7.0-T magnetic resonance imaging scanner. Articular cartilage area and maximum thickness in this site were digitally measured on the magnetic resonance images. The proximal tibias were processed for routine histologic analysis with hematoxylin and eosin staining. Articular cartilage area and maximum thickness were measured in histologic sections corresponding to the sites of the magnetic resonance slices. The magnetic resonance imaging results revealed an increase in articular cartilage area and maximum thickness in surgical knees compared with control knees in all specimens; these changes were significant for both parameters (P <.05 for area; P <.01 for thickness). The average increase in area was 14.8% and the average increase in maximum thickness was 15.1%. The histologic results revealed an average increase in area of 27.4% (P = .05) and an average increase in maximum thickness of 33.0% (P = .06). Correlation analysis between the magnetic resonance imaging and histology data revealed that the area values were significantly correlated (P < .01), but the values for thickness obtained from magnetic resonance imaging were not significantly different from the

  8. Genome-wide comparative analysis of four Indian Drosophila species.

    PubMed

    Mohanty, Sujata; Khanna, Radhika

    2017-12-01

    Comparative analysis of multiple genomes of closely or distantly related Drosophila species undoubtedly creates excitement among evolutionary biologists exploring genomic changes from an ecological and evolutionary perspective. We present herewith the de novo assembled whole-genome sequences of four Drosophila species of Indian origin, D. bipectinata, D. takahashii, D. biarmipes and D. nasuta, generated using Next Generation Sequencing technology on an Illumina platform, along with their detailed assembly statistics. Comparative genomics analyses, e.g. gene prediction and annotation, functional and orthogroup analysis of coding sequences, and genome-wide SNP distribution, were performed. The whole genome of Zaprionus indianus of Indian origin, published earlier by us, and the genome sequences of the 12 previously sequenced Drosophila species available in the NCBI database were included in the analysis. The present work is part of our ongoing genomics project on Indian Drosophila species.

  9. High-resolution comparative modeling with RosettaCM.

    PubMed

    Song, Yifan; DiMaio, Frank; Wang, Ray Yu-Ruei; Kim, David; Miles, Chris; Brunette, Tj; Thompson, James; Baker, David

    2013-10-08

    We describe an improved method for comparative modeling, RosettaCM, which optimizes a physically realistic all-atom energy function over the conformational space defined by homologous structures. Given a set of sequence alignments, RosettaCM assembles topologies by recombining aligned segments in Cartesian space and building unaligned regions de novo in torsion space. The junctions between segments are regularized using a loop closure method combining fragment superposition with gradient-based minimization. The energies of the resulting models are optimized by all-atom refinement, and the most representative low-energy model is selected. The CASP10 experiment suggests that RosettaCM yields models with more accurate side-chain and backbone conformations than other methods when the sequence identity to the templates is greater than ∼15%. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Impact of model-based risk analysis for liver surgery planning.

    PubMed

    Hansen, C; Zidowitz, S; Preim, B; Stavrou, G; Oldhafer, K J; Hahn, H K

    2014-05-01

    A model-based risk analysis for oncologic liver surgery was described in previous work (Preim et al. in Proceedings of international symposium on computer assisted radiology and surgery (CARS), Elsevier, Amsterdam, pp. 353–358, 2002; Hansen et al. Int J Comput Assist Radiol Surg 4(5):469–474, 2009). In this paper, we present an evaluation of this method. To assess whether and how the risk analysis facilitates the process of liver surgery planning, an explorative user study with 10 liver experts was conducted. The purpose was to compare and analyze their decision-making. The results of the study show that model-based risk analysis enhances the awareness of surgical risk in the planning stage. Participants preferred smaller resection volumes and agreed more on the safety margins' width when the risk analysis was available. In addition, time to complete the planning task and the confidence of participants were not increased when using the risk analysis. This work shows that the applied model-based risk analysis may influence important planning decisions in liver surgery. It lays a basis for further clinical evaluations and points out important fields for future research.

  11. Cost-effectiveness analysis of a new 8% capsaicin patch compared to existing therapies for postherpetic neuralgia.

    PubMed

    Armstrong, Edward P; Malone, Daniel C; McCarberg, Bill; Panarites, Christopher J; Pham, Sissi V

    2011-05-01

    The purpose of this study was to compare the cost-effectiveness of a new 8% capsaicin patch to that of the current treatments for postherpetic neuralgia (PHN), including tricyclic antidepressants (TCAs), topical lidocaine patches, duloxetine, gabapentin, and pregabalin. A 1-year Markov model was constructed for PHN with monthly cycles, including dose titration and management of adverse events. The perspective of the analysis was that of a payer, a managed-care organization. Clinical trials were used to determine the proportion of patients achieving at least a 30% improvement in PHN pain, the efficacy parameter. The outcome was cost per quality-adjusted life-year (QALY); second-order probabilistic sensitivity analyses were conducted. The effectiveness results indicated that the 8% capsaicin patch and the topical lidocaine patch were significantly more effective than the oral PHN products. TCAs were least costly and significantly less costly than duloxetine, pregabalin, the topical lidocaine patch, and the 8% capsaicin patch, but not gabapentin. The incremental cost-effectiveness ratio for the 8% capsaicin patch overlapped with that of the topical lidocaine patch and was within the accepted threshold of cost per QALY gained compared to TCAs, duloxetine, gabapentin, and pregabalin. The assumed retreatment frequency for the 8% capsaicin patch significantly impacts its cost-effectiveness results. There are several limitations to this analysis. Since no head-to-head studies were identified, this model used inputs from multiple clinical trials. Also, a last-observation-carried-forward process was assumed to have continued for the duration of the model. Additionally, the trials with duloxetine may have over-predicted its efficacy in PHN. Although a 30% improvement in pain is often an endpoint in clinical trials, some patients may require greater or less improvement in pain to be considered a clinical success. The effectiveness results demonstrated that 8% capsaicin and topical lidocaine

  12. Methods and theory in bone modeling drift: comparing spatial analyses of primary bone distributions in the human humerus.

    PubMed

    Maggiano, Corey M; Maggiano, Isabel S; Tiesler, Vera G; Chi-Keb, Julio R; Stout, Sam D

    2016-01-01

    This study compares two novel methods quantifying bone shaft tissue distributions, and relates observations on human humeral growth patterns for applications in anthropological and anatomical research. Microstructural variation in compact bone occurs due to developmental and mechanically adaptive circumstances that are 'recorded' by forming bone and are important for interpretations of growth, health, physical activity, adaptation, and identity in the past and present. Those interpretations hinge on a detailed understanding of the modeling process by which bones achieve their diametric shape, diaphyseal curvature, and general position relative to other elements. Bone modeling is a complex aspect of growth, potentially causing the shaft to drift transversely through formation and resorption on opposing cortices. Unfortunately, the specifics of modeling drift are largely unknown for most skeletal elements. Moreover, bone modeling has seen little quantitative methodological development compared with secondary bone processes, such as intracortical remodeling. The techniques proposed here, starburst point-count and 45° cross-polarization hand-drawn histomorphometry, permit the statistical and populational analysis of human primary tissue distributions and provide similar results despite being suitable for different applications. This analysis of a pooled archaeological and modern skeletal sample confirms the importance of extreme asymmetry in bone modeling as a major determinant of microstructural variation in diaphyses. Specifically, humeral drift is posteromedial in the human humerus, accompanied by a significant rotational trend. In general, results encourage the usage of endocortical primary bone distributions as an indicator and summary of bone modeling drift, enabling quantitative analysis by direction and proportion in other elements and populations. © 2015 Anatomical Society.

  13. Comparative Analysis of AhR-Mediated TCDD-Elicited Gene Expression in Human Liver Adult Stem Cells

    PubMed Central

    Kim, Suntae; Dere, Edward; Burgoon, Lyle D.; Chang, Chia-Cheng; Zacharewski, Timothy R.

    2009-01-01

    Time course and dose-response studies were conducted in HL1-1 cells, a human liver cell line with stem cell–like characteristics, to assess the differential gene expression elicited by 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) compared with other established models. Cells were treated with 0.001, 0.01, 0.1, 1, 10, or 100nM TCDD or dimethyl sulfoxide vehicle control for 12 h for the dose-response study, or with 10nM TCDD or vehicle for 1, 2, 4, 8, 12, 24, or 48 h for the time course study. Elicited changes were monitored using a human cDNA microarray with 6995 represented genes. Empirical Bayes analysis identified 144 genes differentially expressed at one or more time points following treatment. Most genes exhibited dose-dependent responses including CYP1A1, CYP1B1, ALDH1A3, and SLC7A5 genes. Comparative analysis of HL1-1 differential gene expression to human HepG2 data identified 74 genes with comparable temporal expression profiles including 12 putative primary responses. HL1-1–specific changes were related to lipid metabolism and immune responses, consistent with effects elicited in vivo. Furthermore, comparative analysis of HL1-1 cells with mouse Hepa1c1c7 hepatoma cell lines and C57BL/6 hepatic tissue identified 18 and 32 commonly regulated orthologous genes, respectively, with functions associated with signal transduction, transcriptional regulation, metabolism and transport. Although some common pathways are affected, the results suggest that TCDD elicits species- and model-specific gene expression profiles. PMID:19684285

  14. Capturing tumor complexity in vitro: Comparative analysis of 2D and 3D tumor models for drug discovery.

    PubMed

    Stock, Kristin; Estrada, Marta F; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph

    2016-07-01

    Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard cell carcinoma lines: MCF7, LNCaP, NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry and treatment responses showed that end points differed according to cell type, stromal co-culture and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models.

  15. Capturing tumor complexity in vitro: Comparative analysis of 2D and 3D tumor models for drug discovery

    PubMed Central

    Stock, Kristin; Estrada, Marta F.; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E.; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C.; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph

    2016-01-01

    Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three-dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard carcinoma cell lines: MCF7, LNCaP, and NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry and treatment responses showed that end points differed according to cell type, stromal co-culture and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models. PMID:27364600

  16. Digital model as an alternative to plaster model in assessment of space analysis

    PubMed Central

    Kumar, A. Anand; Phillip, Abraham; Kumar, Sathesh; Rawat, Anuradha; Priya, Sakthi; Kumaran, V.

    2015-01-01

    Introduction: Digital three-dimensional models are widely used for orthodontic diagnosis. The purpose of this study was to appraise the accuracy of digital models obtained from computer-aided design/computer-aided manufacturing (CAD/CAM) and cone-beam computed tomography (CBCT) for tooth-width measurements and the Bolton analysis. Materials and Methods: Digital models (CAD/CAM, CBCT) and a plaster model were made for each of 50 subjects. Tooth-width measurements on the digital models (CAD/CAM, CBCT) were compared with those on the corresponding plaster models. The anterior and overall Bolton ratios were calculated for each participant and for each method. The paired t-test was applied to determine the validity. Results: Tooth-width measurements and the anterior and overall Bolton ratios of the CAD/CAM and CBCT digital models did not differ significantly from those of the plaster models. Conclusion: Hence, both CBCT and CAD/CAM are trustworthy, promising techniques that can replace plaster models owing to their overwhelming advantages. PMID:26538899
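
    The Bolton analysis described in this record reduces to ratio arithmetic over mesiodistal tooth widths plus a paired t-test. The following Python sketch illustrates the calculation; the tooth widths, per-subject ratios, and sample size are invented placeholders, not data from the study.

      import numpy as np
      from scipy import stats

      def bolton_ratios(mand, maxi):
          # Anterior ratio: 6 anterior teeth; overall ratio: 12 teeth (percent).
          return (100 * mand[:6].sum() / maxi[:6].sum(),
                  100 * mand[:12].sum() / maxi[:12].sum())

      # Hypothetical mesiodistal widths (mm) for one subject.
      mand = np.array([5.3, 5.9, 6.8, 7.0, 7.1, 5.4,
                       10.9, 11.2, 10.5, 10.7, 7.2, 7.0])
      maxi = np.array([8.6, 8.4, 6.7, 6.6, 7.9, 7.8,
                       10.1, 10.3, 9.9, 10.0, 7.4, 7.3])
      ant, ovr = bolton_ratios(mand, maxi)
      print(f"anterior ratio {ant:.1f}%, overall ratio {ovr:.1f}%")

      # Paired t-test across subjects: plaster vs digital overall ratios
      # (hypothetical values for five subjects).
      plaster = np.array([91.2, 90.8, 92.1, 91.5, 90.9])
      digital = np.array([91.4, 90.7, 92.0, 91.8, 91.0])
      t, p = stats.ttest_rel(plaster, digital)
      print(f"paired t = {t:.2f}, p = {p:.3f}")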

  17. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

    Physically-based modeling is a wide-spread tool in the understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the
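
    As a rough illustration of the POD workflow this abstract describes (snapshots, singular value decomposition, projection), the following Python sketch reduces a toy linear system; the system matrix, dimensions, and rank are arbitrary stand-ins for a PDE-based groundwater model, not anything from the study.

      import numpy as np

      rng = np.random.default_rng(0)
      n, n_snap, r = 200, 40, 5        # full dimension, snapshots, reduced rank

      A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
      A /= np.max(np.abs(np.linalg.eigvals(A)))   # keep the toy system stable

      # Collect snapshots of the full model x_{k+1} = A x_k.
      X = np.empty((n, n_snap))
      X[:, 0] = rng.standard_normal(n)
      for k in range(1, n_snap):
          X[:, k] = A @ X[:, k - 1]

      U, s, _ = np.linalg.svd(X, full_matrices=False)
      Phi = U[:, :r]          # basis: singular vectors with the most energy

      A_r = Phi.T @ A @ Phi   # Galerkin-projected (reduced) dynamics, r x r
      x_r = Phi.T @ X[:, 0]   # reduced initial state
      for k in range(1, n_snap):   # cheap reduced-model time stepping
          x_r = A_r @ x_r
      print("reconstruction error:", np.linalg.norm(Phi @ x_r - X[:, -1]))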

  18. Combining Static Analysis and Model Checking for Software Analysis

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2003-01-01

    We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial order information which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial order information is safe and the whole state space is explored.
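
    The iteration described here can be pictured schematically as an alternating refinement loop that stops at a fixed point. The toy Python sketch below conveys only the control flow; the two analysis functions are invented placeholders for real static-analysis and model-checking tools.

      # Schematic only: static analysis supplies partial-order information,
      # the model checker returns aliasing facts, and the loop repeats until
      # neither side learns anything new (the fixed point).
      def static_analysis(aliases):
          # Optimistic: assumes fewer orderings once aliasing is known.
          return frozenset(("t1", "t2")) - aliases

      def model_check(partial_order):
          # Pretend exploration discovers one aliasing fact.
          return frozenset({"t1"} if "t1" in partial_order else set())

      aliases, partial_order = frozenset(), None
      while True:
          new_po = static_analysis(aliases)
          if new_po == partial_order:   # fixed point: information is now safe
              break
          partial_order = new_po
          aliases = aliases | model_check(partial_order)
      print("converged partial-order info:", sorted(partial_order))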

  19. Engine System Loads Analysis Compared to Hot-Fire Data

    NASA Technical Reports Server (NTRS)

    Frady, Gregory P.; Jennings, John M.; Mims, Katherine; Brunty, Joseph; Christensen, Eric R.; McConnaughey, Paul R. (Technical Monitor)

    2002-01-01

    Early implementation of structural dynamics finite element analyses for the calculation of design loads is considered common design practice in high-volume manufacturing industries such as the automotive and aeronautical industries. However, because rocket engine development programs start only rarely, these tools are relatively new to the design of rocket engines. In the NASA MC-1 engine program, the focus was to reduce the cost-to-weight ratio. The structural dynamics analysis techniques were tailored in this program to meet both production and structural design goals. Perturbation of rocket engine design parameters resulted in a number of MC-1 load cycles necessary to characterize the impact due to mass and stiffness changes. Evolution of loads and load extraction methodologies, parametric considerations and a discussion of load path sensitivities are important during the design and integration of a new engine system. During the final stages of development, it is important to verify the results of an engine system model to determine the validity of the results. During the final stages of the MC-1 program, hot-fire test results were obtained and compared to the structural design loads calculated by the engine system model. These comparisons are presented in this paper.

  20. The comparative kinetic analysis of Acetocell and Lignoboost® lignin pyrolysis: the estimation of the distributed reactivity models.

    PubMed

    Janković, Bojan

    2011-10-01

    The non-isothermal pyrolysis kinetics of Acetocell (organosolv) and Lignoboost® (kraft) lignins, in an inert atmosphere, have been studied by thermogravimetric analysis. Using isoconversional analysis, it was concluded that the apparent activation energy for both lignins strongly depends on conversion, showing that the pyrolysis of lignins is not a single chemical process. It was identified that the pyrolysis of Acetocell and Lignoboost® lignin takes place over three reaction steps, which was confirmed by the appearance of the corresponding isokinetic relationships (IKR). It was found that the major pyrolysis stage of both lignins is characterized by stilbene pyrolysis reactions, which were subsequently followed by decomposition reactions of products derived from the stilbene pyrolytic process. It was concluded that the non-isothermal pyrolysis of Acetocell and Lignoboost® lignins can be best described by n-th (n>1) reaction order kinetics, using the Weibull mixture model (as the distributed reactivity model) with alternating shape parameters. Copyright © 2011 Elsevier Ltd. All rights reserved.
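
    The abstract does not name the isoconversional method used; as one common choice, a Friedman-type differential analysis can be sketched as below, where ln(dα/dt) at a fixed conversion is regressed against 1/T across heating-rate runs and the slope yields the apparent activation energy. The synthetic data are constructed to have a known Ea; none of the numbers come from the paper.

      import numpy as np

      R = 8.314  # gas constant, J/(mol K)

      def friedman_Ea(temps_K, rates, alphas, alpha_level):
          # temps_K[i], rates[i]: T and d(alpha)/dt for heating-rate run i,
          # sampled at conversions alphas[i]. Returns Ea (J/mol).
          inv_T, ln_rate = [], []
          for T, r, a in zip(temps_K, rates, alphas):
              j = np.argmin(np.abs(a - alpha_level))  # sample nearest level
              inv_T.append(1.0 / T[j])
              ln_rate.append(np.log(r[j]))
          slope, _ = np.polyfit(inv_T, ln_rate, 1)    # slope = -Ea/R
          return -slope * R

      # Synthetic runs consistent with a single Ea of ~150 kJ/mol.
      Ea_true, A = 150e3, 1e10
      alphas = [np.linspace(0.05, 0.95, 50)] * 3
      temps_K = [np.linspace(500, 700, 50) + s for s in (0, 10, 20)]
      rates = [A * np.exp(-Ea_true / (R * T)) * (1 - a)
               for T, a in zip(temps_K, alphas)]
      print(f"Ea at alpha=0.5: "
            f"{friedman_Ea(temps_K, rates, alphas, 0.5) / 1e3:.0f} kJ/mol")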

  1. Evaluating the risks of clinical research: direct comparative analysis.

    PubMed

    Rid, Annette; Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S; Wendler, David

    2014-09-01

    Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed "risks of daily life" standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. This study employed a conceptual and normative analysis, together with the use of an illustrative example. Different risks are composed of the same basic elements: type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the "risks of daily life" standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We thus illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Direct comparative analysis is a systematic method for applying the "risks of daily life" standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about research risks.

  2. [Cost-effectiveness analysis of etanercept compared with other biologic therapies in the treatment of rheumatoid arthritis].

    PubMed

    Salinas-Escudero, Guillermo; Vargas-Valencia, Juan; García-García, Erika Gabriela; Munciño-Ortega, Emilio; Galindo-Suárez, Rosa María

    2013-01-01

    To conduct a cost-effectiveness analysis of etanercept compared with other biologic therapies in the treatment of moderate or severe rheumatoid arthritis in patients unresponsive to prior treatment with immune-selective anti-inflammatory derivatives. A pharmacoeconomic model based on decision analysis was employed to assess the clinical outcome after giving etanercept, infliximab, adalimumab or tocilizumab to treat moderate or severe rheumatoid arthritis. Effectiveness of medications was assessed with improvement rates of 20 % or 70 % in the parameters established by the American College of Rheumatology (ACR 20 and ACR 70). The model showed that etanercept had the most effective therapeutic response rate: 79.7 % for ACR 20 and 31.4 % for ACR 70, compared with the response to other treatments. Etanercept also had the lowest cost ($149,629.10 per patient) and the lowest average cost-effectiveness ratio ($187,740.40 per clinical success for ACR 20 and $476,525.80 per clinical success for ACR 70) among the biologic therapies. We demonstrated that treatment with etanercept is more effective and less expensive than the other drugs, making it the more efficient therapeutic option in terms of both average and incremental cost-effectiveness ratios for the treatment of rheumatoid arthritis.

  3. Cost-effectiveness of unicondylar versus total knee arthroplasty: a Markov model analysis.

    PubMed

    Peersman, Geert; Jak, Wouter; Vandenlangenbergh, Tom; Jans, Christophe; Cartier, Philippe; Fennema, Peter

    2014-01-01

    Unicondylar knee arthroplasty (UKA) is believed to lead to less morbidity and enhanced functional outcomes when compared with total knee arthroplasty (TKA). Conversely, UKA is also associated with a higher revision risk than TKA. In order to further clarify the key differences between these separate procedures, the current study assessing the cost-effectiveness of UKA versus TKA was undertaken. A state-transition Markov model was developed to compare the cost-effectiveness of UKA versus TKA for unicondylar osteoarthritis using a Belgian payer's perspective. The model was designed to include the possibility of two revision procedures. Model estimates were obtained through literature review and revision rates were based on registry data. Threshold analysis and probabilistic sensitivity analysis were performed to assess the model's robustness. UKA was associated with a cost reduction of €2,807 and a utility gain of 0.04 quality-adjusted life years in comparison with TKA. Analysis determined that the model is sensitive to clinical effectiveness, and that a marginal reduction in the clinical performance of UKA would lead to TKA being the more cost-effective solution. UKA yields clear advantages in terms of costs and marginal advantages in terms of health effects, in comparison with TKA. © 2014 Elsevier B.V. All rights reserved.
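
    A state-transition cohort model of the kind described here can be sketched in a few lines. In the Python illustration below, the states, transition probabilities, costs, utilities, horizon, and discount rate are all hypothetical placeholders rather than the paper's calibrated Belgian inputs; running it once per procedure and differencing the outputs would give the incremental cost and QALY figures.

      import numpy as np

      states = ["well", "revised", "dead"]
      P = np.array([[0.97, 0.02, 0.01],    # annual transitions from "well"
                    [0.00, 0.98, 0.02],    # from "revised"
                    [0.00, 0.00, 1.00]])   # "dead" is absorbing
      cost = np.array([200.0, 8000.0, 0.0])   # annual cost per state (EUR)
      util = np.array([0.85, 0.70, 0.0])      # annual utility per state

      def run_cohort(P, cost, util, years=20, disc=0.03):
          x = np.array([1.0, 0.0, 0.0])    # whole cohort starts in "well"
          total_cost = total_qaly = 0.0
          for t in range(years):
              d = (1 + disc) ** -t         # discount factor for cycle t
              total_cost += d * x @ cost
              total_qaly += d * x @ util
              x = x @ P                    # advance the cohort one cycle
          return total_cost, total_qaly

      c, q = run_cohort(P, cost, util)
      print(f"discounted cost EUR {c:,.0f}, QALYs {q:.2f}")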

  4. Estimating, Testing, and Comparing Specific Effects in Structural Equation Models: The Phantom Model Approach

    ERIC Educational Resources Information Center

    Macho, Siegfried; Ledermann, Thomas

    2011-01-01

    The phantom model approach for estimating, testing, and comparing specific effects within structural equation models (SEMs) is presented. The rationale underlying this novel method consists in representing the specific effect to be assessed as a total effect within a separate latent variable model, the phantom model that is added to the main…

  5. Cost-effectiveness analysis of a randomized trial comparing care models for chronic kidney disease.

    PubMed

    Hopkins, Robert B; Garg, Amit X; Levin, Adeera; Molzahn, Anita; Rigatto, Claudio; Singer, Joel; Soltys, George; Soroka, Steven; Parfrey, Patrick S; Barrett, Brendan J; Goeree, Ron

    2011-06-01

    The potential cost and effectiveness of a nephrologist/nurse-based multifaceted intervention for stage 3 to 4 chronic kidney disease are not known. This study examines the cost-effectiveness of a chronic disease management model for chronic kidney disease. Cost and cost-effectiveness were prospectively gathered alongside a multicenter trial. The Canadian Prevention of Renal and Cardiovascular Endpoints Trial (CanPREVENT) randomized 236 patients to receive usual care (controls) and another 238 patients to multifaceted nurse/nephrologist-supported care that targeted factors associated with the development of kidney and cardiovascular disease (intervention). Costs and outcomes over 2 years were examined to determine the incremental cost-effectiveness of the intervention. The base-case analysis included disease-related costs, and the sensitivity analysis included all costs. Consideration of all costs produced statistically significant differences. A lower number of days in hospital explained most of the cost difference. For both the base-case and the sensitivity analysis with all costs included, the intervention group required fewer resources and had higher quality of life. The direction of the results was unchanged by the inclusion of various types of costs, by the choice of payer or societal perspective, by changes to the discount rate, and across levels of GFR. The nephrologist/nurse-based multifaceted intervention represents good value for money because it reduces costs without reducing quality of life for patients with chronic kidney disease.

  6. Cost Utility Analysis of Topical Steroids Compared With Dietary Elimination for Treatment of Eosinophilic Esophagitis.

    PubMed

    Cotton, Cary C; Erim, Daniel; Eluri, Swathi; Palmer, Sarah H; Green, Daniel J; Wolf, W Asher; Runge, Thomas M; Wheeler, Stephanie; Shaheen, Nicholas J; Dellon, Evan S

    2017-06-01

    Topical corticosteroids or dietary elimination are recommended as first-line therapies for eosinophilic esophagitis, but data to directly compare these therapies are scant. We performed a cost utility comparison of topical corticosteroids and the 6-food elimination diet (SFED) in treatment of eosinophilic esophagitis, from the payer perspective. We used a modified Markov model based on current clinical guidelines, in which transition between states depended on histologic response simulated at the individual cohort-member level. Simulation parameters were defined by systematic review and meta-analysis to determine the base-case estimates and bounds of uncertainty for sensitivity analysis. Meta-regression models included adjustment for differences in study and cohort characteristics. In the base-case scenario, topical fluticasone was about as effective as SFED but more expensive at a 5-year time horizon ($9261.58 vs $5719.72 per person). SFED was more effective and less expensive than topical fluticasone and topical budesonide in the base-case scenario. Probabilistic sensitivity analysis revealed little uncertainty in relative treatment effectiveness. There was somewhat greater uncertainty in the relative cost of treatments; most simulations found SFED to be less expensive. In a cost utility analysis comparing topical corticosteroids and SFED for first-line treatment of eosinophilic esophagitis, the therapies were similar in effectiveness. SFED was on average less expensive, and more cost effective in most simulations, than topical budesonide and topical fluticasone, from a payer perspective and not accounting for patient-level costs or quality of life. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.

  7. Comparing soil moisture memory in satellite observations and models

    NASA Astrophysics Data System (ADS)

    Stacke, Tobias; Hagemann, Stefan; Loew, Alexander

    2013-04-01

    A major obstacle to a correct parametrization of soil processes in large-scale global land surface models is the lack of long-term soil moisture observations for large parts of the globe. Currently, a compilation of soil moisture data derived from a range of satellites is released by the ESA Climate Change Initiative (ECV_SM). Comprising the period from 1978 until 2010, it provides the opportunity to compute climatologically relevant statistics on a quasi-global scale and to compare these to the output of climate models. Our study is focused on the investigation of soil moisture memory in satellite observations and models. As a proxy for memory we compute the autocorrelation length (ACL) of the available satellite data and of the uppermost soil layer of the models. In addition to the ECV_SM data, AMSR-E soil moisture is used as an observational estimate. Simulated soil moisture fields are taken from the ERA-Interim reanalysis and generated with the land surface model JSBACH, which was driven with quasi-observational meteorological forcing data. The satellite data show ACLs between one week and one month for the greater part of the land surface, while the models simulate a longer memory of up to two months. Some patterns are similar in models and observations, e.g. a longer memory in the Sahel Zone and the Arabian Peninsula, but the models are not able to reproduce regions with a very short ACL of just a few days. If the long-term seasonality is subtracted from the data, the memory is strongly shortened, indicating the importance of seasonal variations for the memory in most regions. Furthermore, we analyze the change of soil moisture memory in the different soil layers of the models to investigate to which extent the surface soil moisture includes information about the whole soil column. A first analysis reveals that the ACL increases for deeper layers. However, its increase is stronger in the soil moisture anomaly than in its absolute values and the first even exceeds the
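
    As a sketch of the memory proxy used here, the autocorrelation length can be computed as the first lag at which the lagged autocorrelation falls below 1/e; the threshold and the synthetic AR(1)-plus-seasonality series below are assumptions for illustration, not the study's definition or data. The example also reproduces the qualitative point that removing the seasonal cycle shortens the apparent memory.

      import numpy as np

      def autocorr_length(x, max_lag=120):
          # First lag (days) where autocorrelation drops below 1/e.
          x = x - x.mean()
          denom = (x ** 2).sum()
          for lag in range(1, max_lag):
              r = (x[:-lag] * x[lag:]).sum() / denom
              if r < 1.0 / np.e:
                  return lag
          return max_lag

      rng = np.random.default_rng(1)
      # Synthetic daily soil moisture: AR(1) memory plus an annual cycle.
      n, phi = 3000, 0.95
      noise = rng.standard_normal(n)
      sm = np.empty(n)
      sm[0] = 0.0
      for t in range(1, n):
          sm[t] = phi * sm[t - 1] + noise[t]
      season = 0.5 * np.sin(2 * np.pi * np.arange(n) / 365.25)

      print("ACL with seasonality:   ", autocorr_length(sm + season), "days")
      print("ACL without seasonality:", autocorr_length(sm), "days")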

  8. Viscosity models for pure hydrocarbons at extreme conditions: A review and comparative study

    DOE PAGES

    Baled, Hseen O.; Gamwo, Isaac K.; Enick, Robert M.; ...

    2018-01-12

    Here, viscosity is a critical fundamental property required in many applications in the chemical and oil industries. In this review the performance of seven select viscosity models, representative of various predictive and correlative approaches, is discussed and evaluated by comparison to experimental data of 52 pure hydrocarbons including straight-chain alkanes, branched alkanes, cycloalkanes, and aromatics. This analysis considers viscosity data to extremely high-temperature, high-pressure conditions up to 573 K and 300 MPa. Unsatisfactory results are found, particularly at high pressures, with the Chung-Ajlan-Lee-Starling, Pedersen-Fredenslund, and Lohrenz-Bray-Clark models commonly used for oil reservoir simulation. If sufficient experimental viscosity data are readily available to determine model-specific parameters, the free volume theory and the expanded fluid theory models provide generally comparable results that are superior to those obtained with the friction theory, particularly at pressures higher than 100 MPa. Otherwise, the entropy scaling method by Lötgering-Lin and Gross is recommended as the best predictive model.

  9. Viscosity models for pure hydrocarbons at extreme conditions: A review and comparative study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baled, Hseen O.; Gamwo, Isaac K.; Enick, Robert M.

    Here, viscosity is a critical fundamental property required in many applications in the chemical and oil industries. In this review the performance of seven select viscosity models, representative of various predictive and correlative approaches, is discussed and evaluated by comparison to experimental data of 52 pure hydrocarbons including straight-chain alkanes, branched alkanes, cycloalkanes, and aromatics. This analysis considers viscosity data to extremely high-temperature, high-pressure conditions up to 573 K and 300 MPa. Unsatisfactory results are found, particularly at high pressures, with the Chung-Ajlan-Lee-Starling, Pedersen-Fredenslund, and Lohrenz-Bray-Clark models commonly used for oil reservoir simulation. If sufficient experimental viscosity data are readily available to determine model-specific parameters, the free volume theory and the expanded fluid theory models provide generally comparable results that are superior to those obtained with the friction theory, particularly at pressures higher than 100 MPa. Otherwise, the entropy scaling method by Lötgering-Lin and Gross is recommended as the best predictive model.

  10. Observed & Modeled Changes in the Onset of Spring: A Preliminary Comparative Analysis by Geographic Regions of the USA

    NASA Astrophysics Data System (ADS)

    Enquist, C.

    2012-12-01

    Phenology, the study of seasonal life cycle events in plants and animals, is a well-recognized indicator of climate change impacts on people and nature. Models, experiments, and observational studies show changes in plant and animal phenology as a function of environmental change. Current research aims to improve our understanding of changes by enhancing existing models, analyzing observations, synthesizing previous research, and comparing outputs. Local to regional climatology is a critical driver of phenological variation of organisms across scales. Because plants respond to the cumulative effects of daily weather over an extended period, the timing of life cycle events is an effective integrator of climate data. One specific measure, leaf emergence, is particularly important because it often shows a strong response to temperature change, and is crucial for assessment of processes related to the start and duration of the growing season. Schwartz et al. (2006) developed a suite of models (the "Spring Indices") linking plant development, from historical data on leafing and flowering of cloned lilac and honeysuckle, with basic climatic drivers to monitor changes related to the start of the spring growing season. These models can be generated at any location that has a daily max-min temperature time series. The new version of these models is called the "Extended Spring Indices," or SI-x (Schwartz et al. in press). The SI-x model outputs (first leaf date and first bloom date) are produced similarly to the original models (SI-o), but do not incorporate accumulated chilling hours; rather, energy accumulation starts for all stations on January 1. This change extends the locations where SI model output can be generated into the sub-tropics, allowing full coverage of the conterminous USA. Both SI model versions are highly correlated, with mean bias and mean absolute differences around two days or less, and a similar bias and absolute errors when compared to cloned lilac data. To
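
    A highly simplified sketch of the idea behind such indices: accumulate warmth from January 1 using daily max-min temperatures and flag first leaf when a threshold is crossed. The base temperature, threshold, and synthetic temperature series below are invented; the actual SI-x models use more elaborate multi-day predictors.

      import numpy as np

      def first_leaf_day(tmax, tmin, base=0.0, threshold=250.0):
          # tmax, tmin: daily series (deg C) starting Jan 1. Returns the
          # day-of-year when accumulated growing degree days first exceed
          # `threshold`, or None if the threshold is never reached.
          gdd = np.maximum((tmax + tmin) / 2.0 - base, 0.0)
          cum = np.cumsum(gdd)
          if cum[-1] < threshold:
              return None
          return int(np.argmax(cum >= threshold)) + 1

      rng = np.random.default_rng(2)
      doy = np.arange(365)
      # Synthetic mid-latitude temperatures with an annual cycle plus noise.
      tmin = -8 + 20 * np.sin(2 * np.pi * (doy - 105) / 365) \
             + rng.normal(0, 2, 365)
      print("first leaf DOY:", first_leaf_day(tmin + 8.0, tmin))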

  11. Jump Model / Comparability Ratio Model — Joinpoint Help System 4.4.0.0

    Cancer.gov

    The Jump Model / Comparability Ratio Model in the Joinpoint software provides a direct estimation of trend data (e.g. cancer rates) where there is a systematic scale change, which causes a “jump” in the rates, but is assumed not to affect the underlying trend.

  12. A selection model for accounting for publication bias in a full network meta-analysis.

    PubMed

    Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia

    2014-12-30

    Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, because it is generalizable with respect to the number of treatments and study arms, and because it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully than the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency. Copyright © 2014 John Wiley & Sons, Ltd.
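
    The selection mechanism underlying Copas-type models can be sketched as a publication propensity that rises with study precision and is correlated with the study's own error, so "lucky" results are published more often. The Python simulation below uses illustrative parameter values, not the paper's, to show how such selection biases a naive pooled estimate.

      import numpy as np

      rng = np.random.default_rng(3)
      true_effect, n = 0.2, 20000
      g0, g1, rho = -1.0, 0.08, 0.8   # illustrative selection parameters

      se = rng.uniform(0.05, 0.6, n)           # study standard errors
      eps = rng.standard_normal(n)             # standardized study error
      est = true_effect + se * eps             # observed effect estimates
      # Selection propensity: rises with precision (g1/se) and is
      # correlated (rho) with the study's own error term.
      delta = rho * eps + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
      published = (g0 + g1 / se + delta) > 0

      w = 1.0 / se[published] ** 2             # fixed-effect weights
      pooled = (w * est[published]).sum() / w.sum()
      print(f"published {published.mean():.0%}; "
            f"naive pooled = {pooled:.3f} (truth {true_effect})")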

  13. Cost-Effectiveness Analysis Comparing Pre-Diagnosis Autism Spectrum Disorder (ASD)-Targeted Intervention with Ontario's Autism Intervention Program

    ERIC Educational Resources Information Center

    Penner, Melanie; Rayar, Meera; Bashir, Naazish; Roberts, S. Wendy; Hancock-Howard, Rebecca L.; Coyte, Peter C.

    2015-01-01

    Novel management strategies for autism spectrum disorder (ASD) propose providing interventions before diagnosis. We performed a cost-effectiveness analysis comparing the costs and dependency-free life years (DFLYs) generated by pre-diagnosis intensive Early Start Denver Model (ESDM-I); pre-diagnosis parent-delivered ESDM (ESDM-PD); and the Ontario…

  14. Conceptual Models of Depression in Primary Care Patients: A Comparative Study

    PubMed Central

    Karasz, Alison; Garcia, Nerina; Ferri, Lucia

    2009-01-01

    Conventional psychiatric treatment models are based on a biopsychiatric model of depression. A plausible explanation for low rates of depression treatment utilization among ethnic minorities and the poor is that members of these communities do not share the cultural assumptions underlying the biopsychiatric model. The study examined conceptual models of depression among depressed patients from various ethnic groups, focusing on the degree to which patients’ conceptual models ‘matched’ a biopsychiatric model of depression. The sample included 74 primary care patients from three ethnic groups screening positive for depression. We administered qualitative interviews assessing patients’ conceptual representations of depression. The analysis proceeded in two phases. The first phase involved a strategy called ‘quantitizing’ the qualitative data. A rating scheme was developed and applied to the data by a rater blind to study hypotheses. The data were subjected to statistical analyses. The second phase of the analysis involved the analysis of thematic data using standard qualitative techniques. Study hypotheses were largely supported. The qualitative analysis provided a detailed picture of primary care patients’ conceptual models of depression and suggested interesting directions for future research. PMID:20182550

  15. "Plateau"-related summary statistics are uninformative for comparing working memory models.

    PubMed

    van den Berg, Ronald; Ma, Wei Ji

    2014-10-01

    Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon (Ma, Husain, & Bays, Nature Neuroscience, 17, 347-356, 2014). Zhang and Luck (Nature, 453(7192), 233-235, 2008) and Anderson, Vogel, and Awh (Attention, Perception, & Psychophysics, 74(5), 891-910, 2011) noticed that as more items need to be remembered, "memory noise" seems to first increase and then reach a "stable plateau." They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided at most 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. Therefore, at realistic numbers of trials, plateau-related summary statistics are highly unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (Attention, Perception, & Psychophysics, 74(5), 891-910, 2011), we found that the evidence in the summary statistics was at most 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. The evidence in the raw data, in fact, strongly favored the slotless model. These findings call into question claims about working memory that are based on summary statistics.
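
    A generic model-recovery exercise in the spirit of this analysis can be sketched as follows: simulate trial-level data from a known model, score candidate models by their log evidence on the raw data, and count recoveries. The two candidate error distributions below are simple stand-ins, not the actual slot and slotless working-memory models.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)

      def log_evidence(errors, model):
          # Fixed parameters for simplicity (no prior integration here).
          if model == "mixture":   # guessing + precise memory (true model)
              lp = np.logaddexp(
                  np.log(0.7) + stats.norm.logpdf(errors, 0.0, 0.3),
                  np.log(0.3) + stats.uniform.logpdf(errors, -np.pi, 2 * np.pi))
          else:                    # single broad error distribution
              lp = stats.norm.logpdf(errors, 0.0, 0.8)
          return lp.sum()

      wins = 0
      for _ in range(200):
          guess = rng.random(300) < 0.3      # 30% random-guess trials
          errors = np.where(guess,
                            rng.uniform(-np.pi, np.pi, 300),
                            rng.normal(0.0, 0.3, 300))
          # Log Bayes factor computed on the raw trial-level data.
          wins += log_evidence(errors, "mixture") > log_evidence(errors, "broad")
      print(f"true model recovered in {wins / 200:.0%} of synthetic datasets")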

  16. Comparing fluid mechanics models with experimental data.

    PubMed Central

    Spedding, G R

    2003-01-01

    The art of modelling the physical world lies in the appropriate simplification and abstraction of the complete problem. In fluid mechanics, the Navier-Stokes equations provide a model that is valid under most circumstances germane to animal locomotion, but the complexity of solutions provides strong incentive for the development of further, more simplified practical models. When the flow organizes itself so that all shearing motions are collected into localized patches, then various mathematical vortex models have been very successful in predicting and furthering the physical understanding of many flows, particularly in aerodynamics. Experimental models have the significant added convenience that the fluid mechanics can be generated by a real fluid, not a model, provided the appropriate dimensionless groups have similar values. Then, analogous problems can be encountered in making intelligible but independent descriptions of the experimental results. Finally, model predictions and experimental results may be compared if, and only if, numerical estimates of the likely variations in the tested quantities are provided. Examples from recent experimental measurements of wakes behind a fixed wing and behind a bird in free flight are used to illustrate these principles. PMID:14561348

  17. Quantitative image analysis of immunohistochemical stains using a CMYK color model

    PubMed Central

    Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W

    2007-01-01

    Background Computer image analysis techniques have decreased the effects of observer biases and increased the sensitivity and throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin-counterstained histological sections. Results The spectral characteristics of the chromogens AEC, DAB and NovaRed as well as the counterstain hematoxylin were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities on a 0–255 scale (24-bit image file), as well as compared to the hematoxylin counterstain, was greatest using the Yellow channel of a CMYK color model, suggesting an improved sensitivity for IHC evaluation compared to other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using the Yellow channel image analysis, was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, a quantification of DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion The Yellow channel image analysis method based on a CMYK color model is independent of observer biases for threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity and is applicable to simple automation procedures. These characteristics are advantageous for both
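
    The Yellow-channel idea can be sketched with the standard naive RGB-to-CMYK conversion; the two synthetic "pixels" below (a brown DAB-like stain versus a bluish hematoxylin counterstain) and the positivity threshold are illustrative assumptions, not values from the paper.

      import numpy as np

      def rgb_to_cmyk(rgb):
          # rgb: float array in [0, 1], shape (..., 3). Returns C, M, Y, K.
          k = 1.0 - rgb.max(axis=-1)
          denom = np.where(k < 1.0, 1.0 - k, 1.0)  # avoid divide-by-zero
          c = (1.0 - rgb[..., 0] - k) / denom
          m = (1.0 - rgb[..., 1] - k) / denom
          y = (1.0 - rgb[..., 2] - k) / denom
          return c, m, y, k

      # Synthetic 2-pixel "image": brown DAB-like stain vs bluish hematoxylin.
      img = np.array([[[0.55, 0.35, 0.15],    # brown: strong yellow component
                       [0.35, 0.40, 0.65]]])  # blue: almost no yellow
      _, _, yellow, _ = rgb_to_cmyk(img)
      print("Yellow channel (0-1):", np.round(yellow, 2))
      print("mean Y over 'positive' pixels:", yellow[yellow > 0.3].mean())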

  18. Comparative hazard analysis and toxicological modeling of diverse nanomaterials using the embryonic zebrafish (EZ) metric of toxicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harper, Bryan; Thomas, Dennis G.; Chikkagoudar, Satish

    The integration of rapid assays, large data sets, informatics and modeling can overcome current barriers in understanding nanomaterial structure-toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality, were established at realistic exposure levels and used to develop a predictive model of nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a predictive model of gold nanoparticle toxicity to embryonic zebrafish. In addition, our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. These findings reveal the need to expeditiously increase the availability of quantitative measures of nanomaterial hazard and broaden the sharing of that data and knowledge to support predictive modeling. In addition, research should continue to focus on methodologies for developing predictive models of nanomaterial hazard based on sub-lethal responses to low dose exposures.

  19. Comparative hazard analysis and toxicological modeling of diverse nanomaterials using the embryonic zebrafish (EZ) metric of toxicity

    DOE PAGES

    Harper, Bryan; Thomas, Dennis G.; Chikkagoudar, Satish; ...

    2015-06-04

    The integration of rapid assays, large data sets, informatics and modeling can overcome current barriers in understanding nanomaterial structure-toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality, were established at realistic exposure levels and used to develop a predictive model of nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a predictive model of gold nanoparticle toxicity to embryonic zebrafish. In addition, our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. These findings reveal the need to expeditiously increase the availability of quantitative measures of nanomaterial hazard and broaden the sharing of that data and knowledge to support predictive modeling. In addition, research should continue to focus on methodologies for developing predictive models of nanomaterial hazard based on sub-lethal responses to low dose exposures.

  20. A comparative analysis of readmission rates after outpatient cosmetic surgery.

    PubMed

    Mioton, Lauren M; Alghoul, Mohammed S; Kim, John Y S

    2014-02-01

    Despite the increasing scrutiny of surgical procedures, outpatient cosmetic surgery has an established record of safety and efficacy. A key measure in assessing surgical outcomes is the examination of readmission rates. However, there is a paucity of data on unplanned readmission following cosmetic surgery procedures. The authors studied readmission rates for outpatient cosmetic surgery and compared the data with readmission rates for other surgical procedures. The 2011 National Surgical Quality Improvement Program (NSQIP) data set was queried for all outpatient procedures. Readmission rates were calculated for the 5 surgical specialties with the greatest number of outpatient procedures and for the overall outpatient cosmetic surgery population. Subgroup analysis was performed on the 5 most common cosmetic surgery procedures. Multivariate regression models were used to determine predictors of readmission for cosmetic surgery patients. The 2879 isolated outpatient cosmetic surgery cases had an associated 0.90% unplanned readmission rate. The 5 specialties with the highest number of outpatient surgical procedures were general, orthopedic, gynecologic, urologic, and otolaryngologic surgery; their unplanned readmission rates ranged from 1.21% to 3.73%. The 5 most common outpatient cosmetic surgery procedures and their associated readmission rates were as follows: reduction mammaplasty, 1.30%; mastopexy, 0.31%; liposuction, 1.13%; abdominoplasty, 1.78%; and breast augmentation, 1.20%. Multivariate regression analysis demonstrated that operating time (in hours) was an independent predictor of readmission (odds ratio, 1.40; 95% confidence interval, 1.08-1.81; P=.010). Rates of unplanned readmission with outpatient cosmetic surgery are low and compare favorably to those of other outpatient surgeries.

  1. Critical Factors Analysis for Offshore Software Development Success by Structural Equation Modeling

    NASA Astrophysics Data System (ADS)

    Wada, Yoshihisa; Tsuji, Hiroshi

    In order to analyze the success/failure factors in offshore software development services by structural equation modeling, this paper proposes to follow two approaches together: domain-knowledge-based heuristic analysis and factor-analysis-based rational analysis. The former serves to generate and verify hypotheses for finding factors and causalities. The latter serves to verify factors introduced by theory, building the model without heuristics. Applying the proposed combined approaches to questionnaire responses from skilled project managers, this paper finds that vendor properties have a stronger causal influence on success than software properties and project properties.

  2. How does a three-dimensional continuum muscle model affect the kinematics and muscle strains of a finite element neck model compared to a discrete muscle model in rear-end, frontal, and lateral impacts.

    PubMed

    Hedenstierna, Sofia; Halldin, Peter

    2008-04-15

    A finite element (FE) model of the human neck with incorporated continuum or discrete muscles was used to simulate experimental impacts in rear, frontal, and lateral directions. The aim of this study was to determine how a continuum muscle model influences the impact behavior of a FE human neck model compared with a discrete muscle model. Most FE neck models used for impact analysis today include a spring-element musculature and are limited to discrete geometries and nodal output results. A solid-element muscle model was expected to improve the behavior of the model by adding properties such as tissue inertia and compressive stiffness and by improving the geometry. It would also predict the strain distribution within the continuum elements. A passive continuum muscle model with nonlinear viscoelastic materials was incorporated into the KTH neck model together with active spring muscles and used in impact simulations. The resulting head and vertebral kinematics were compared with the results from a discrete muscle model as well as with volunteer corridors. The muscle strain prediction was compared between the 2 muscle models. The head and vertebral kinematics were within the volunteer corridors for both models when activated. The continuum model behaved more stiffly than the discrete model and needed less active force to fit the experimental results. The largest difference was seen in the rear impact. The strain predicted by the continuum model was lower than for the discrete model. The continuum muscle model stiffened the response of the KTH neck model compared with a discrete model, and the strain prediction in the muscles was improved.

  3. Comparative analysis of magnetic resonance in the polaron pair recombination and the triplet exciton-polaron quenching models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhitaryan, V. V.; Danilovic, D.; Hippola, C.

    We present a comparative theoretical study of magnetic resonance within the polaron pair recombination (PPR) and the triplet exciton-polaron quenching (TPQ) models. Both models have been invoked to interpret the photoluminescence detected magnetic resonance (PLDMR) results in π-conjugated materials and devices. We show that resonance line shapes calculated within the two models differ dramatically in several regards. First, in the PPR model, the line shape exhibits unusual behavior upon increasing the microwave power: it evolves from fully positive at weak power to fully negative at strong power. In contrast, in the TPQ model, the PLDMR is completely positive, showing a monotonic saturation. Second, the two models predict different dependencies of the resonance signal on the photoexcitation power, P_L. At low P_L, the resonance amplitude ΔI/I is ∝ P_L within the PPR model, while it is ∝ P_L^2, crossing over to P_L^3, within the TPQ model. On the physical level, the differences stem from different underlying spin dynamics. Most prominently, a negative resonance within the PPR model has its origin in the microwave-induced spin-Dicke effect, leading to the resonant quenching of photoluminescence. The spin-Dicke effect results from the spin-selective recombination, leading to a highly correlated precession of the on-resonance pair partners under the strong microwave power. This effect is not relevant for the TPQ mechanism, where the strong zero-field splitting renders the majority of triplets off resonance. On the technical level, the analytical evaluation of the line shapes for the two models is enabled by the fact that these shapes can be expressed via the eigenvalues of a complex Hamiltonian. This bypasses the necessity of solving the much larger complex linear system of the stochastic Liouville equations. Lastly, our findings pave the way towards a reliable discrimination between the two mechanisms via cw PLDMR.

  4. Comparative analysis of magnetic resonance in the polaron pair recombination and the triplet exciton-polaron quenching models

    DOE PAGES

    Mkhitaryan, V. V.; Danilovic, D.; Hippola, C.; ...

    2018-01-03

    We present a comparative theoretical study of magnetic resonance within the polaron pair recombination (PPR) and the triplet exciton-polaron quenching (TPQ) models. Both models have been invoked to interpret the photoluminescence detected magnetic resonance (PLDMR) results in π-conjugated materials and devices. We show that resonance line shapes calculated within the two models differ dramatically in several regards. First, in the PPR model, the line shape exhibits unusual behavior upon increasing the microwave power: it evolves from fully positive at weak power to fully negative at strong power. In contrast, in the TPQ model, the PLDMR is completely positive, showing a monotonic saturation. Second, the two models predict different dependencies of the resonance signal on the photoexcitation power, P_L. At low P_L, the resonance amplitude ΔI/I is ∝ P_L within the PPR model, while it is ∝ P_L^2, crossing over to P_L^3, within the TPQ model. On the physical level, the differences stem from different underlying spin dynamics. Most prominently, a negative resonance within the PPR model has its origin in the microwave-induced spin-Dicke effect, leading to the resonant quenching of photoluminescence. The spin-Dicke effect results from the spin-selective recombination, leading to a highly correlated precession of the on-resonance pair partners under the strong microwave power. This effect is not relevant for the TPQ mechanism, where the strong zero-field splitting renders the majority of triplets off resonance. On the technical level, the analytical evaluation of the line shapes for the two models is enabled by the fact that these shapes can be expressed via the eigenvalues of a complex Hamiltonian. This bypasses the necessity of solving the much larger complex linear system of the stochastic Liouville equations. Lastly, our findings pave the way towards a reliable discrimination between the two mechanisms via cw PLDMR.

  5. Comparative analysis of magnetic resonance in the polaron pair recombination and the triplet exciton-polaron quenching models

    NASA Astrophysics Data System (ADS)

    Mkhitaryan, V. V.; Danilović, D.; Hippola, C.; Raikh, M. E.; Shinar, J.

    2018-01-01

    We present a comparative theoretical study of magnetic resonance within the polaron pair recombination (PPR) and the triplet exciton-polaron quenching (TPQ) models. Both models have been invoked to interpret the photoluminescence detected magnetic resonance (PLDMR) results in π-conjugated materials and devices. We show that resonance line shapes calculated within the two models differ dramatically in several regards. First, in the PPR model, the line shape exhibits unusual behavior upon increasing the microwave power: it evolves from fully positive at weak power to fully negative at strong power. In contrast, in the TPQ model, the PLDMR is completely positive, showing a monotonic saturation. Second, the two models predict different dependencies of the resonance signal on the photoexcitation power, P_L. At low P_L, the resonance amplitude ΔI/I is ∝ P_L within the PPR model, while it is ∝ P_L^2, crossing over to P_L^3, within the TPQ model. On the physical level, the differences stem from different underlying spin dynamics. Most prominently, a negative resonance within the PPR model has its origin in the microwave-induced spin-Dicke effect, leading to the resonant quenching of photoluminescence. The spin-Dicke effect results from the spin-selective recombination, leading to a highly correlated precession of the on-resonance pair partners under the strong microwave power. This effect is not relevant for the TPQ mechanism, where the strong zero-field splitting renders the majority of triplets off resonance. On the technical level, the analytical evaluation of the line shapes for the two models is enabled by the fact that these shapes can be expressed via the eigenvalues of a complex Hamiltonian. This bypasses the necessity of solving the much larger complex linear system of the stochastic Liouville equations. Our findings pave the way towards a reliable discrimination between the two mechanisms via cw PLDMR.

  6. Comparative analysis on reproducibility among 5 intraoral scanners: sectional analysis according to restoration type and preparation outline form

    PubMed Central

    2016-01-01

    PURPOSE The trueness and precision of the images acquired by intraoral digital scanners can be influenced by restoration type, preparation outline form, scanning technology and the application of powder. The aim of this study is to perform a comparative evaluation of the 3-dimensional reproducibility of intraoral scanners (IOSs). MATERIALS AND METHODS A phantom containing five prepared teeth was scanned by the reference scanner (Dental Wings) and 5 test IOSs (E4D dentist, Fastscan, iTero, Trios and Zfx Intrascan). The acquired images of the scanner groups were compared with the image from the reference scanner (trueness) and within each scanner group (precision). Statistical analysis was performed using the independent two-sample t-test and analysis of variance (α=.05). RESULTS The average deviations in trueness and precision of Fastscan, iTero and Trios were significantly lower than those of the other scanners. According to restoration type, significantly higher trueness was observed in crown and inlay preparations than in bridge preparations. However, no significant difference was observed among the four sites of the preparation outline form. When compared by IOS characteristics, higher trueness was observed in the group adopting active triangulation and using powder. However, there was no significant difference between the still-image acquisition and video acquisition groups. CONCLUSION Fastscan, iTero and Trios, but not the other two intraoral scanners, displayed comparable levels of trueness and precision in the tested phantom model. Differences in trueness were observed depending on the restoration type, the preparation outline form and the characteristics of the IOS, which should be taken into consideration when intraoral scanning data are utilized. PMID:27826385

  7. Twenty-five-gauge vitrectomy versus 23-gauge vitrectomy in the management of macular diseases: a comparative analysis through a Health Technology Assessment model.

    PubMed

    Grosso, Andrea; Charrier, Lorena; Lovato, Emanuela; Panico, Claudio; Mariotti, Cesare; Dapavo, Giancarlo; Chiuminatto, Roberto; Siliquini, Roberta; Gianino, Maria Michela

    2014-04-01

    Small-gauge vitreoretinal techniques have been shown to be safe and effective in the management of a wide spectrum of vitreoretinal diseases. However, the costs of the new technologies may represent a critical issue for national health systems. The aim of the study is to perform a Health Technology Assessment (HTA) through a comparative analysis of the 23- and 25-gauge techniques in the management of macular diseases (epiretinal membranes, macular holes, vitreo-macular traction syndrome). In this prospective study, 45-80-year-old patients undergoing vitrectomy surgery for macular disease were enrolled at the Torino Eye Hospital. In the HTA model we assessed the safety, clinical effectiveness, and costs of 23-gauge compared with 25-gauge vitrectomies. Fifty patients entered the study; 14 patients underwent 23-gauge vitrectomy and 36 underwent 25-gauge vitrectomy. There was no statistically significant difference in post-operative visual acuity at 1 year between the two groups. No cases of retinal detachment or endophthalmitis were registered at 1-year follow-up. The 23-gauge technique was slightly more expensive than the 25-gauge: the total surgical costs were EUR1217.70 versus EUR1164.84 (p = 0.351). We provide a financial comparison between new vitreoretinal procedures recently introduced in the market and reimbursed by the Italian National Health System, and we also aim to stimulate a critical debate about the expensive technocratic model of medicine.

  8. The Constant Comparative Analysis Method Outside of Grounded Theory

    ERIC Educational Resources Information Center

    Fram, Sheila M.

    2013-01-01

    This commentary addresses the gap in the literature regarding discussion of the legitimate use of Constant Comparative Analysis Method (CCA) outside of Grounded Theory. The purpose is to show the strength of using CCA to maintain the emic perspective and how theoretical frameworks can maintain the etic perspective throughout the analysis. My…

  9. Bayesian Poisson hierarchical models for crash data analysis: Investigating the impact of model choice on site-specific predictions.

    PubMed

    Khazraee, S Hadi; Johnson, Valen; Lord, Dominique

    2018-08-01

    The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., that an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family of models: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in order, indicating a clear link between the models' predictions and the shape of their mixing distributions (i.e., gamma, lognormal, and inverse gamma, respectively). The thicker tail of the PIGam and PLN models (in order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models; models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients
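
    The role of the mixing distribution can be sketched by matching the first two moments of gamma, lognormal, and inverse-gamma site effects and comparing the upper tails of the resulting Poisson counts. In the Python illustration below, the moment values and tail cutoff are arbitrary assumptions; the point is the tail ordering (PIGam above PLN above PG) that the abstract describes.

      import numpy as np

      rng = np.random.default_rng(5)
      mu, var, n = 2.0, 8.0, 200_000     # mixing mean/variance (illustrative)

      # Gamma: mean k*theta, variance k*theta^2.
      k, theta = mu ** 2 / var, var / mu
      g = rng.gamma(k, theta, n)

      # Lognormal: solve for (m, s) matching the same mean and variance.
      s2 = np.log(1 + var / mu ** 2)
      m = np.log(mu) - s2 / 2
      ln = rng.lognormal(m, np.sqrt(s2), n)

      # Inverse-gamma: mean b/(a-1), variance b^2/((a-1)^2 (a-2)).
      a = mu ** 2 / var + 2
      b = mu * (a - 1)
      ig = b / rng.gamma(a, 1.0, n)      # 1/Gamma(a, 1/b) draw

      for name, lam in [("PG", g), ("PLN", ln), ("PIGam", ig)]:
          y = rng.poisson(lam)           # site-level crash counts
          print(f"{name}: mean={y.mean():.2f}  "
                f"P(Y >= 15)={np.mean(y >= 15):.4f}")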

  10. Comparative study between 2 methods of mounting models in semiadjustable articulator for orthognathic surgery.

    PubMed

    Mayrink, Gabriela; Sawazaki, Renato; Asprino, Luciana; de Moraes, Márcio; Fernandes Moreira, Roger William

    2011-11-01

    Compare the traditional method of mounting dental casts on a semiadjustable articulator and the new method suggested by Wolford and Galiano, analyzing the inclination of the maxillary occlusal plane in relation to the FHP. Two casts of 10 patients were obtained. One was used for mounting models on a traditional articulator using a face-bow transfer system, and the other was used for mounting models on an Occlusal Plane Indicator platform (OPI) using the SAM articulator. After that, an analysis of the accuracy of mounting models was performed. The angle made by the occlusal plane and the FHP on the cephalogram should equal the angle between the occlusal plane and the upper member of the articulator. The measures were tabulated in Microsoft Excel(®) and analyzed using a 1-way analysis of variance. Statistically, the results did not reveal significant differences among the measures. OPI and face bow present similar results, but more studies are needed to verify their accuracy relative to the maxillary cant in OPI or to develop new techniques able to solve the disadvantages of each technique. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  11. Discrete time modeling and stability analysis of TCP Vegas

    NASA Astrophysics Data System (ADS)

    You, Byungyong; Koo, Kyungmo; Lee, Jin S.

    2007-12-01

    This paper presents an analysis method for a TCP Vegas network model with a single link and a single source. Some papers have shown global stability of several network models, but those models are not dual problems in which dynamics exist in both sources and links, as in TCP Vegas. Other papers have studied TCP Vegas as a dual problem but did not fully derive an asymptotic stability region. We therefore analyze TCP Vegas with Jury's criterion, which is a necessary and sufficient condition. Using a discrete-time state-space model and Jury's criterion, we find an asymptotic stability region for the TCP Vegas network model. This result is verified by ns-2 simulation, and comparison with other results shows that our method performs well.

  12. NTCP modelling of lung toxicity after SBRT comparing the universal survival curve and the linear quadratic model for fractionation correction.

    PubMed

    Wennberg, Berit M; Baumann, Pia; Gagliardi, Giovanna; Nyman, Jan; Drugge, Ninni; Hoyer, Morten; Traberg, Anders; Nilsson, Kristina; Morhed, Elisabeth; Ekberg, Lars; Wittgren, Lena; Lund, Jo-Åsmund; Levin, Nina; Sederholm, Christer; Lewensohn, Rolf; Lax, Ingmar

    2011-05-01

    In SBRT of lung tumours no established relationship between dose-volume parameters and the incidence of lung toxicity has been found. The aim of this study is to compare the LQ model and the universal survival curve (USC) for calculating biologically equivalent doses in SBRT, to see if this improves knowledge of this relationship. Toxicity data on radiation pneumonitis grade 2 or more (RP2+) from 57 patients were used; 10.5% were diagnosed with RP2+. The lung DVHs were corrected for fractionation (LQ and USC) and analysed with the Lyman-Kutcher-Burman (LKB) model. In the LQ correction α/β = 3 Gy was used, and the USC parameters used were: α/β = 3 Gy, D(0) = 1.0 Gy, [Formula: see text] = 10, α = 0.206 Gy(-1) and d(T) = 5.8 Gy. In order to understand the relative contribution of different dose levels to the calculated NTCP, the concept of fractional NTCP was used. This may give insight into the question of whether "high doses to small volumes" or "low doses to large volumes" are most important for lung toxicity. NTCP analysis with the LKB model using parameters m = 0.4, D(50) = 30 Gy resulted in a volume-dependence parameter (n) of 0.87 with LQ correction and 0.71 with USC correction. Using parameters m = 0.3, D(50) = 20 Gy, n = 0.93 with LQ correction and n = 0.83 with USC correction. In SBRT of lung tumours, NTCP modelling of lung toxicity comparing models (LQ, USC) for fractionation correction shows that low doses contribute less and high doses more to the NTCP when using the USC model. Comparing NTCP modelling of SBRT data with data from breast cancer, lung cancer and whole lung irradiation implies that the response of the lung is treatment specific. More data are, however, needed for more reliable modelling.
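
    For reference, the LKB model and the LQ fractionation correction mentioned above have standard textbook forms; only the parameter values quoted in the abstract come from the study, and the USC itself (a stitched LQ/linear survival curve with transition dose d(T)) is not reproduced here:

      % LQ correction of a DVH bin with total dose D_i in fractions of size d_i,
      % normalized to 2 Gy fractions (alpha/beta = 3 Gy in the study):
      \[
        D_{\mathrm{LQED2},i} = D_i\,\frac{d_i + \alpha/\beta}{2\,\mathrm{Gy} + \alpha/\beta}
      \]
      % Lyman-Kutcher-Burman NTCP with generalized-EUD volume reduction,
      % where n is the volume-effect parameter fitted in the study:
      \[
        \mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2}\,dx,
        \qquad
        t = \frac{D_{\mathrm{eff}} - D_{50}}{m\,D_{50}},
        \qquad
        D_{\mathrm{eff}} = \Big(\sum_i v_i D_i^{1/n}\Big)^{n}
      \]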

  13. Modeling Intracochlear Magnetic Stimulation: A Finite-Element Analysis.

    PubMed

    Mukesh, S; Blake, D T; McKinnon, B J; Bhatti, P T

    2017-08-01

    This study models induced electric fields, and their gradient, produced by pulsatile current stimulation of submillimeter inductors for cochlear implantation. Using finite-element analysis, the lower chamber of the cochlea, scala tympani, is modeled as a cylindrical structure filled with perilymph bounded by tissue, bone, and cochlear neural elements. Single inductors as well as an array of inductors are modeled. The coil strength (~100 nH) and excitation parameters (peak current of 1-5 A, voltages of 16-20 V) are based on a formative feasibility study conducted by our group. In that study, intracochlear micromagnetic stimulation achieved auditory activation as measured through the auditory brainstem response in a feline model. With respect to the finite element simulations, axial symmetry of the inductor geometry is exploited to improve computation time. It is verified that the inductor coil orientation greatly affects the strength of the induced electric field and thereby the ability to affect the transmembrane potential of nearby neural elements. Furthermore, upon comparing an array of micro-inductors with a typical multi-site electrode array, magnetically excited arrays retain greater focus in terms of the gradient of induced electric fields. Once combined with in vivo analysis, this modeling study may enable further exploration of the mechanism of magnetically induced and focused neural stimulation.

  14. “Plateau”-related summary statistics are uninformative for comparing working memory models

    PubMed Central

    van den Berg, Ronald; Ma, Wei Ji

    2014-01-01

    Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon. Zhang and Luck (2008) and Anderson, Vogel, and Awh (2011) noticed that as more items need to be remembered, “memory noise” seems to first increase and then reach a “stable plateau.” They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided, at most, 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. At realistic numbers of trials, plateau-related summary statistics are completely unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (2011), we found that the evidence in the summary statistics was, at most, 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. These findings call into question claims about working memory that are based on summary statistics. PMID:24719235

  15. Petro-elastic modelling and characterization of solid-filled reservoirs: Comparative analysis on a Triassic North Sea reservoir

    NASA Astrophysics Data System (ADS)

    Auduson, Aaron E.

    2018-07-01

    One of the most common problems in the North Sea is the occurrence of salt (solid) in the pores of Triassic sandstones. Many wells have failed due to interpretation errors based on conventional fluid substitution as described by the Gassmann equation. A way forward is to devise a means to model and characterize the salt-plugging scenarios. Modelling the effects of fluids and solids on rock velocity and density will ascertain the influence of pore material types on seismic data. In this study, two different rock physics modelling approaches are adopted for solid-fluid substitution, namely the extended Gassmann theory and multi-mineral mixing modelling. Using the modified new Gassmann equation, solid-and-fluid substitutions were performed from gas- or water-filled hydrocarbon reservoirs to salt as the pore-filling material. Inverse substitutions were also performed from the salt-filled case to gas- and water-filled scenarios. The modelling gives very consistent results: salt-plugged wells clearly show different elastic parameters when compared with gas- and water-bearing wells. While the Gassmann equation-based modelling was used to discretely compute effective bulk and shear moduli of the salt plugs, the algorithm based on mineral mixing (Hashin-Shtrikman) can only predict elastic moduli within a narrow range. Thus, inasmuch as both of these methods can be used to model elastic parameters and characterize pore-fill scenarios, the new Gassmann-based algorithm, which is capable of precisely predicting the elastic parameters, is recommended for use in forward seismic modelling and characterization of this reservoir and other reservoir types. This will significantly help in reducing seismic interpretation errors.
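
    For context, the conventional workflow the abstract says breaks down for salt-plugged pores is classical Gassmann fluid substitution. A minimal sketch (Python; the mineral, fluid, and dry-rock moduli are invented round numbers, and the paper's extended solid-substitution equation is not reproduced):

      # Sketch of classical Gassmann fluid substitution (the conventional
      # interpretation workflow the abstract contrasts with); moduli in GPa.
      def gassmann_ksat(k_dry, k_min, k_fl, phi):
          """Saturated bulk modulus from dry-rock, mineral and fluid moduli."""
          num = (1.0 - k_dry / k_min) ** 2
          den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
          return k_dry + num / den

      # Hypothetical values: quartz-dominated sandstone, 20% porosity, brine fill.
      k_sat = gassmann_ksat(k_dry=12.0, k_min=37.0, k_fl=2.5, phi=0.20)
      print(f"K_sat = {k_sat:.2f} GPa")  # shear modulus is unchanged by the fluid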

  16. Comparing Realistic Subthalamic Nucleus Neuron Models

    NASA Astrophysics Data System (ADS)

    Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.

    2011-06-01

    The mechanism of action of clinically effective electrical high frequency stimulation is still under debate. However, recent evidence points at the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman and exhibits a wide variety of firing patterns: silent, low spiking, moderate spiking and intense spiking activity. We observed that most of the cells in our network turn to silent mode when we increase the GABAA input conductance above a threshold of 3.75 mS/cm2. On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with those of the modified model at different conductance levels, we apply four different (dis)similarity measures between them. We observe that the Mahalanobis distance, the Victor-Purpura metric, and the Interspike Interval distribution are sensitive to different firing regimes, whereas Mutual Information seems unable to discriminate these functional changes.
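
    Of the four (dis)similarity measures named, the Victor-Purpura metric has a particularly compact dynamic-programming form. A sketch under the standard definition (Python; the spike times and cost parameter q are invented):

      # Victor-Purpura spike-train metric: an edit distance where inserting or
      # deleting a spike costs 1 and moving a spike by dt costs q*|dt|
      # (q sets the temporal precision of the comparison).
      def victor_purpura(t1, t2, q=1.0):
          n, m = len(t1), len(t2)
          G = [[0.0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              G[i][0] = i
          for j in range(1, m + 1):
              G[0][j] = j
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  G[i][j] = min(G[i - 1][j] + 1,            # delete a spike
                                G[i][j - 1] + 1,            # insert a spike
                                G[i - 1][j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
          return G[n][m]

      # Two hypothetical spike trains (seconds); small q tolerates timing jitter.
      print(victor_purpura([0.1, 0.5, 0.9], [0.12, 0.55, 1.4], q=2.0))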

  17. Comparing the costs of three prostate cancer follow-up strategies: a cost minimisation analysis.

    PubMed

    Pearce, Alison M; Ryan, Fay; Drummond, Frances J; Thomas, Audrey Alforque; Timmons, Aileen; Sharp, Linda

    2016-02-01

    Prostate cancer follow-up is traditionally provided by clinicians in a hospital setting. Growing numbers of prostate cancer survivors mean that this model of care may not be economically sustainable, and a number of alternative approaches have been suggested. The aim of this study was to develop an economic model to compare the costs of three alternative strategies for prostate cancer follow-up in Ireland: the European Association of Urology (EAU) guidelines, the National Institute for Health and Care Excellence (NICE) guidelines and current practice. A cost minimisation analysis was performed using a Markov model with three arms (EAU guidelines, NICE guidelines and current practice) comparing follow-up for men with prostate cancer treated with curative intent. The model took a health care payer's perspective over a 10-year time horizon. Current practice was the least cost-efficient arm of the model, the NICE guidelines were the most cost-efficient (74 % of current practice costs) and the EAU guidelines intermediate (92 % of current practice costs). For the 2562 new cases of prostate cancer diagnosed in 2009, the Irish health care system could have saved €760,000 over a 10-year period if the NICE guidelines had been adopted. This is the first study investigating the costs of prostate cancer follow-up in the Irish setting. While economic models are designed as a simplification of complex real-world situations, these results suggest potential for significant savings within the Irish health care system associated with implementation of alternative models of prostate cancer follow-up care.
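
    The mechanics of such a Markov cost-minimisation model are straightforward to sketch. The following Python toy uses three states and a 10-year discounted horizon; every transition probability and cost below is hypothetical and not taken from the study:

      # Minimal Markov cohort sketch of a follow-up cost comparison, in the
      # spirit of the abstract; states, probabilities and costs are invented.
      import numpy as np

      states = ["follow-up", "recurrence", "dead"]
      P = np.array([[0.95, 0.03, 0.02],    # annual transition probabilities
                    [0.00, 0.90, 0.10],
                    [0.00, 0.00, 1.00]])
      annual_cost = {"current": [350.0, 2000.0, 0.0],  # cost per state, per arm
                     "nice":    [260.0, 2000.0, 0.0]}
      discount = 0.04

      for arm, costs in annual_cost.items():
          cohort = np.array([1.0, 0.0, 0.0])           # all start in follow-up
          total = 0.0
          for year in range(10):                       # 10-year horizon
              total += cohort @ np.array(costs) / (1 + discount) ** year
              cohort = cohort @ P                      # advance one cycle
          print(f"{arm}: expected discounted 10-year cost per patient = {total:.0f}")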

  18. Comparative bacterial degradation and detoxification of model and kraft lignin from pulp paper wastewater and its metabolites

    NASA Astrophysics Data System (ADS)

    Abhishek, Amar; Dwivedi, Ashish; Tandan, Neeraj; Kumar, Urwashi

    2017-05-01

    Continuous discharge of lignin-containing colored wastewater from pulp paper mills into the environment has resulted in a build-up of high lignin levels in various aquatic systems. In this study, the chemical texture of kraft lignin in terms of pollution parameters (COD, TOC, BOD, etc.) was quite different and approximately twofold higher compared to model lignin at the same optical density (OD 3.7 at 465 nm) and lignin content (2000 mg/L). For comparative bacterial degradation and detoxification of model and kraft lignin, two bacteria, Citrobacter freundii and Serratia marcescens, were isolated, screened and applied in axenic and mixed conditions. The mixed bacterial culture was found to decolorize 87 and 70 % of model and kraft lignin (2000 mg/L), respectively, whereas the axenic cultures of Citrobacter freundii and Serratia marcescens decolorized 64 and 60 % of model and 50 and 55 % of kraft lignin, respectively, under optimized conditions (34 °C, pH 8.2, 140 rpm). In addition, the mixed bacterial culture also showed removal of 76 and 61 % TOC, 80 and 67 % COD, and 87 and 65 % lignin from model and kraft lignin, respectively. High pollution parameters (TOC, COD, BOD, sulphate) and toxic chemicals slow down the degradation of kraft lignin as compared to model lignin. The comparative GC-MS analysis suggested interspecies collaboration, i.e., each bacterial strain in the culture medium has a cumulative enhancing effect on growth and degradation of lignin rather than inhibition. Furthermore, toxicity evaluation on a human keratinocyte cell line after bacterial treatment supported the degradation and detoxification of model and kraft lignin.

  19. Predicting crash frequency for multi-vehicle collision types using multivariate Poisson-lognormal spatial model: A comparative analysis.

    PubMed

    Hosseinpour, Mehdi; Sahebi, Sina; Zamzuri, Zamira Hasanah; Yahaya, Ahmad Shukri; Ismail, Noriszura

    2018-06-01

    According to crash configuration and pre-crash conditions, traffic crashes are classified into different collision types. Based on the literature, multi-vehicle crashes, such as head-on, rear-end, and angle crashes, are more frequent than single-vehicle crashes, and most often result in serious consequences. From a methodological point of view, the majority of prior studies focused on multi-vehicle collisions have employed univariate count models to estimate crash counts separately by collision type. However, univariate models fail to account for correlations which may exist between different collision types. Among others, the multivariate Poisson-lognormal (MVPLN) model with spatial correlation is a promising multivariate specification because it not only allows for unobserved heterogeneity (extra-Poisson variation) and dependencies between collision types, but also for spatial correlation between adjacent sites. However, the MVPLN spatial model has rarely been applied in previous research for simultaneously modelling crash counts by collision type. Therefore, this study aims at utilizing a MVPLN spatial model to estimate crash counts for four different multi-vehicle collision types: head-on, rear-end, angle, and sideswipe collisions. To investigate the performance of the MVPLN spatial model, a two-stage model and a univariate Poisson-lognormal (UNPLN) spatial model were also developed in this study. Detailed information on roadway characteristics, traffic volume, and crash history was collected on 407 homogeneous segments of Malaysian federal roads. The results indicate that the MVPLN spatial model outperforms the other competing models in terms of goodness-of-fit measures. The results also show that the inclusion of spatial heterogeneity in the multivariate model significantly improves the model fit, as indicated by the Deviance Information Criterion (DIC). The correlation between crash types is high and positive, implying that the occurrence of a
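
    In generic notation (not the authors' exact specification), the MVPLN spatial model for collision type k on segment i can be written as:

      % Generic form of a multivariate Poisson-lognormal spatial model for
      % collision type k on segment i (not the authors' exact specification):
      \[
        y_{ik} \sim \mathrm{Poisson}(\lambda_{ik}), \qquad
        \ln \lambda_{ik} = \mathbf{x}_i^{\top}\boldsymbol{\beta}_k
                           + \varepsilon_{ik} + \phi_{ik}
      \]
      \[
        \boldsymbol{\varepsilon}_i \sim \mathrm{MVN}(\mathbf{0}, \boldsymbol{\Sigma})
      \]
      % Sigma captures the correlation between collision types, while the
      % phi_{ik} follow a conditional autoregressive (CAR) prior over adjacent
      % segments to capture spatial correlation.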

  20. A new method for comparing rankings through complex networks: Model and analysis of competitiveness of major European soccer leagues

    NASA Astrophysics Data System (ADS)

    Criado, Regino; García, Esther; Pedroche, Francisco; Romance, Miguel

    2013-12-01

    In this paper, we show a new technique to analyze families of rankings. In particular, we focus on sports rankings and, more precisely, on soccer leagues. We consider that two teams compete when they change their relative positions in consecutive rankings. This allows us to define a graph by linking teams that compete. We show how to use some structural properties of this competitivity graph to measure to what extent the teams in a league compete. These structural properties are the mean degree, the mean strength, and the clustering coefficient. We give a generalization of Kendall's correlation coefficient to more than two rankings. We also show how to make a dynamic analysis of a league and how to compare different leagues. We apply this technique to analyze the four major European soccer leagues: Bundesliga, Italian Lega, Spanish Liga, and Premier League. We compare our results with the classical analysis of sport rankings based on measures of competitive balance.
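
    The construction of the competitivity graph can be sketched directly from the definition above: link two teams whenever their relative order flips between consecutive rankings. A toy Python example with invented teams and rankings (edge weights count repeated swaps):

      # Build a competitivity graph from consecutive rankings and report the
      # three structural properties named in the abstract.
      import itertools
      import networkx as nx

      rankings = [["A", "B", "C", "D"],   # matchday t   (position 1 first)
                  ["B", "A", "C", "D"],   # matchday t+1
                  ["B", "C", "A", "D"]]   # matchday t+2

      G = nx.Graph()
      G.add_nodes_from(rankings[0])
      for prev, curr in zip(rankings, rankings[1:]):
          for u, v in itertools.combinations(prev, 2):
              # teams "compete" if their relative order flipped between rankings
              swapped = (prev.index(u) - prev.index(v)) * (curr.index(u) - curr.index(v)) < 0
              if swapped:
                  w = G.get_edge_data(u, v, {"weight": 0})["weight"]
                  G.add_edge(u, v, weight=w + 1)

      n = G.number_of_nodes()
      print("mean degree:", sum(d for _, d in G.degree()) / n)
      print("mean strength:", sum(d for _, d in G.degree(weight="weight")) / n)
      print("clustering coefficient:", nx.average_clustering(G))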

  1. Analyzing Multiple-Choice Questions by Model Analysis and Item Response Curves

    NASA Astrophysics Data System (ADS)

    Wattanakasiwich, P.; Ananta, S.

    2010-07-01

    In physics education research, the main goal is to improve physics teaching so that most students understand physics conceptually and are able to apply concepts in solving problems. Many multiple-choice instruments have therefore been developed to probe students' conceptual understanding of various topics. Two techniques, model analysis and item response curves, were used to analyze students' responses to the Force and Motion Conceptual Evaluation (FMCE). For this study, FMCE data from more than 1000 students at Chiang Mai University were collected over the past three years. With model analysis, we can obtain students' alternative knowledge and the probabilities for students to use such knowledge in a range of equivalent contexts. Model analysis consists of two algorithms, concentration factor and model estimation. This paper only presents results from using the model estimation algorithm to obtain a model plot. The plot helps to identify whether a class model state is in the misconception region or not. An item response curve (IRC), derived from item response theory, plots the percentage of students selecting a particular choice against their total score. Pros and cons of both techniques are compared and discussed.
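
    An IRC is simple to compute from raw responses: bin students by total score and, within each bin, tabulate the fraction choosing each option. A sketch with randomly generated stand-in data (Python; not actual FMCE responses):

      # Item response curve for one multiple-choice item: for each total-score
      # value, the fraction of students choosing each option.
      import numpy as np

      rng = np.random.default_rng(1)
      n_students, n_choices = 1000, 5
      scores = rng.integers(0, 34, size=n_students)          # mock total scores
      choices = rng.integers(0, n_choices, size=n_students)  # option picked

      for s in np.unique(scores):
          mask = scores == s
          fractions = np.bincount(choices[mask], minlength=n_choices) / mask.sum()
          print(s, np.round(fractions, 2))  # one curve per option, read column-wise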

  2. Informing policy makers about future health spending: a comparative analysis of forecasting methods in OECD countries.

    PubMed

    Astolfi, Roberto; Lorenzoni, Luca; Oderkirk, Jillian

    2012-09-01

    Concerns about health care expenditure growth and its long-term sustainability have risen to the top of the policy agenda in many OECD countries. As continued growth in spending places pressure on government budgets, health services provision and patients' personal finances, policy makers have launched forecasting projects to support policy planning. This comparative analysis reviewed 25 models that were developed for policy analysis in OECD countries by governments, research agencies, academics and international organisations. We observed that the policy questions that need to be addressed drive the choice of forecasting model and the model's specification. By considering both the level of aggregation of the units analysed and the level of detail of health expenditure to be projected, we identified three classes of models: micro, component-based, and macro. Virtually all models account for demographic shifts in the population, while the two least understood influences on health expenditure growth are technological innovation and health-seeking behaviour. The landscape for health forecasting models is dynamic and evolving. Advances in computing technology and increases in data granularity are opening up new possibilities for the generation of systems of models which become an ongoing decision-support tool capable of adapting to new questions as they arise. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  3. A Hierarchical Multi-Model Approach for Uncertainty Segregation, Prioritization and Comparative Evaluation of Competing Modeling Propositions

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Elshall, A. S.; Hanor, J. S.

    2012-12-01

    Subsurface modeling is challenging because of the many possible competing propositions for each uncertain model component. How can we judge that we are selecting the correct proposition for an uncertain model component out of numerous competing propositions? How can we bridge the gap between synthetic mental constructs such as mathematical expressions on the one hand, and empirical observations such as field data on the other, when uncertainty exists on both sides? In this study, we introduce hierarchical Bayesian model averaging (HBMA) as a multi-model (multi-proposition) framework to represent our current state of knowledge and decision for hydrogeological structure modeling. The HBMA framework allows for segregating and prioritizing different sources of uncertainty, and for comparative evaluation of competing propositions for each source of uncertainty. We applied HBMA to a study of hydrostratigraphy and uncertainty propagation of the Southern Hills aquifer system in the Baton Rouge area, Louisiana. We used geophysical data for hydrogeological structure construction through the indicator hydrostratigraphy method and used lithologic data from drillers' logs for model structure calibration. However, due to uncertainty in model data, structure and parameters, multiple possible hydrostratigraphic models were produced and calibrated. The study considered four sources of uncertainty. To evaluate mathematical structure uncertainty, the study considered three different variogram models and two geological stationarity assumptions. With respect to geological structure uncertainty, the study considered two geological structures for the Denham Springs-Scotlandville fault. With respect to data uncertainty, the study considered two calibration data sets. These four sources of uncertainty with their corresponding competing modeling propositions resulted in 24 calibrated models. The results showed that by segregating different sources of uncertainty, HBMA analysis
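
    The averaging that underlies HBMA is ordinary Bayesian model averaging applied level by level. In generic notation (not the authors' exact formulation):

      % Bayesian model averaging over the K calibrated models (here K = 24),
      % each weighted by its posterior model probability given data D:
      \[
        p(\Delta \mid D) = \sum_{k=1}^{K} p(\Delta \mid M_k, D)\, p(M_k \mid D),
        \qquad
        p(M_k \mid D) = \frac{p(D \mid M_k)\, p(M_k)}{\sum_{l} p(D \mid M_l)\, p(M_l)}
      \]
      % HBMA applies this averaging level by level, so predictive variance can
      % be decomposed across the four uncertainty sources (variogram model,
      % stationarity assumption, fault structure, calibration data set).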

  4. Digital Elevation Model from Non-Metric Camera in Uas Compared with LIDAR Technology

    NASA Astrophysics Data System (ADS)

    Dayamit, O. M.; Pedro, M. F.; Ernesto, R. R.; Fernando, B. L.

    2015-08-01

    Digital Elevation Model (DEM) data, as a representation of surface topography, are in high demand for use in spatial analysis and modelling. Many methods of acquiring and processing such data have been developed, from traditional surveying to modern technologies like LIDAR. On the other hand, over the past four years the development of Unmanned Aerial Systems (UAS) for geomatics has brought the possibility of acquiring surface data with an on-board non-metric digital camera in a short time and with good quality for many analyses. UAS data collection has attracted tremendous attention because it enables the determination of volume changes over time, monitoring of breakwaters, and hydrological modelling including flood simulation and drainage networks, among other applications that rely on a DEM for proper analysis. DEM quality is considered a combination of DEM accuracy and DEM suitability, so this paper analyses the quality of a DEM from a non-metric digital camera on a UAS compared with a DEM from LIDAR for the same geographic space, covering 4 km2 in Artemisa province, Cuba. This area is in a frame of urban planning that requires knowledge of the topographic characteristics in order to analyse hydrological behaviour and decide the best places for roads, buildings and so on. Although LIDAR remains the more accurate method, it offers a benchmark for testing DEMs from non-metric digital cameras on UAS, which are much more flexible and provide a solution for many applications that need a detailed DEM.

  5. Data-Flow Based Model Analysis

    NASA Technical Reports Server (NTRS)

    Saad, Christian; Bauer, Bernhard

    2010-01-01

    The concept of (meta) modeling combines an intuitive way of formalizing the structure of an application domain with a high expressiveness that makes it suitable for a wide variety of use cases, and has therefore become an integral part of many areas in computer science. While the definition of modeling languages through the use of meta models, e.g. in the Unified Modeling Language (UML), is a well-understood process, their validation and the extraction of behavioral information is still a challenge. In this paper we present a novel approach for dynamic model analysis along with several fields of application. Examining the propagation of information along the edges and nodes of the model graph allows us to extend and simplify the definition of semantic constraints in comparison to the capabilities offered by e.g. the Object Constraint Language. Performing a flow-based analysis also enables the simulation of dynamic behavior, thus providing an "abstract interpretation"-like analysis method for the modeling domain.

  6. Robustness analysis of a green chemistry-based model for the ...

    EPA Pesticide Factsheets

    This paper proposes a robustness analysis based on Multiple Criteria Decision Aiding (MCDA). The ensuing model was used to assess the implementation of green chemistry principles in the synthesis of silver nanoparticles. Its recommendations were also compared to an earlier developed model for the same purpose to investigate concordance between the models and potential decision-support synergies. A three-phase procedure was adopted to achieve the research objectives. Firstly, an ordinal ranking of the evaluation criteria used to characterize the implementation of green chemistry principles was identified through relative ranking analysis. Secondly, a structured selection process for an MCDA classification method was conducted, which resulted in the identification of Stochastic Multi-Criteria Acceptability Analysis (SMAA). Lastly, the agreement between the classifications of the two MCDA models and the resulting synergistic role of decision recommendations were studied. This comparison showed that the results of the two models agree in between 76% and 93% of the simulation set-ups, and it confirmed that different MCDA models provide a more inclusive and transparent set of recommendations. This integrative research confirmed the beneficial complementary use of MCDA methods to aid responsible development of nanosynthesis, by accounting for multiple objectives and helping communication of complex information in a comprehensive and traceable format, suitable for stakeholders and

  7. Evaluating the Risks of Clinical Research: Direct Comparative Analysis

    PubMed Central

    Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S.; Wendler, David

    2014-01-01

    Objectives: Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed “risks of daily life” standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. Methods: This study employed a conceptual and normative analysis, and use of an illustrative example. Results: Different risks are composed of the same basic elements: Type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the “risks of daily life” standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We thus illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Conclusions: Direct comparative analysis is a systematic method for applying the “risks of daily life” standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about

  8. Image Analysis and Modeling

    DTIC Science & Technology

    1976-03-01

    This report summarizes the results of the research program on Image Analysis and Modeling supported by the Defense Advanced Research Projects Agency. The objective is to achieve a better understanding of image structure and to use this knowledge to develop improved image models for use in image analysis and processing tasks such as information extraction, image enhancement and restoration, and coding. The ultimate objective of this research is

  9. Comparative genomic analysis and phylogenetic position of Theileria equi

    PubMed Central

    2012-01-01

    Background Transmission of arthropod-borne apicomplexan parasites that cause disease and result in death or persistent infection represents a major challenge to global human and animal health. First described in 1901 as Piroplasma equi, this re-emergent apicomplexan parasite was renamed Babesia equi and subsequently Theileria equi, reflecting an uncertain taxonomy. Understanding mechanisms by which apicomplexan parasites evade immune or chemotherapeutic elimination is required for development of effective vaccines or chemotherapeutics. The continued risk of transmission of T. equi from clinically silent, persistently infected equids impedes the goal of returning the U.S. to non-endemic status. Therefore, comparative genomic analysis of T. equi was undertaken to: 1) identify genes contributing to immune evasion and persistence in equid hosts, 2) identify genes involved in PBMC infection biology and 3) define the phylogenetic position of T. equi relative to sequenced apicomplexan parasites. Results The known immunodominant proteins, EMA1, 2 and 3, were discovered to belong to a ten-member gene family with a mean amino acid identity, in pairwise comparisons, of 39%. Importantly, the amino acid diversity of EMAs is distributed throughout the length of the proteins. Eight of the EMA genes were simultaneously transcribed. As the agents that cause bovine theileriosis infect and transform host cell PBMCs, we confirmed that T. equi infects equine PBMCs; however, there is no evidence of host cell transformation. Indeed, a number of genes identified as potential manipulators of the host cell phenotype are absent from the T. equi genome. Comparative genomic analysis of T. equi, using deduced amino acid sequences from 150 genes, revealed its phylogenetic position relative to seven sequenced apicomplexan parasites and placed it as a sister taxon to Theileria spp. Conclusions The EMA family does not fit the paradigm for classical antigenic variation, and we propose a novel model describing the

  10. Religious Education in Russia: A Comparative and Critical Analysis

    ERIC Educational Resources Information Center

    Blinkova, Alexandra; Vermeer, Paul

    2018-01-01

    RE in Russia has been recently introduced as a compulsory regular school subject during the last year of elementary school. The present study offers a critical analysis of the current practice of Russian RE by comparing it with RE in Sweden, Denmark and Britain. This analysis shows that Russian RE is ambivalent. Although it is based on a…

  11. Lithium-ion battery models: a comparative study and a model-based powerline communication

    NASA Astrophysics Data System (ADS)

    Saidani, Fida; Hutter, Franz X.; Scurtu, Rares-George; Braunwarth, Wolfgang; Burghartz, Joachim N.

    2017-09-01

    In this work, various Lithium-ion (Li-ion) battery models are evaluated according to their accuracy, complexity and physical interpretability. An initial classification into physical, empirical and abstract models is introduced. Also known as white, black and grey boxes, respectively, the nature and characteristics of these model types are compared. Since the Li-ion battery cell is a thermo-electro-chemical system, the models are either in the thermal or in the electrochemical state-space. Physical models attempt to capture key features of the physical process inside the cell. Empirical models describe the system with empirical parameters offering poor analytical insight, whereas abstract models provide an alternative representation. In addition, a model selection guideline is proposed based on applications and design requirements. A complex model with a detailed analytical insight is of use for battery designers but impractical for real-time applications and in situ diagnosis. In automotive applications, an abstract model reproducing the battery behavior in an equivalent but more practical form, mainly as an equivalent circuit diagram, is recommended for the purpose of battery management. As a general rule, a trade-off should be reached between the high fidelity and the computational feasibility. Especially if the model is embedded in a real-time monitoring unit such as a microprocessor or a FPGA, the calculation time and memory requirements rise dramatically with a higher number of parameters. Moreover, examples of equivalent circuit models of Lithium-ion batteries are covered. Equivalent circuit topologies are introduced and compared according to the previously introduced criteria. An experimental sequence to model a 20 Ah cell is presented and the results are used for the purposes of powerline communication.
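
    As a concrete instance of the recommended "grey box" abstraction, a first-order Thevenin equivalent circuit can be simulated in a few lines. The parameter values below are illustrative only, not fitted to the 20 Ah cell from the paper (Python):

      # First-order Thevenin equivalent circuit: terminal voltage = OCV - I*R0 - V_RC,
      # with the RC pair modeling polarization. Exact zero-order-hold discretization.
      import numpy as np

      R0, R1, C1 = 2e-3, 1.5e-3, 5000.0   # ohmic resistance, RC polarization pair
      dt, tau = 1.0, R1 * C1              # 1 s time step

      def ocv(soc):                        # crude, hypothetical OCV curve
          return 3.0 + 1.2 * soc

      soc, v1, cap_As = 0.8, 0.0, 20 * 3600.0
      for k in range(60):                  # 60 s of a constant 10 A discharge
          i = 10.0
          v1 = v1 * np.exp(-dt / tau) + R1 * (1 - np.exp(-dt / tau)) * i
          soc -= i * dt / cap_As           # coulomb counting
          v_term = ocv(soc) - R0 * i - v1
      print(f"SOC={soc:.4f}, terminal voltage={v_term:.4f} V")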

  12. Decision curve analysis: a novel method for evaluating prediction models.

    PubMed

    Vickers, Andrew J; Elkin, Elena B

    2006-01-01

    Diagnostic and prognostic models are typically evaluated with measures of accuracy that do not address clinical consequences. Decision-analytic techniques allow assessment of clinical outcomes but often require collection of additional information and may be cumbersome to apply to models that yield a continuous result. The authors sought a method for evaluating and comparing prediction models that incorporates clinical consequences, requires only the data set on which the models are tested, and can be applied to models that have either continuous or dichotomous results. The authors describe decision curve analysis, a simple, novel method of evaluating predictive models. They start by assuming that the threshold probability of a disease or event at which a patient would opt for treatment is informative of how the patient weighs the relative harms of a false-positive and a false-negative prediction. This theoretical relationship is then used to derive the net benefit of the model across different threshold probabilities. Plotting net benefit against threshold probability yields the "decision curve." The authors apply the method to models for the prediction of seminal vesicle invasion in prostate cancer patients. Decision curve analysis identified the range of threshold probabilities in which a model was of value, the magnitude of benefit, and which of several models was optimal. Decision curve analysis is a suitable method for evaluating alternative diagnostic and prognostic strategies that has advantages over other commonly used measures and techniques.
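
    The net-benefit calculation at the core of decision curve analysis is NB(p_t) = TP/N - (FP/N) * p_t/(1 - p_t), evaluated across threshold probabilities p_t. A sketch on toy data (Python; the synthetic outcomes and predictions merely stand in for a real prediction model):

      # Net benefit across threshold probabilities, the quantity plotted as a
      # "decision curve"; compare against "treat all" and "treat none" policies.
      import numpy as np

      def net_benefit(y_true, y_prob, thresholds):
          n = len(y_true)
          out = []
          for pt in thresholds:
              treat = y_prob >= pt                 # model recommends treatment
              tp = np.sum(treat & (y_true == 1))
              fp = np.sum(treat & (y_true == 0))
              out.append(tp / n - fp / n * pt / (1 - pt))
          return np.array(out)

      # Toy stand-in for a seminal-vesicle-invasion prediction model.
      rng = np.random.default_rng(2)
      y = rng.integers(0, 2, 200)
      p = np.clip(y * 0.4 + rng.uniform(0, 0.6, 200), 0, 1)
      print(net_benefit(y, p, np.linspace(0.05, 0.5, 10)))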

  13. A Comparative Analysis of Drug-Induced Hepatotoxicity in Clinically Relevant Situations

    PubMed Central

    Thiel, Christoph; Cordes, Henrik; Fabbri, Lorenzo; Aschmann, Hélène Eloise; Baier, Vanessa; Atkinson, Francis; Blank, Lars Mathias; Kuepfer, Lars

    2017-01-01

    Drug-induced toxicity is a significant problem in clinical care. A key problem here is a general understanding of the molecular mechanisms accompanying the transition from desired drug effects to adverse events following administration of either therapeutic or toxic doses, in particular within a patient context. Here, a comparative toxicity analysis was performed for fifteen hepatotoxic drugs by evaluating toxic changes reflecting the transition from therapeutic drug responses to toxic reactions at the cellular level. By use of physiologically-based pharmacokinetic modeling, in vitro toxicity data were first contextualized to quantitatively describe time-resolved drug responses within a patient context. Comparatively studying toxic changes across the considered hepatotoxicants allowed the identification of subsets of drugs sharing similar perturbations on key cellular processes, functional classes of genes, and individual genes. The identified subsets of drugs were next analyzed with regard to drug-related characteristics and their physicochemical properties. Toxic changes were finally evaluated to predict both molecular biomarkers and potential drug-drug interactions. The results may facilitate the early diagnosis of adverse drug events in clinical application. PMID:28151932

  14. Golden Gate National Recreation Area: Alcatraz Island Ferry Comparability Analysis.

    DOT National Transportation Integrated Search

    2007-08-31

    This report presents a summary of an analysis comparing the ferry operated between San Francisco and Alcatraz Island with : similar water transportation services. The analysis was performed to assist the National Park Service in determining the rates...

  15. Economic Analysis of Panitumumab Compared With Cetuximab in Patients With Wild-type KRAS Metastatic Colorectal Cancer That Progressed After Standard Chemotherapy.

    PubMed

    Graham, Christopher N; Maglinte, Gregory A; Schwartzberg, Lee S; Price, Timothy J; Knox, Hediyyih N; Hechmati, Guy; Hjelmgren, Jonas; Barber, Beth; Fakih, Marwan G

    2016-06-01

    In this analysis, we compared costs and explored the cost-effectiveness of subsequent-line treatment with cetuximab or panitumumab in patients with wild-type KRAS (exon 2) metastatic colorectal cancer (mCRC) after previous chemotherapy treatment failure. Data were used from ASPECCT (A Study of Panitumumab Efficacy and Safety Compared to Cetuximab in Patients With KRAS Wild-Type Metastatic Colorectal Cancer), a Phase III, head-to-head randomized noninferiority study comparing the efficacy and safety of panitumumab and cetuximab in this population. A decision-analytic model was developed to perform a cost-minimization analysis and a semi-Markov model was created to evaluate the cost-effectiveness of panitumumab monotherapy versus cetuximab monotherapy in chemotherapy-resistant wild-type KRAS (exon 2) mCRC. The cost-minimization model assumed equivalent efficacy (progression-free survival) based on data from ASPECCT. The cost-effectiveness analysis was conducted with the full information (uncertainty) from ASPECCT. Both analyses were conducted from a US third-party payer perspective and calculated average anti-epidermal growth factor receptor doses from ASPECCT. Costs associated with drug acquisition, treatment administration (every 2 weeks for panitumumab, weekly for cetuximab), and incidence of infusion reactions were estimated in both models. The cost-effectiveness model also included physician visits, disease progression monitoring, best supportive care, and end-of-life costs and utility weights estimated from EuroQol 5-Dimension questionnaire responses from ASPECCT. The cost-minimization model results demonstrated lower projected costs for patients who received panitumumab versus cetuximab, with a projected cost savings of $9468 (16.5%) per panitumumab-treated patient. In the cost-effectiveness model, the incremental cost per quality-adjusted life-year gained revealed panitumumab to be less costly, with marginally better outcomes than cetuximab. These economic

  16. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Treesearch

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  17. Spectral Analysis and Experimental Modeling of Ice Accretion Roughness

    NASA Technical Reports Server (NTRS)

    Orr, D. J.; Breuer, K. S.; Torres, B. E.; Hansman, R. J., Jr.

    1996-01-01

    A self-consistent scheme for relating wind tunnel ice accretion roughness to the resulting enhancement of heat transfer is described. First, a spectral technique of quantitative analysis of early ice roughness images is reviewed. The image processing scheme uses a spectral estimation technique (SET) which extracts physically descriptive parameters by comparing scan lines from the experimentally-obtained accretion images to a prescribed test function. Analysis using this technique for both streamwise and spanwise directions of data from the NASA Lewis Icing Research Tunnel (IRT) are presented. An experimental technique is then presented for constructing physical roughness models suitable for wind tunnel testing that match the SET parameters extracted from the IRT images. The icing castings and modeled roughness are tested for enhancement of boundary layer heat transfer using infrared techniques in a "dry" wind tunnel.

  18. Comprehensive analysis of a Metabolic Model for lipid production in Rhodosporidium toruloides.

    PubMed

    Castañeda, María Teresita; Nuñez, Sebastián; Garelli, Fabricio; Voget, Claudio; Battista, Hernán De

    2018-05-19

    The yeast Rhodosporidium toruloides has been extensively studied for its application in biolipid production. The knowledge of its metabolism capabilities and the application of constraint-based flux analysis methodology provide useful information for process prediction and optimization. The accuracy of the resulting predictions is highly dependent on metabolic models. A metabolic reconstruction for R. toruloides metabolism has been recently published. On the basis of this model, we developed a curated version that unblocks the central nitrogen metabolism and, in addition, completes charge and mass balances in some reactions neglected in the former model. Then, a comprehensive analysis of network capability was performed with the curated model and compared with the published metabolic reconstruction. The flux distribution obtained by lipid optimization with Flux Balance Analysis was able to replicate the internal biochemical changes that lead to lipogenesis in oleaginous microorganisms. These results motivate the development of a genome-scale model for complete elucidation of R. toruloides metabolism. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Signal analysis of accelerometry data using gravity-based modeling

    NASA Astrophysics Data System (ADS)

    Davey, Neil P.; James, Daniel A.; Anderson, Megan E.

    2004-03-01

    Triaxial accelerometers have been used to measure human movement parameters in swimming. Interpretation of the data is difficult due to interference sources, including the interaction of external bodies. In this investigation the authors developed a model to simulate the physical movement of the lower back. Theoretical accelerometry outputs were derived, thus giving an ideal, or noiseless, dataset. An experimental data collection apparatus was developed by adapting a system to the aquatic environment for the investigation of swimming. Model data were compared against recorded data and showed strong correlation. Comparison of recorded and modeled data can be used to identify changes in body movement; this is especially useful when cyclic patterns are present in the activity. Strong correlations between data sets allowed development of signal processing algorithms for swimming stroke analysis, first using the pure noiseless data set and then applying the algorithms to performance data. Video analysis was also used to validate study results and has shown potential to provide acceptable results.

  20. A comparative analysis of chaotic particle swarm optimizations for detecting single nucleotide polymorphism barcodes.

    PubMed

    Chuang, Li-Yeh; Moi, Sin-Hua; Lin, Yu-Da; Yang, Cheng-Hong

    2016-10-01

    Evolutionary algorithms can overcome the computational limitations of statistical evaluation of large datasets for high-order single nucleotide polymorphism (SNP) barcodes. Previous studies have proposed several chaotic particle swarm optimization (CPSO) methods to detect SNP barcodes for disease analysis (e.g., for breast cancer and chronic diseases). This work evaluated additional chaotic maps combined with the particle swarm optimization (PSO) method to detect SNP barcodes using a high-dimensional dataset. Nine chaotic maps were used to improve PSO results, and the searching ability of all CPSO methods was compared. The XOR and ZZ disease models were used to compare all chaotic maps combined with the PSO method. Efficacy evaluations of the CPSO methods were based on statistical values from the chi-square test (χ2). The results showed that chaotic maps can improve the searching ability of the PSO method when populations are trapped in local optima. The minor allele frequency (MAF) indicated that, amongst all CPSO methods, the best numbers of SNPs, sample sizes, and the highest χ2 values in all datasets were found with the Sinai chaotic map combined with the PSO method. We used simple linear regression on the gbest values across all generations to compare all the methods. The Sinai chaotic map combined with the PSO method provided the highest β values (β≥0.32 in the XOR disease model and β≥0.04 in the ZZ disease model) and significant p-values (p-value<0.001 in both the XOR and ZZ disease models). The Sinai chaotic map was found to effectively enhance the fitness values (χ2) of the PSO method, indicating that the Sinai chaotic map combined with the PSO method is more effective at detecting potential SNP barcodes in both the XOR and ZZ disease models. Copyright © 2016 Elsevier B.V. All rights reserved.
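
    The general CPSO pattern, a chaotic sequence replacing a fixed PSO coefficient, can be sketched compactly. The toy below (Python) uses the logistic map to drive the inertia weight and a sphere function in place of the χ2 fitness; the Sinai map that performed best in the study is not reproduced here:

      # Chaotic-map-driven PSO: the logistic map supplies a chaotic inertia
      # weight each iteration (illustrative stand-in for the paper's nine maps).
      import numpy as np

      rng = np.random.default_rng(3)

      def fitness(x):                       # stand-in objective (paper uses chi-square)
          return np.sum(x ** 2, axis=1)

      n, d = 30, 10
      x = rng.uniform(-5, 5, (n, d)); v = np.zeros((n, d))
      pbest, pbest_f = x.copy(), fitness(x)
      gbest = pbest[np.argmin(pbest_f)]
      z = 0.7                               # chaotic state in (0, 1)

      for t in range(200):
          z = 4.0 * z * (1.0 - z)           # logistic map iteration
          w = 0.4 + 0.5 * z                 # map chaos into an inertia range
          r1, r2 = rng.random((n, d)), rng.random((n, d))
          v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
          x = x + v
          f = fitness(x)
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = x[improved], f[improved]
          gbest = pbest[np.argmin(pbest_f)]
      print("best fitness:", pbest_f.min())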

  1. Comparative secretome analysis of rat stomach under different nutritional status.

    PubMed

    Senin, Lucia L; Roca-Rivada, Arturo; Castelao, Cecilia; Alonso, Jana; Folgueira, Cintia; Casanueva, Felipe F; Pardo, Maria; Seoane, Luisa M

    2015-02-26

    Obesity is a major public health threat for many industrialised countries. Bariatric surgery is the most effective treatment against obesity, suggesting that gut-derived signals are crucial for energy balance regulation. Several descriptive studies have proven the presence of gastric endogenous systems that modulate energy homeostasis; however, these systems and the interactions between them are still not well known. In the present study, we show for the first time a comparative 2-DE gastric secretome analysis under different nutritional states. We identified 38 differentially secreted proteins by comparing stomach secretomes from tissue explant cultures of rats under feeding, fasting and re-feeding conditions. Among the proteins identified, glyceraldehyde-3-phosphate dehydrogenase was found to be more abundant in the gastric secretome and plasma after re-feeding, and downregulated in obesity. Additionally, two calponin-1 species were decreased in the feeding state, and others were modulated by nutritional and metabolic conditions. These and other secreted proteins identified in this work may be considered potential gastrokines implicated in food intake regulation. The present work has an important impact on the field of obesity, especially regarding the regulation of body weight maintenance by the stomach. Nowadays, the most effective treatment in the fight against obesity is bariatric surgery, which suggests that stomach-derived signals might be crucial for the regulation of energy homeostasis. However, until now, knowledge about the gastrokines and their mechanisms of action has been poorly elucidated. In the present work, we have updated a previously validated explant secretion model for proteomic studies; this analysis allowed us, for the first time, to study the gastric secretome without interference from other organs. We identified 38 differentially secreted proteins by comparing ex vivo cultured stomachs from rats under feeding, fasting and re-feeding regimes

  2. Comparative evaluation of spectroscopic models using different multivariate statistical tools in a multicancer scenario

    NASA Astrophysics Data System (ADS)

    Ghanate, A. D.; Kothiwale, S.; Singh, S. P.; Bertrand, Dominique; Krishna, C. Murali

    2011-02-01

    Cancer is now recognized as one of the major causes of morbidity and mortality. Histopathological diagnosis, the gold standard, is shown to be subjective, time-consuming, prone to interobserver disagreement, and often fails to predict prognosis. Optical spectroscopic methods are being contemplated as adjuncts or alternatives to conventional cancer diagnostics. The most important aspect of these approaches is their objectivity, and multivariate statistical tools play a major role in realizing it. However, rigorous evaluation of the robustness of spectral models is a prerequisite. The utility of Raman spectroscopy in the diagnosis of cancers has been well established. Until now, the specificity and applicability of spectral models have been evaluated for specific cancer types. In this study, we have evaluated the utility of spectroscopic models representing normal and malignant tissues of the breast, cervix, colon, larynx, and oral cavity in a broader perspective, using different multivariate tests. The limit test, which was used in our earlier study, gave high sensitivity but suffered from poor specificity. The performance of other methods such as factorial discriminant analysis and partial least square discriminant analysis is on par with more complex nonlinear methods such as decision trees, but they provide very little information about the classification model. This comparative study thus demonstrates not just the efficacy of Raman spectroscopic models but also the applicability and limitations of different multivariate tools for discrimination under complex conditions such as the multicancer scenario.

  3. COGNAT: a web server for comparative analysis of genomic neighborhoods.

    PubMed

    Klimchuk, Olesya I; Konovalov, Kirill A; Perekhvatov, Vadim V; Skulachev, Konstantin V; Dibrova, Daria V; Mulkidjanian, Armen Y

    2017-11-22

    In prokaryotic genomes, functionally coupled genes can be organized in conserved gene clusters, enabling their coordinated regulation. Such clusters could contain one or several operons, which are groups of co-transcribed genes. Those genes that evolved from a common ancestral gene by speciation (i.e. orthologs) are expected to have similar genomic neighborhoods in different organisms, whereas those copies of a gene that are responsible for dissimilar functions (i.e. paralogs) could be found in dissimilar genomic contexts. Comparative analysis of genomic neighborhoods facilitates the prediction of co-regulated genes and helps to discern different functions in large protein families. Building on the attribution of gene sequences to the clusters of orthologous groups of proteins (COGs), we intended to provide a method for visualization and comparative analysis of genomic neighborhoods of evolutionarily related genes, as well as a respective web server. Here we introduce the COmparative Gene Neighborhoods Analysis Tool (COGNAT), a web server for comparative analysis of genomic neighborhoods. The tool is based on the COG database, as well as the Pfam protein families database. As an example, we show the utility of COGNAT in identifying a new type of membrane protein complex that is formed by paralog(s) of one of the membrane subunits of the NADH:quinone oxidoreductase of type 1 (COG1009) and a cytoplasmic protein of unknown function (COG3002). This article was reviewed by Drs. Igor Zhulin, Uri Gophna and Igor Rogozin.

  4. Delamination Modeling of Composites for Improved Crash Analysis

    NASA Technical Reports Server (NTRS)

    Fleming, David C.

    1999-01-01

    Finite element crash modeling of composite structures is limited by the inability of current commercial crash codes to accurately model delamination growth. Efforts are made to implement and assess delamination modeling techniques using a current finite element crash code, MSC/DYTRAN. Three methods are evaluated, including a straightforward method based on monitoring forces in elements or constraints representing an interface; a cohesive fracture model proposed in the literature; and the virtual crack closure technique commonly used in fracture mechanics. Results are compared with dynamic double cantilever beam test data from the literature. Examples show that it is possible to accurately model delamination propagation in this case. However, the computational demands required for accurate solution are great and reliable property data may not be available to support general crash modeling efforts. Additional examples are modeled, including an impact-loaded beam, damage initiation in laminated crushing specimens, and a scaled aircraft subfloor structure in which composite sandwich structures are used as energy-absorbing elements. These examples illustrate some of the difficulties in modeling delamination as part of a finite element crash analysis.

  5. HASP server: a database and structural visualization platform for comparative models of influenza A hemagglutinin proteins.

    PubMed

    Ambroggio, Xavier I; Dommer, Jennifer; Gopalan, Vivek; Dunham, Eleca J; Taubenberger, Jeffery K; Hurt, Darrell E

    2013-06-18

    Influenza A viruses possess RNA genomes that mutate frequently in response to immune pressures. The mutations in the hemagglutinin genes are particularly significant, as the hemagglutinin proteins mediate attachment and fusion to host cells, thereby influencing viral pathogenicity and species specificity. Large-scale influenza A genome sequencing efforts have been ongoing to understand past epidemics and pandemics and anticipate future outbreaks. Sequencing efforts thus far have generated nearly 9,000 distinct hemagglutinin amino acid sequences. Comparative models for all publicly available influenza A hemagglutinin protein sequences (8,769 to date) were generated using the Rosetta modeling suite. The C-alpha root mean square deviations between a randomly chosen test set of models and their crystallographic templates were less than 2 Å, suggesting that the modeling protocols yielded high-quality results. The models were compiled into an online resource, the Hemagglutinin Structure Prediction (HASP) server. The HASP server was designed as a scientific tool for researchers to visualize hemagglutinin protein sequences of interest in a three-dimensional context. With a built-in molecular viewer, hemagglutinin models can be compared side-by-side and navigated by a corresponding sequence alignment. The models and alignments can be downloaded for offline use and further analysis. The modeling protocols used in the HASP server scale well for large amounts of sequences and will keep pace with expanded sequencing efforts. The conservative approach to modeling and the intuitive search and visualization interfaces allow researchers to quickly analyze hemagglutinin sequences of interest in the context of the most highly related experimental structures, and allow them to directly compare hemagglutinin sequences to each other simultaneously in their two- and three-dimensional contexts. The models and methodology have shown utility in current research efforts and the ongoing aim
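
    The C-alpha RMSD check described above is conventionally computed after optimal superposition via the Kabsch algorithm. A self-contained sketch with mock coordinates (Python; not the HASP pipeline itself):

      # Kabsch superposition + RMSD: center both coordinate sets, find the
      # optimal rotation via SVD (guarding against reflections), then compute RMSD.
      import numpy as np

      def kabsch_rmsd(P, Q):
          """RMSD after optimal rotation of P onto Q; both are (N, 3) arrays."""
          P = P - P.mean(axis=0)
          Q = Q - Q.mean(axis=0)
          V, S, Wt = np.linalg.svd(P.T @ Q)
          d = np.sign(np.linalg.det(V @ Wt))   # guard against reflections
          R = V @ np.diag([1.0, 1.0, d]) @ Wt
          return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))

      # Mock C-alpha trace: rotate a random template and add coordinate noise.
      rng = np.random.default_rng(4)
      template = rng.normal(size=(120, 3))
      theta = 0.6
      Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0, 0.0, 1.0]])
      model = template @ Rz + rng.normal(scale=0.5, size=(120, 3))
      print(f"C-alpha RMSD: {kabsch_rmsd(model, template):.2f} Å")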

  6. Sherrington's Model of Successive Induction for Comparative Analysis of Zebrafish Motor Response

    EPA Science Inventory

    The responses in motor activity of zebrafish to sudden changes in lighting conditions may be modeled by Sherrington’s model of successive induction. Fish left in the dark exhibit very little motion; when exposed to light, zebrafish motion increases towards an apparent horizo...

  7. Analysis of Whole-Sky Imager Data to Determine the Validity of PCFLOS models

    DTIC Science & Technology

    1992-12-01

    included in the data sample. ... Data arrangement for an r x c contingency table ... ARIMA models estimated for each ... satellites. This model uses the multidimensional Boehm Sawtooth Wave Model to establish climatic probabilities through repetitive simulations of ... analysis techniques to develop an ARIMA model for each direction at the Columbia and Kirtland sites. Then, the models can be compared and analyzed to

  8. Of Mice and Men: Comparative Analysis of Neuro-Inflammatory Mechanisms in Human and Mouse Using Cause-and-Effect Models.

    PubMed

    Kodamullil, Alpha Tom; Iyappan, Anandhi; Karki, Reagon; Madan, Sumit; Younesi, Erfan; Hofmann-Apitius, Martin

    2017-01-01

    Perturbations in inflammatory pathways have been identified as one of the major factors leading to neurodegenerative diseases (NDD). Owing to the limited access to human brain tissue and the immense complexity of the brain, animal models, specifically mouse models, play a key role in advancing the NDD field. However, many of these mouse models fail to reproduce the clinical manifestations and end points of the disease. NDD drugs that passed efficacy tests in mice have repeatedly been unsuccessful in clinical trials. There are numerous studies supporting and opposing the applicability of mouse models to neuroinflammation and NDD. In this paper, we assessed to what extent a mouse can mimic the cellular and molecular interactions in humans at a mechanistic level. Based on our mechanistic modeling approach, we investigate the failure of a neuroinflammation-targeted drug in the late phases of clinical trials through comparative analyses between the two species.

  9. Comparing Habitat Suitability and Connectivity Modeling Methods for Conserving Pronghorn Migrations

    PubMed Central

    Poor, Erin E.; Loucks, Colby; Jakes, Andrew; Urban, Dean L.

    2012-01-01

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements. PMID:23166656

  10. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    PubMed

    Poor, Erin E; Loucks, Colby; Jakes, Andrew; Urban, Dean L

    2012-01-01

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.
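
    The least-cost-modeling step common to both entries above can be illustrated with a small self-contained sketch: treat one minus suitability as a resistance surface and run Dijkstra's algorithm over an 8-connected grid. The suitability raster, endpoints, and cost convention here are invented for illustration; the study's actual workflow applied GIS corridor tools to Maxent and AHP outputs.

        import heapq
        import numpy as np

        def least_cost_path(cost, start, end):
            """Dijkstra over an 8-connected grid; cost is a 2-D array of
            per-cell resistance values (e.g., 1 - habitat suitability)."""
            rows, cols = cost.shape
            dist = np.full(cost.shape, np.inf)
            prev = {}
            dist[start] = cost[start]
            pq = [(cost[start], start)]
            while pq:
                d, (r, c) = heapq.heappop(pq)
                if (r, c) == end:
                    break
                if d > dist[r, c]:
                    continue
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = r + dr, c + dc
                        if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                            step = np.hypot(dr, dc) * cost[nr, nc]
                            if d + step < dist[nr, nc]:
                                dist[nr, nc] = d + step
                                prev[(nr, nc)] = (r, c)
                                heapq.heappush(pq, (d + step, (nr, nc)))
            path, node = [], end
            while node != start:
                path.append(node)
                node = prev[node]
            path.append(start)
            return path[::-1], dist[end]

        # Synthetic suitability surface standing in for the Maxent/AHP output.
        rng = np.random.default_rng(1)
        suitability = rng.uniform(0.05, 1.0, size=(50, 50))
        resistance = 1.0 - suitability + 0.01          # avoid zero-cost cells
        path, total = least_cost_path(resistance, (0, 0), (49, 49))
        print(len(path), round(total, 2))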

  11. Comparing two models for post-wildfire debris flow susceptibility mapping

    NASA Astrophysics Data System (ADS)

    Cramer, J.; Bursik, M. I.; Legorreta Paulin, G.

    2017-12-01

    Traditionally, probabilistic post-fire debris flow susceptibility mapping has been performed based on the typical failure mechanism of debris flows/landslides, in which slip occurs along a basal shear zone as a result of rainfall infiltration. Recent studies have argued that post-fire debris flows are fundamentally different in their initiation mechanism, which is driven not by infiltration but by surface runoff. We test these competing models by comparing the accuracy of the susceptibility maps produced by each initiation method. Debris flow susceptibility maps are generated according to each initiation method for a mountainous region of Southern California that recently experienced wildfire and subsequent debris flows. A multiple logistic regression (MLR), which uses the occurrence of past debris flows and the values of environmental parameters, was used to determine the probability of future debris flow occurrence. The independent variables used in the MLR depend on the initiation method; for example, depth to the slip plane and shear strength of the soil are relevant to infiltration-driven initiation, but not to surface runoff. A post-fire debris flow inventory, generated by LiDAR analysis and field-based ground-truthing, serves as the standard against which to compare the two susceptibility maps. The amount of overlap between the true locations where debris flow erosion can be documented and the locations where the MLR predicts a high probability of debris flow initiation was statistically quantified. The Figure of Merit in Space (FMS) was used to compare the two models, and the results of the FMS comparison suggest that surface runoff-driven initiation better explains debris flow occurrence. Wildfire can breed conditions that induce debris flows in areas that normally would not be prone to them. Because of this, nearby communities at risk may not be equipped to protect themselves against debris flows. In California, there are just a few months between wildland fire season and the wet
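
    A minimal sketch of the two analysis steps named above - fitting a multiple logistic regression of initiation occurrence on environmental predictors, then scoring the prediction against an inventory with the Figure of Merit in Space (computed here as overlap over union) - on synthetic data; the predictor names and the 0.5 threshold are assumptions, not the study's variables.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(7)
        n = 2000

        # Hypothetical predictors per cell: burn severity, slope, drainage area.
        X = np.column_stack([
            rng.uniform(0, 4, n),      # burn severity class
            rng.uniform(0, 45, n),     # slope (degrees)
            rng.lognormal(0, 1, n),    # contributing area (km^2)
        ])
        # Synthetic "observed" debris-flow occurrence, for illustration only.
        logit = -6.0 + 0.8 * X[:, 0] + 0.08 * X[:, 1] + 0.5 * np.log(X[:, 2] + 1)
        y = rng.random(n) < 1 / (1 + np.exp(-logit))

        mlr = LogisticRegression(max_iter=1000).fit(X, y)
        p = mlr.predict_proba(X)[:, 1]

        # Figure of Merit in Space: overlap / union of predicted and observed.
        pred = p > 0.5
        fms = (pred & y).sum() / (pred | y).sum()
        print(f"FMS = {fms:.3f}")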

  12. VALUING BENEFITS FROM WATER QUALITY IMPROVEMENTS USING KUHN TUCKER MODEL - A COMPARATIVE ANALYSIS ON UTILITY FUNCTIONAL FORMS-

    NASA Astrophysics Data System (ADS)

    Okuyama, Tadahiro

    The Kuhn-Tucker model, which has been studied in recent years, is a benefit valuation technique using revealed-preference data; its feature is that it treats various patterns of corner solutions flexibly. It is widely known that, in benefit calculations using revealed-preference data, the value of a benefit changes depending on the functional form. However, few studies have examined the relationship between utility functions and benefit values in the Kuhn-Tucker model. The purpose of this study is to analyze the influence of the functional form on the value of a benefit. Six types of utility functions were employed for the benefit calculations, using data on recreational activity at 26 beaches in Miyagi Prefecture. The calculation results indicated that the functional forms of Phaneuf and Siderelis (2003) and Whitehead et al. (2010) are useful for benefit calculation.

  13. Uncertainty analysis of hydrological modeling in a tropical area using different algorithms

    NASA Astrophysics Data System (ADS)

    Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh

    2018-01-01

    Hydrological modeling outputs are subject to uncertainty resulting from different sources of error (e.g., error in input data, model structure, and model parameters), making quantification of uncertainty in hydrological modeling imperative if the reliability of modeling results is to be improved. Uncertainty analysis must also contend with difficulties in the calibration of hydrological models, which increase further in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in central Vietnam. The sensitivity of parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted based on four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), the Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R2), the Nash-Sutcliffe coefficient of efficiency (NSE) and Percent Bias (PBIAS). The results showed the high performance of SUFI and PSO, with P-factor > 0.83, R-factor < 0.56, R2 > 0.91, NSE > 0.89, and PBIAS values as low as 0.18 in the uncertainty analysis. Indeed, uncertainty must be accounted for when the outcomes of the model are used for policy or management decisions.
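
    Of the four algorithms, GLUE is the simplest to sketch: sample parameter sets, retain the "behavioral" ones above a likelihood threshold, and summarize the ensemble with the P-factor (coverage of observations) and R-factor (relative band width). The toy one-parameter recession model below stands in for SWAT; the threshold and all values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.arange(60.0)
        k_true = 0.08
        obs = 10.0 * np.exp(-k_true * t) + rng.normal(0, 0.3, t.size)

        def model(k):
            # Toy stand-in for the hydrological model: a recession curve.
            return 10.0 * np.exp(-k * t)

        def nse(sim, obs):
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # GLUE: Monte Carlo sampling, keep behavioral parameter sets.
        ks = rng.uniform(0.01, 0.20, 5000)
        sims = np.array([model(k) for k in ks])
        behavioral = sims[np.array([nse(s, obs) for s in sims]) > 0.7]

        lo, hi = np.percentile(behavioral, [2.5, 97.5], axis=0)
        p_factor = np.mean((obs >= lo) & (obs <= hi))   # coverage of observations
        r_factor = np.mean(hi - lo) / obs.std()         # relative band width
        print(f"{behavioral.shape[0]} behavioral runs, "
              f"P-factor = {p_factor:.2f}, R-factor = {r_factor:.2f}")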

  14. A comparative analysis of restorative materials used in abfraction lesions in tooth with and without occlusal restoration: Three-dimensional finite element analysis

    PubMed Central

    Srirekha, A; Bashetty, Kusum

    2013-01-01

    Objectives: The present comparative analysis aimed at evaluating the mechanical behavior of various restorative materials in abfraction lesions in the presence and absence of occlusal restoration. Materials and Methods: A three-dimensional finite-element analysis was performed. Six experimental models of the mandibular first premolar were generated and divided into two groups (groups A and B) of three each. All the groups had cervical abfraction lesions restored with the materials, and in addition group A had a class I occlusal restoration. Loads of 90 N, 200 N, and 400 N were applied at a 45° loading angle on the buccal inclines of the buccal cusp, and von Mises stress was chosen for analysis. Results: In all the models, the stresses recorded at the cervical margin of the restorations were at their maxima. Irrespective of the occlusal restoration, all the materials performed well at 90 N and 200 N. At 400 N, only low-shrink composite showed stresses lower than its tensile strength, indicating its success even at higher loads. Conclusion: Irrespective of occlusal restoration, restorative materials with a low modulus of elasticity are successful in abfraction lesions at moderate tensile stresses, whereas materials with a higher modulus of elasticity and better mechanical properties can support higher loads and resist wear. Significance: The model allows comparison of different restorative materials for the restoration of abfraction lesions in the presence and absence of occlusal restoration. The model can be used to validate more sophisticated computational models as well as to conduct various optimization studies. PMID:23716970

  15. Model Construction and Analysis of Respiration in Halobacterium salinarum.

    PubMed

    Talaue, Cherryl O; del Rosario, Ricardo C H; Pfeiffer, Friedhelm; Mendoza, Eduardo R; Oesterhelt, Dieter

    2016-01-01

    The archaeon Halobacterium salinarum can produce energy using three different processes, namely photosynthesis, oxidative phosphorylation and fermentation of arginine, and is thus a model organism in bioenergetics. Compared with its bacteriorhodopsin-driven photosynthesis, its respiratory pathway has received less modeling attention. We created a system of ordinary differential equations that models its oxidative phosphorylation. The model consists of the electron transport chain, the ATP synthase, the potassium uniport and the sodium-proton antiport. By fitting the model parameters to experimental data, we show that the model can explain data on proton motive force generation, ATP production, and the charge balancing of ions between the sodium-proton antiporter and the potassium uniport. We performed sensitivity analysis of the model parameters to determine how the model responds to perturbations in parameter values. The model and the parameters we derived provide a resource that can be used for analytical studies of the bioenergetics of H. salinarum.

  16. High energy helion scattering: A "model-independent" analysis

    NASA Astrophysics Data System (ADS)

    Djaloeis, A.; Gopal, S.

    1981-03-01

    Angular distributions of helions elastically scattered from 24Mg, 58Ni, 90Zr and 120Sn at Eτ = 130 MeV have been subjected to a "model-independent" analysis in the framework of the optical model. The real part of the optical potential was represented by a spline-function; volume and surface absorptions were considered. Both the shallow and the deep families of the helion optical potential were investigated. The spline potentials are found to deviate from the Woods-Saxon shape. The experimental data are well described by optical potentials with either a volume or a surface absorption. However, the volume absorption consistently gives better fits. For 24Mg, 90Zr and 120Sn both shallow and deep potential families result in comparable fit qualities. For 58Ni the discrete ambiguity is resolved in favour of the shallow family. From the analysis the values of the rms radius of matter distribution have been extracted.

  17. Comparative study of standard space and real space analysis of quantitative MR brain data.

    PubMed

    Aribisala, Benjamin S; He, Jiabao; Blamire, Andrew M

    2011-06-01

    To compare the robustness of region of interest (ROI) analysis of magnetic resonance imaging (MRI) brain data in real space with analysis in standard space, and to test the hypothesis that standard space image analysis introduces more partial volume effect errors than analysis of the same dataset in real space. Twenty healthy adults with no history or evidence of neurological disease were recruited; high-resolution T1-weighted, quantitative T1, and B0 field-map measurements were collected. Algorithms were implemented to perform analysis in real and standard space and used to apply a simple standard ROI template to quantitative T1 datasets. Regional relaxation values and histograms for both gray and white matter tissue classes were then extracted and compared. Regional mean T1 values for both gray and white matter were significantly lower using real space compared to standard space analysis. Additionally, regional T1 histograms were more compact in real space, with smaller right-sided tails indicating lower partial volume errors compared to standard space analysis. Standard space analysis of quantitative MRI brain data introduces more partial volume effect errors, biasing the analysis of quantitative data compared to analysis of the same dataset in real space. Copyright © 2011 Wiley-Liss, Inc.

  18. A comparative verification of high resolution precipitation forecasts using model output statistics

    NASA Astrophysics Data System (ADS)

    van der Plas, Emiel; Schmeits, Maurice; Hooijman, Nicolien; Kok, Kees

    2017-04-01

    Verification of localized events such as precipitation has become even more challenging with the advent of high-resolution meso-scale numerical weather prediction (NWP). The realism of a forecast suggests that it should compare well against precipitation radar imagery with similar resolution, both spatially and temporally. Spatial verification methods solve some of the representativity issues that point verification gives rise to. In this study a verification strategy based on model output statistics is applied that aims to address both the double-penalty and resolution effects that are inherent to comparisons of NWP models with different resolutions. Using predictors based on spatial precipitation patterns around a set of stations, an extended logistic regression (ELR) equation is deduced, leading to a probability forecast distribution of precipitation for each NWP model, analysis and lead time. The ELR equations are derived for predictands based on areal calibrated radar precipitation and SYNOP observations. The aim is to extract maximum information from a series of precipitation forecasts, like a trained forecaster would. The method is applied to the non-hydrostatic model Harmonie (2.5 km resolution), Hirlam (11 km resolution) and the ECMWF model (16 km resolution), overall yielding similar Brier skill scores for the three post-processed models, but larger differences for individual lead times. In addition, the Fractions Skill Score is computed using the three deterministic forecasts, showing somewhat better skill for the Harmonie model. In other words, despite the realism of Harmonie precipitation forecasts, they perform only similarly to or somewhat better than precipitation forecasts from the two lower-resolution models, at least in the Netherlands.
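
    A minimal sketch of the Fractions Skill Score referenced above, computed on synthetic forecast and observed fields from neighbourhood fractions of threshold exceedance; the fields, threshold, and window sizes are invented for illustration.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def fss(forecast, observed, threshold, window):
            """Fractions Skill Score for one threshold and neighbourhood size."""
            fb = forecast >= threshold
            ob = observed >= threshold
            # Fraction of exceeding cells in each (window x window) neighbourhood.
            pf = uniform_filter(fb.astype(float), size=window, mode="constant")
            po = uniform_filter(ob.astype(float), size=window, mode="constant")
            mse = np.mean((pf - po) ** 2)
            ref = np.mean(pf ** 2) + np.mean(po ** 2)
            return 1.0 - mse / ref if ref > 0 else np.nan

        # Synthetic fields standing in for calibrated radar obs and an NWP run.
        rng = np.random.default_rng(5)
        obs = rng.gamma(0.4, 2.0, size=(200, 200))
        fc = np.roll(obs, 6, axis=1) + rng.normal(0, 0.2, obs.shape)  # displaced

        for w in (1, 5, 25):
            print(f"window {w:2d}: FSS = {fss(fc, obs, threshold=1.0, window=w):.3f}")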

  19. Physician-patient argumentation and communication, comparing Toulmin's model, pragma-dialectics, and American sociolinguistics.

    PubMed

    Rivera, Francisco Javier Uribe; Artmann, Elizabeth

    2015-12-01

    This article discusses the application of theories of argumentation and communication to the field of medicine. Based on a literature review, the authors compare Toulmin's model, pragma-dialectics, and the work of Todd and Fisher, derived from American sociolinguistics. These approaches were selected because they belong to the pragmatic field of language. The main results were: pragma-dialectics characterizes medical reasoning more comprehensively, highlighting specific elements of the three disciplines of argumentation: dialectics, rhetoric, and logic; Toulmin's model helps substantiate the declaration of diagnostic and therapeutic hypotheses, and as part of an interpretive medicine, approximates the pragma-dialectical approach by including dialectical elements in the process of formulating arguments; Fisher and Todd's approach allows characterizing, from a pragmatic analysis of speech acts, the degree of symmetry/asymmetry in the doctor-patient relationship, while arguing the possibility of negotiating treatment alternatives.

  20. Modeling discourse management compared to other classroom management styles in university physics

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain Michael

    2002-01-01

    A classroom management technique called modeling discourse management was developed to enhance the modeling theory of physics. Modeling discourse management is a student-centered management that focuses on the epistemology of science. Modeling discourse is social constructivist in nature and was designed to encourage students to present classroom material to each other. In modeling discourse management, the instructor's primary role is of questioner rather than provider of knowledge. Literature is presented that helps validate the components of modeling discourse. Modeling discourse management was compared to other classroom management styles using multiple measures. Both regular and honors university physics classes were investigated. This style of management was found to enhance student understanding of forces, problem-solving skills, and student views of science compared to traditional classroom management styles for both honors and regular students. Compared to other reformed physics classrooms, modeling discourse classes performed as well or better on student understanding of forces. Outside evaluators viewed modeling discourse classes to be reformed, and it was determined that modeling discourse could be effectively disseminated.

  1. A microbial model of economic trading and comparative advantage.

    PubMed

    Enyeart, Peter J; Simpson, Zachary B; Ellington, Andrew D

    2015-01-07

    The economic theory of comparative advantage postulates that beneficial trading relationships can be arrived at by two self-interested entities producing the same goods as long as they have opposing relative efficiencies in producing those goods. The theory predicts that upon entering trade, in order to maximize consumption, both entities will specialize in producing the good they can produce at higher efficiency, that the weaker entity will specialize more completely than the stronger entity, and that both will be able to consume more goods as a result of trade than either would be able to alone. We extend this theory to the realm of unicellular organisms by developing mathematical models of genetic circuits that allow trading of a common good (specifically, signaling molecules) required for growth in bacteria in order to demonstrate comparative advantage interactions. In Conception 1, the experimenter controls production rates via exogenous inducers, allowing exploration of the parameter space of specialization. In Conception 2, the circuits self-regulate via feedback mechanisms. Our models indicate that these genetic circuits can demonstrate comparative advantage, and that cooperation in such a manner is particularly favored under stringent external conditions and when the cost of production is not overly high. Further work could involve implementing the models in living bacteria and searching for naturally occurring cooperative relationships between bacteria that conform to the principles of comparative advantage. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
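
    The core prediction can be checked with simple arithmetic. In the worked example below, the production efficiencies, effort split, and exchange price are invented, chosen only so that the two entities have opposing relative efficiencies.

        # Worked example of comparative advantage with assumed efficiencies
        # (units of goods A and B producible per unit effort; values invented).
        strong = {"A": 6.0, "B": 3.0}   # better at both goods in absolute terms
        weak   = {"A": 1.0, "B": 2.0}   # relatively better at B (2/1 > 3/6)

        # Autarky: each splits effort evenly between the two goods.
        autarky = {name: (e["A"] / 2, e["B"] / 2)
                   for name, e in [("strong", strong), ("weak", weak)]}

        # Trade: strong specializes toward A, weak fully specializes in B,
        # then they swap at a price between the two opportunity costs (1 B = 1 A).
        strong_make = (strong["A"] * 0.75, strong["B"] * 0.25)   # (4.5, 0.75)
        weak_make = (0.0, weak["B"])                             # (0.0, 2.0)
        trade_a = 1.0                                            # A given by strong
        strong_gets = (strong_make[0] - trade_a, strong_make[1] + trade_a)
        weak_gets = (weak_make[0] + trade_a, weak_make[1] - trade_a)

        print("autarky:", autarky)            # strong (3.0, 1.5), weak (0.5, 1.0)
        print("trade:", strong_gets, weak_gets)   # (3.5, 1.75) and (1.0, 1.0):
        # neither entity is worse off in either good, and total consumption rises.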

  2. Comparing live and remote models in eating conformity research.

    PubMed

    Feeney, Justin R; Polivy, Janet; Pliner, Patricia; Sullivan, Margot D

    2011-01-01

    Research demonstrates that people conform to how much other people eat. This conformity occurs in the presence of other people (live model) and when people view information about how much food prior participants ate (remote models). The assumption in the literature has been that remote models produce a similar effect to live models, but this has never been tested. To investigate this issue, we randomly paired participants with a live or remote model and compared their eating to those who ate alone. We found that participants exposed to both types of model differed significantly from those in the control group, but there was no significant difference between the two modeling procedures. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  3. A cost-effectiveness analysis of celecoxib compared with diclofenac in the treatment of pain in osteoarthritis (OA) within the Swedish health system using an adaptation of the NICE OA model.

    PubMed

    Brereton, Nicholas; Pennington, Becky; Ekelund, Mats; Akehurst, Ronald

    2014-09-01

    Celecoxib for the treatment of pain resulting from osteoarthritis (OA) was reviewed by the Tandvårds- och läkemedelsförmånsverket (Dental and Pharmaceutical Benefits Board, TLV) in Sweden in late 2010. This study aimed to evaluate the incremental cost-effectiveness ratio (ICER) of celecoxib plus a proton pump inhibitor (PPI) compared to diclofenac plus a PPI in a Swedish setting. The National Institute for Health and Care Excellence (NICE) in the UK developed a health economic model as part of their 2008 assessment of treatments for OA. In this analysis, the model was reconstructed and adapted to a Swedish perspective. Drug costs were updated using the TLV database. Adverse event costs were calculated using the regional price list of Southern Sweden and the standard treatment guidelines from the county council of Stockholm. Costs for treating cardiovascular (CV) events were taken from the Swedish DRG codes and the literature. Over a patient's lifetime, treatment with celecoxib plus a PPI was associated with a quality-adjusted life year (QALY) gain of 0.006 per patient when compared to diclofenac plus a PPI. There was an increase in discounted costs of 529 kr per patient, which resulted in an ICER of 82,313 kr ($12,141). Sensitivity analysis showed that treatment was more cost-effective in patients with an increased risk of bleeding or gastrointestinal (GI) complications. The results suggest that celecoxib plus a PPI is a cost-effective treatment for OA when compared to diclofenac plus a PPI. Treatment is shown to be more cost-effective in Sweden for patients with a high risk of bleeding or GI complications. It was in this population that the TLV gave a positive recommendation. There are known limitations on efficacy in the original NICE model.

  4. The modeling and analysis of the word-of-mouth marketing

    NASA Astrophysics Data System (ADS)

    Li, Pengdeng; Yang, Xiaofan; Yang, Lu-Xing; Xiong, Qingyu; Wu, Yingbo; Tang, Yuan Yan

    2018-03-01

    Compared with traditional advertising, word-of-mouth (WOM) communications have striking advantages such as significantly lower cost and much faster propagation, and this is especially the case with the popularity of online social networks. This paper focuses on the modeling and analysis of WOM marketing. A dynamic model, known as the SIPNS model, capturing WOM marketing processes with both positive and negative comments is established. On this basis, a measure of the overall profit of a WOM marketing campaign is proposed. The SIPNS model is shown to admit a unique equilibrium, and the equilibrium is determined. The impact of different factors on the equilibrium of the SIPNS model is illuminated through theoretical analysis. Extensive experimental results suggest that the equilibrium is very likely to be globally attracting. Finally, the influence of different factors on the expected overall profit of a WOM marketing campaign is ascertained both theoretically and experimentally. Thereby, some promotion strategies are recommended. To our knowledge, this is the first time WOM marketing has been treated in this way.

  5. Cost effectiveness analysis comparing repetitive transcranial magnetic stimulation to antidepressant medications after a first treatment failure for major depressive disorder in newly diagnosed patients - A lifetime analysis.

    PubMed

    Voigt, Jeffrey; Carpenter, Linda; Leuchter, Andrew

    2017-01-01

    Repetitive Transcranial Magnetic Stimulation (rTMS) is commonly used for the treatment of Major Depressive Disorder (MDD) after patients have failed to benefit from trials of multiple antidepressant medications. No analysis to date has examined the cost-effectiveness of rTMS used earlier in the course of treatment and over a patient's lifetime. We used lifetime Markov simulation modeling to compare the direct costs and quality-adjusted life years (QALYs) of rTMS and medication therapy in patients with newly diagnosed MDD (ages 20-59) who had failed to benefit from one pharmacotherapy trial. Patients' life expectancies, rates of response and remission, and quality of life outcomes were derived from the literature, and treatment costs were based upon published Medicare reimbursement data. Baseline costs, aggregate per-year quality of life assessments (QALYs), Monte Carlo simulation, tornado analysis, assessment of dominance, and one-way sensitivity analysis were also performed. The discount rate applied was 3%. Lifetime direct treatment costs and QALYs identified rTMS as the dominant therapy compared to antidepressant medications (i.e., lower costs with better outcomes) in all age ranges, with costs/improved QALYs ranging from $2,952/0.32 (older patients) to $11,140/0.43 (younger patients). One-way sensitivity analysis demonstrated that the model was most sensitive to the input variables of cost per rTMS session, monthly prescription drug cost, and the number of rTMS sessions per year. rTMS was identified as the dominant therapy compared to antidepressant medication trials over the lifetime of adults with MDD, given current costs of treatment. These models support the use of rTMS after a single failed antidepressant medication trial versus further attempts at medication treatment in adults with MDD.
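
    A minimal sketch of the Markov cohort mechanics described above: annual cycles over health states, 3% discounting, and an ICER as the ratio of incremental cost to incremental QALYs. Every transition probability, cost, and utility below is an invented placeholder, not a model input from the study.

        import numpy as np

        # Minimal Markov cohort sketch (states: remission, depression, dead).

        def run(trans, cost, utility, cycles=40, disc=0.03):
            state = np.array([0.0, 1.0, 0.0])        # everyone starts depressed
            total_cost = total_qaly = 0.0
            for year in range(cycles):
                d = (1 + disc) ** -year              # 3% annual discounting
                total_cost += d * state @ cost
                total_qaly += d * state @ utility
                state = state @ trans                # advance one annual cycle
            return total_cost, total_qaly

        #                       rem    dep    dead
        trans_rtms = np.array([[0.75, 0.24, 0.01],
                               [0.45, 0.54, 0.01],
                               [0.00, 0.00, 1.00]])
        trans_meds = trans_rtms.copy()
        trans_meds[1, 0], trans_meds[1, 1] = 0.35, 0.64   # lower remission rate

        cost_rtms = np.array([1500.0, 9000.0, 0.0])   # annual cost by state
        cost_meds = np.array([1200.0, 9500.0, 0.0])
        utility = np.array([0.85, 0.55, 0.0])         # QALY weight by state

        c1, q1 = run(trans_rtms, cost_rtms, utility)
        c0, q0 = run(trans_meds, cost_meds, utility)
        print(f"ICER = {(c1 - c0) / (q1 - q0):,.0f} per QALY"
              if q1 != q0 else "equal QALYs")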

  6. Interactions of cisplatin analogues with lysozyme: a comparative analysis.

    PubMed

    Ferraro, Giarita; De Benedictis, Ilaria; Malfitano, Annamaria; Morelli, Giancarlo; Novellino, Ettore; Marasco, Daniela

    2017-10-01

    The biophysical characterization of drug binding to proteins plays a key role in structural biology and in drug discovery and optimization. The search for optimal combinations of biophysical techniques that can correctly and efficiently identify and quantify binding of metal-based drugs to their final target is challenging, due to the physicochemical properties of these agents. Different cisplatin derivatives have shown different cytotoxicities in the most common cancer lines, suggesting that they exert their biological activity via different mechanisms of action. Here we carried out a comparative analysis, studying the behaviours of three Pt-compounds under the same experimental conditions and binding assays to properly probe the determinants of their different mechanisms of action. Specifically, we compared the results obtained using surface plasmon resonance, isothermal titration calorimetry, fluorescence spectroscopy and thermal shift assays based on circular dichroism experiments in the characterization of the adducts formed upon reaction of cisplatin, carboplatin and the iodinated analogue of cisplatin, cis-Pt(NH3)2I2, with the model protein hen egg white lysozyme, at both neutral and acid pH. We further considered the applicability of the employed techniques to the study of the thermodynamics and kinetics of the reaction of a metallodrug with a protein, and asked what information can be obtained by combining these analyses. The data were discussed in the light of the existing structural data collected on the platinated protein.

  7. Replica Analysis for Portfolio Optimization with Single-Factor Model

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2017-06-01

    In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model, and compare the findings obtained from our proposed method for correlated return rates with those obtained for independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures from operations research for minimizing the investment risk.

  8. Comparative optical analysis of cylindrical solar concentrators.

    PubMed

    Durán, J C; Nicolás, R O

    1987-02-01

    A comparison of the intensity distribution in the receiver plane for five different types of cylindrical concentrators is made. To this end, our previous 2-D optical analysis for nonperfect concentrators with plane receivers is used. Values of the local and mean concentration factors for a characteristic set of parameters of each concentrator are obtained and compared. The results show that the cylindrical-parabolic concentrator attains the highest concentration factors among the concentrators considered.

  9. Mind and consciousness in yoga – Vedanta: A comparative analysis with western psychological concepts

    PubMed Central

    Prabhu, H. R. Aravinda; Bhat, P. S.

    2013-01-01

    Study of mind and consciousness through established scientific methods is often difficult due to the observed-observer dichotomy. Cartesian approach of dualism considering the mind and matter as two diverse and unconnected entities has been questioned by oriental schools of Yoga and Vedanta as well as the recent quantum theories of modern physics. Freudian and Neo-freudian schools based on the Cartesian model have been criticized by the humanistic schools which come much closer to the vedantic approach of unitariness. A comparative analysis of the two approaches is discussed. PMID:23858252

  10. Comparative Time Series Analysis of Aerosol Optical Depth over Sites in United States and China Using ARIMA Modeling

    NASA Astrophysics Data System (ADS)

    Li, X.; Zhang, C.; Li, W.

    2017-12-01

    Long-term spatiotemporal analysis and modeling of aerosol optical depth (AOD) distributions is of paramount importance for studying radiative forcing, climate change, and human health. This study focuses on the trends and variations of AOD over six stations located in the United States and China during 2003 to 2015, using satellite-retrieved Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 retrievals and ground measurements from the Aerosol Robotic Network (AERONET). An autoregressive integrated moving average (ARIMA) model is applied to simulate and predict AOD values. The R2, adjusted R2, Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Bayesian Information Criterion (BIC) are used as indices to select the best-fitted model. Results show that there is a persistent decreasing trend in AOD for both MODIS data and AERONET data over three stations. Monthly and seasonal AOD variations reveal consistent aerosol patterns over stations along mid-latitudes. Regional differences impacted by climatology and land cover types are observed for the selected stations. Statistical validation of the time series models indicates that the non-seasonal ARIMA model performs better for AERONET AOD data than for MODIS AOD data over most stations, suggesting the method works better for data with higher quality. By contrast, the seasonal ARIMA model reproduces the seasonal variations of the MODIS AOD data much more precisely. Overall, the reasonably predicted results indicate the applicability and feasibility of the stochastic ARIMA modeling technique for forecasting future and missing AOD values.
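
    A minimal sketch of the model-selection step on a synthetic monthly AOD-like series, comparing a non-seasonal and a seasonal ARIMA specification by BIC and holdout RMSE; the series, orders, and train/test split are illustrative assumptions, not the study's configuration.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        # Synthetic monthly AOD-like series (trend + annual cycle + noise),
        # standing in for a MODIS or AERONET station record, 2003-2015.
        rng = np.random.default_rng(11)
        idx = pd.date_range("2003-01", periods=156, freq="MS")
        aod = (0.25 - 0.0004 * np.arange(156)
               + 0.08 * np.sin(2 * np.pi * np.arange(156) / 12)
               + rng.normal(0, 0.03, 156))
        series = pd.Series(aod, index=idx)

        train, test = series[:-12], series[-12:]
        candidates = {
            "ARIMA(1,1,1)": dict(order=(1, 1, 1)),
            "SARIMA(1,0,1)x(0,1,1,12)": dict(order=(1, 0, 1),
                                             seasonal_order=(0, 1, 1, 12)),
        }
        for name, kw in candidates.items():
            res = ARIMA(train, **kw).fit()
            rmse = np.sqrt(np.mean((res.forecast(steps=12) - test) ** 2))
            print(f"{name}: BIC = {res.bic:.1f}, holdout RMSE = {rmse:.4f}")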

  11. The October 1973 NASA mission model analysis and economic assessment

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Results are presented of the 1973 NASA Mission Model Analysis. The purpose was to obtain an economic assessment of using the Shuttle to accommodate the payloads and requirements as identified by the NASA Program Offices and the DoD. The 1973 Payload Model represents a baseline candidate set of future payloads which can be used as a reference base for planning purposes. The cost of implementing these payload programs utilizing the capabilities of the shuttle system is analyzed and compared with the cost of conducting the same payload effort using expendable launch vehicles. There is a net benefit of 14.1 billion dollars as a result of using the shuttle during the 12-year period as compared to using an expendable launch vehicle fleet.

  12. Closed-form model for the analysis of W-type shaped charges

    NASA Astrophysics Data System (ADS)

    Mahdian, A.; Ghayour, M.; Liaghat, G. H.

    2013-09-01

    This paper presents a closed-form model for the analysis of symmetric planar W-type shaped charges (WSCs) with two V-sections, which produce two primary cores and two primary jets. If these two V-sections have proper asymmetry, these primary cores will force two primary jets into a secondary core formed on the axis of symmetry of a planar symmetric WSC. For the analysis of such a planar WSC, a complete generalized model for an asymmetric planar V-shaped charge (VSC) with any desired order of asymmetry is mandatory. In this paper, the model is applied to describe the secondary jet formation in the WSC. By presenting a closed-form analysis of the WSC, the secondary jet specifications can be easily evaluated and, thus, can be compared with respect to the jet quantities in symmetric or asymmetric VSCs. Finally, for the primary and secondary jets, the coherency conditions are investigated, and the critical parameters responsible for these conditions are determined.

  13. Comparative study of wine tannin classification using Fourier transform mid-infrared spectrometry and sensory analysis.

    PubMed

    Fernández, Katherina; Labarca, Ximena; Bordeu, Edmundo; Guesalaga, Andrés; Agosin, Eduardo

    2007-11-01

    Wine tannins are fundamental to the determination of wine quality. However, the chemical and sensory analysis of these compounds is not straightforward, and a simple and rapid technique is necessary. We analyzed the mid-infrared spectra of white, red, and model wines spiked with known amounts of skin or seed tannins, collected using Fourier transform mid-infrared (FT-MIR) transmission spectroscopy (400-4000 cm⁻¹). The spectral data were classified according to their tannin source, skin or seed, and tannin concentration by means of discriminant analysis (DA) and soft independent modeling of class analogy (SIMCA) to obtain a probabilistic classification. Wines were also classified sensorially by a trained panel and compared with FT-MIR. SIMCA models gave the most accurate classification (over 97%) and prediction (over 60%) among the wine samples. The prediction rate increased (to over 73%) using the leave-one-out cross-validation technique. Sensory classification of the wines was less accurate than that obtained with FT-MIR and SIMCA. Overall, these results show the potential of FT-MIR spectroscopy, in combination with adequate statistical tools, to discriminate wines with different tannin levels.
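
    A rough sketch of the classification step on synthetic "spectra", using PCA-compressed linear discriminant analysis with leave-one-out cross-validation; LDA here is a stand-in for the paper's DA/SIMCA tooling, and the data and class-specific band shift are invented.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline

        # Synthetic "spectra": 60 samples x 400 wavenumber points, two tannin
        # sources (skin vs seed) separated by a small systematic band shift.
        rng = np.random.default_rng(2)
        X = rng.normal(0, 1, (60, 400))
        y = np.repeat([0, 1], 30)                    # 0 = skin, 1 = seed
        X[y == 1, 100:140] += 0.6                    # class-specific absorbance

        # PCA compresses the spectra before DA, as is usual for spectral data.
        clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
        acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
        print(f"leave-one-out accuracy: {acc:.2f}")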

  14. Comparative Reannotation of 21 Aspergillus Genomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salamov, Asaf; Riley, Robert; Kuo, Alan

    2013-03-08

    We used comparative gene modeling to reannotate 21 Aspergillus genomes. Initial automatic annotation of individual genomes may contain errors of a different nature, e.g. missing genes, incorrect exon-intron structures, and 'chimeras', which fuse two or more real genes or, alternatively, split some real genes into two or more models. The main premise behind the comparative modeling approach is that for closely related genomes most orthologous families have the same conserved gene structure. The algorithm maps all gene models predicted in each individual Aspergillus genome to the other genomes and, for each locus, selects from potentially many competing models the one which most closely resembles the orthologous genes from other genomes. This procedure is iterated until no further change in gene models is observed. For the Aspergillus genomes we predicted in total 4503 new gene models (~2% per genome), supported by comparative analysis, and additionally corrected ~18% of old gene models. This resulted in a total of 4065 more genes with annotated PFAM domains (a ~3% increase per genome). Analysis of a few genomes with EST/transcriptomics data shows that the new annotation sets also have a higher number of EST-supported splice sites at exon-intron boundaries.

  15. Bilateral Cavernous Nerve Crush Injury in the Rat Model: A Comparative Review of Pharmacologic Interventions.

    PubMed

    Haney, Nora M; Nguyen, Hoang M T; Honda, Matthew; Abdel-Mageed, Asim B; Hellstrom, Wayne J G

    2018-04-01

    It is common for men to develop erectile dysfunction after radical prostatectomy. The anatomy of the rat allows the cavernous nerve (CN) to be identified, dissected, and injured in a controlled fashion. Therefore, bilateral CN injury (BCNI) in the rat model is routinely used to study post-prostatectomy erectile dysfunction. To compare and contrast the available literature on pharmacologic intervention after BCNI in the rat. A literature search was performed on PubMed for cavernous nerve and injury and erectile dysfunction and rat. Only articles with BCNI and pharmacologic intervention that could be grouped into categories of immune modulation, growth factor therapy, receptor kinase inhibition, phosphodiesterase type 5 inhibition, and anti-inflammatory and antifibrotic interventions were included. To assess outcomes of pharmaceutical intervention on erectile function recovery after BCNI in the rat model. The ratio of maximum intracavernous pressure to mean arterial pressure was the main outcome measure chosen for this analysis. All interventions improved erectile function recovery after BCNI based on the ratio of maximum intracavernous pressure to mean arterial pressure results. Additional end-point analysis examined the corpus cavernosa and/or the major pelvic ganglion and CN. There was extreme heterogeneity within the literature, making accurate comparisons between crush injury and therapeutic interventions difficult. BCNI in the rat is the accepted animal model used to study nerve-sparing post-prostatectomy erectile dysfunction. However, an important limitation is extreme variability. Efforts should be made to decrease this variability and increase the translational utility toward clinical trials in humans. Haney NM, Nguyen HMT, Honda M, et al. Bilateral Cavernous Nerve Crush Injury in the Rat Model: A Comparative Review of Pharmacologic Interventions. Sex Med Rev 2018;6:234-241. Copyright © 2017 International Society for Sexual Medicine. Published by Elsevier

  16. A segmentation/clustering model for the analysis of array CGH data.

    PubMed

    Picard, F; Robin, S; Lebarbier, E; Daudin, J-J

    2007-09-01

    Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
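
    The dynamic programming half of DP-EM can be sketched in isolation: exact least-squares segmentation of a one-dimensional profile into K segments (the EM step that assigns segments to copy-number clusters is omitted). The log-ratio profile below is synthetic.

        import numpy as np

        def dp_segment(x, K):
            """Optimal least-squares segmentation of 1-D signal x into K segments.
            Returns the interior breakpoints; O(K * n^2) dynamic programming."""
            n = len(x)
            s1 = np.concatenate([[0.0], np.cumsum(x)])
            s2 = np.concatenate([[0.0], np.cumsum(x ** 2)])

            def sse(i, j):  # within-segment squared error for x[i:j]
                m = j - i
                return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

            D = np.full((K + 1, n + 1), np.inf)
            back = np.zeros((K + 1, n + 1), dtype=int)
            D[0, 0] = 0.0
            for k in range(1, K + 1):
                for j in range(k, n + 1):
                    costs = [D[k - 1, i] + sse(i, j) for i in range(k - 1, j)]
                    i_best = int(np.argmin(costs)) + (k - 1)
                    D[k, j], back[k, j] = costs[i_best - (k - 1)], i_best
            bps, j = [], n
            for k in range(K, 0, -1):
                j = back[k, j]
                bps.append(j)
            return sorted(bps[:-1])   # drop the leading 0

        # Synthetic log-ratio profile with two copy-number changes.
        rng = np.random.default_rng(4)
        x = np.concatenate([rng.normal(0.0, 0.1, 80),
                            rng.normal(0.6, 0.1, 40),
                            rng.normal(-0.4, 0.1, 80)])
        print(dp_segment(x, K=3))   # breakpoints near 80 and 120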

  17. Two new methods to fit models for network meta-analysis with random inconsistency effects.

    PubMed

    Law, Martin; Jackson, Dan; Turner, Rebecca; Rhodes, Kirsty; Viechtbauer, Wolfgang

    2016-07-28

    Meta-analysis is a valuable tool for combining evidence from multiple studies. Network meta-analysis is becoming more widely used as a means to compare multiple treatments in the same analysis. However, a network meta-analysis may exhibit inconsistency, whereby the treatment effect estimates do not agree across all trial designs, even after taking between-study heterogeneity into account. We propose two new estimation methods for network meta-analysis models with random inconsistency effects. The model we consider is an extension of the conventional random-effects model for meta-analysis to the network meta-analysis setting and allows for potential inconsistency using random inconsistency effects. Our first new estimation method uses a Bayesian framework with empirically-based prior distributions for both the heterogeneity and the inconsistency variances. We fit the model using importance sampling and thereby avoid some of the difficulties that might be associated with using Markov Chain Monte Carlo (MCMC). However, we confirm the accuracy of our importance sampling method by comparing the results to those obtained using MCMC as the gold standard. The second new estimation method we describe uses a likelihood-based approach, implemented in the metafor package, which can be used to obtain (restricted) maximum-likelihood estimates of the model parameters and profile likelihood confidence intervals of the variance components. We illustrate the application of the methods using two contrasting examples. The first uses all-cause mortality as an outcome, and shows little evidence of between-study heterogeneity or inconsistency. The second uses "ear discharge" as an outcome, and exhibits substantial between-study heterogeneity and inconsistency. Both new estimation methods give results similar to those obtained using MCMC. The extent of heterogeneity and inconsistency should be assessed and reported in any network meta-analysis. Our two new methods can be used to fit

  18. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach

    PubMed Central

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2018-01-01

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach, and has several attractive features compared to the existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591

  19. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    PubMed

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with the existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
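
    The closed-form likelihood mentioned in both entries above is easy to sketch for the margins: the beta-binomial log-pmf in terms of log-Beta functions, with a composite log-likelihood summing the sensitivity and specificity margins. The per-study counts are invented, and the correlation component of the full composite likelihood is omitted.

        import numpy as np
        from scipy.special import betaln, gammaln
        from scipy.optimize import minimize

        def log_betabinom(k, n, a, b):
            """Closed-form beta-binomial log-pmf: C(n,k) B(k+a, n-k+b) / B(a,b)."""
            return (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
                    + betaln(k + a, n - k + b) - betaln(a, b))

        # Invented per-study counts: (true positives, diseased) and
        # (true negatives, non-diseased), i.e., the two margins.
        tp = np.array([45, 30, 60, 22])
        n1 = np.array([50, 40, 70, 30])
        tn = np.array([80, 55, 90, 40])
        n0 = np.array([90, 60, 100, 50])

        def neg_composite_loglik(theta):
            a1, b1, a0, b0 = np.exp(theta)          # keep parameters positive
            return -(log_betabinom(tp, n1, a1, b1).sum()
                     + log_betabinom(tn, n0, a0, b0).sum())

        fit = minimize(neg_composite_loglik, x0=np.zeros(4), method="Nelder-Mead")
        a1, b1, a0, b0 = np.exp(fit.x)
        print(f"mean sensitivity = {a1 / (a1 + b1):.3f}, "
              f"mean specificity = {a0 / (a0 + b0):.3f}")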

  20. Macro-level pedestrian and bicycle crash analysis: Incorporating spatial spillover effects in dual state count models.

    PubMed

    Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed

    2016-08-01

    This study explores the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zone (TAZ)-based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models, such as the zero-inflated negative binomial and hurdle negative binomial models (with and without spatial effects), are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than the models that did not. Across the models with spatial spillover effects, the dual-state models, especially the zero-inflated negative binomial model, offered better performance compared to single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ, as well as of neighboring TAZs, on pedestrian and bicycle crash frequency. Copyright © 2016 Elsevier Ltd. All rights reserved.
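
    A rough sketch of fitting one of the dual-state models above, a zero-inflated negative binomial, with statsmodels on synthetic zone-level data; the covariate names, coefficients, and zero-inflation share are invented, and the paper's models also include spatial spillover terms not shown here.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

        # Synthetic TAZ-level data: exposure covariates and crash counts with
        # excess zeros; variable names are illustrative, not the paper's.
        rng = np.random.default_rng(8)
        n = 500
        vmt = rng.lognormal(1, 0.5, n)              # vehicle-miles travelled
        pop_density = rng.lognormal(2, 0.7, n)

        mu = np.exp(-1.0 + 0.5 * np.log(vmt) + 0.3 * np.log(pop_density))
        never = rng.random(n) < 0.30                # structural zero state
        counts = np.where(never, 0, rng.poisson(mu * rng.gamma(2.0, 0.5, n)))

        X = sm.add_constant(np.column_stack([np.log(vmt), np.log(pop_density)]))
        zinb = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X, p=2)
        res = zinb.fit(method="bfgs", maxiter=500, disp=False)
        print(res.summary())

        # Compare with a single-state NB via information criteria, as the
        # paper does across model forms.
        nb = sm.NegativeBinomial(counts, X).fit(disp=False)
        print(f"AIC: ZINB = {res.aic:.1f}, NB = {nb.aic:.1f}")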

  1. Models to compare management options for a protogynous fish.

    PubMed

    Heppell, Selina S; Heppell, Scott A; Coleman, Felicia C; Koenig, Christopher C

    2006-02-01

    Populations of gag (Mycteroperca microlepis), a hermaphroditic grouper, have experienced a dramatic shift in sex ratio over the past 25 years due to a decline in older age classes. The highly female-skewed sex ratio can be predicted as a consequence of increased fishing mortality that truncates the age distribution, and raises some concern about the overall fitness of the population. Management efforts may need to be directed toward maintenance of sex ratio as well as stock size, with evaluations of recruitment based on sex ratio or male stock size in addition to the traditional female-based stock-recruitment relationship. We used two stochastic, age-structured models to heuristically compare the effects of reducing fishing mortality on different life history stages and the relative impact of reductions in fertilization rates that may occur with highly skewed sex ratios. Our response variables included population size, sex ratio, lost egg fertility, and female spawning stock biomass. Population growth rates were highest for scenarios that reduced mortality for female gag (nearshore closure), while improved sex ratios were obtained most quickly with spawning reserves. The effect of reduced fertility through sex ratio bias was generally low but depended on the management scenario employed. Our results demonstrate the utility of evaluation of fishery management scenarios through model analysis and simulation, the synergistic interaction of life history and response to changes in mortality rates, and the importance of defining management goals.

  2. Comparative study of smile analysis by subjective and computerized methods.

    PubMed

    Basting, Roberta Tarkany; da Trindade, Rita de Cássia Silva; Flório, Flávia Martão

    2006-01-01

    This study compared: 1) the subjective analyses of a smile performed by specialists with advanced training and by general dentists; 2) the subjective analysis of the smile alone, or associated with the face, by specialists with advanced training and by general dentists; and 3) subjective analysis against a computerized analysis of the smile by specialists with advanced training, assessing the midline, labial line, smile line, the line between commissures and the golden proportion. The sample consisted of 100 adults with natural dentition; 200 photographs were taken (100 of the smile and 100 of the entire face). Computerized analysis using AutoCAD software was performed, together with the subjective analyses of two groups of professionals (three general dentists and three specialists with advanced training), using the following assessment factors: the midline, labial line, smile line, line between the commissures and the golden proportion. The smile itself, and the smile associated with the entire face, were recorded as being agreeable or not agreeable by the professionals. The McNemar test showed a highly significant difference (p=0.0000) between the subjective analyses performed by specialists and those by general dentists. Between the two groups of dental professionals, highly significant differences (p=0.0000) were found between the subjective analyses of the smile and of the face. The McNemar test showed statistical differences in all factors assessed, with the exception of the midline (p=0.1951), when the computerized analysis and the subjective analysis of the specialists were compared. In order to establish harmony of the smile, it was not possible to establish a greater or lesser relevance among the factors analyzed.

  3. Model-based Bayesian inference for ROC data analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a covariate variable taking the two values zero and one to express binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by a Markov Chain Monte Carlo (MCMC) method carried out with Bayesian inference Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach considers model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) the samples are drawn from be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied, and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
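
    A stripped-down stand-in for the sampling machinery described above: a random-walk Metropolis sampler for the binormal model (negative scores N(0,1), positive scores N(mu, sigma)) with vague Gaussian priors, rather than BUGS with adaptive rejection sampling; all data and tuning constants below are synthetic.

        import numpy as np
        from scipy.stats import norm

        # Synthetic scores: negatives N(0, 1), positives N(mu, sigma).
        rng = np.random.default_rng(9)
        neg = rng.normal(0.0, 1.0, 200)
        pos = rng.normal(1.2, 1.4, 200)              # "true" mu=1.2, sigma=1.4

        def log_post(mu, log_sigma):
            sigma = np.exp(log_sigma)
            loglik = (-0.5 * np.sum(((pos - mu) / sigma) ** 2)
                      - pos.size * np.log(sigma)
                      - 0.5 * np.sum(neg ** 2))
            logprior = -0.5 * (mu ** 2 / 100.0 + log_sigma ** 2 / 100.0)
            return loglik + logprior                 # vague Gaussian priors

        theta = np.array([0.0, 0.0])                 # (mu, log sigma)
        lp = log_post(*theta)
        samples = []
        for it in range(20000):
            prop = theta + rng.normal(0, 0.05, 2)    # random-walk proposal
            lp_prop = log_post(*prop)
            if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept step
                theta, lp = prop, lp_prop
            if it >= 5000:                           # discard burn-in
                samples.append(theta)

        mu_hat, ls_hat = np.mean(samples, axis=0)
        # Binormal AUC = Phi(mu / sqrt(1 + sigma^2)).
        auc = norm.cdf(mu_hat / np.sqrt(1 + np.exp(ls_hat) ** 2))
        print(f"posterior mean mu = {mu_hat:.2f}, "
              f"sigma = {np.exp(ls_hat):.2f}, AUC = {auc:.3f}")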

  4. The top skin-associated genes: a comparative analysis of human and mouse skin transcriptomes.

    PubMed

    Gerber, Peter Arne; Buhren, Bettina Alexandra; Schrumpf, Holger; Homey, Bernhard; Zlotnik, Albert; Hevezi, Peter

    2014-06-01

    The mouse represents a key model system for the study of the physiology and biochemistry of skin. Comparison of skin between mouse and human is critical for interpretation and application of data from mouse experiments to human disease. Here, we review the current knowledge on the structure and immunology of mouse and human skin. Moreover, we present a systematic comparison of human and mouse skin transcriptomes. To this end, we have recently used a genome-wide database of human gene expression to identify genes highly expressed in skin, with no or limited expression elsewhere - human skin-associated genes (hSAGs). Analysis of our set of hSAGs allowed us to generate a comprehensive molecular characterization of healthy human skin. Here, we used a similar database to generate a list of mouse skin-associated genes (mSAGs). A comparative analysis between the top human (n=666) and mouse (n=873) skin-associated genes (SAGs) revealed only 30.2% identity between the two lists. The majority of shared genes encode proteins that participate in structural and barrier functions. Analysis of the top functional annotation terms revealed an overlap for morphogenesis, cell adhesion, structure, and signal transduction. The results of this analysis, discussed in the context of published data, illustrate the divergence between the molecular make-up of the skin of the two species and offer a probable explanation for why results generated in murine in vivo models often fail to translate to humans.
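
    The percent-identity comparison between gene lists reduces to simple set arithmetic. A minimal sketch follows, with placeholder gene symbols rather than the actual hSAG/mSAG sets, and with identity computed over the union of the two lists (the paper's exact denominator may differ).

        # Placeholder gene symbols; mouse symbols are conventionally mixed-case.
        h_sags = {"KRT14", "FLG", "LOR", "DSG1", "KRT5"}
        m_sags = {"Krt14", "Flg", "Lor", "Sprr2a", "Krt6b"}

        # Compare case-insensitively so human/mouse orthologs can match.
        h = {g.upper() for g in h_sags}
        m = {g.upper() for g in m_sags}

        shared = h & m
        identity = 100.0 * len(shared) / len(h | m)
        print(f"shared genes: {sorted(shared)}; identity: {identity:.1f}%")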

  5. Transient analysis of a superconducting AC generator using the compensated 2-D model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chun, Y.D.; Lee, H.W.; Lee, J.

    1999-09-01

    A superconducting AC generator (SCG) has many advantages over conventional generators, such as reduced width and size, improved efficiency, and better steady-state stability. The paper presents a 2-D transient analysis of an SCG using the finite element method (FEM). The compensated 2-D model, obtained by lengthening the airgap of the original 2-D model, is proposed for accurate and efficient transient analysis. The accuracy of the compensated 2-D model is verified by a small error of 6.4% relative to experimental data. The transient characteristics of the 30 kVA SCG model have been investigated in detail, and the damper performance for various design parameters is examined.

  6. A De-Novo Genome Analysis Pipeline (DeNoGAP) for large-scale comparative prokaryotic genomics studies.

    PubMed

    Thakur, Shalabh; Guttman, David S

    2016-06-30

    Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While there are a number of excellent tools developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de-novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes, since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly, since homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package includes a script for automated installation of the necessary external programs on Ubuntu Linux; however, the pipeline should also be compatible with other Linux and Unix systems once the necessary external programs are installed. DeNoGAP is freely available at https://sourceforge.net/projects/denogap/ .
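
    The linear-scaling claim comes from assigning each new protein to existing family models instead of comparing it against every previously seen protein. The toy sketch below illustrates that assignment loop only; a crude string-similarity score stands in for an HMM scan (DeNoGAP itself iteratively refines hidden Markov models), and the threshold is hypothetical.

        from difflib import SequenceMatcher

        THRESHOLD = 0.7      # hypothetical similarity cutoff for family membership
        families = []        # each family keeps one representative sequence

        def similarity(a, b):
            return SequenceMatcher(None, a, b).ratio()

        def assign(protein):
            """Assign a protein to an existing family or found a new one."""
            for fam in families:
                if similarity(protein, fam["rep"]) >= THRESHOLD:
                    fam["members"].append(protein)   # a real pipeline would refine
                    return fam                       # the family model (HMM) here
            fam = {"rep": protein, "members": [protein]}
            families.append(fam)
            return fam

        for seq in ["MKTAYIAKQR", "MKTAYIAKQK", "MSLFDKELVA"]:   # toy proteins
            assign(seq)
        print(len(families), "families")                          # -> 2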

  7. The effects of videotape modeling on staff acquisition of functional analysis methodology.

    PubMed

    Moore, James W; Fisher, Wayne W

    2007-01-01

    Lectures and two types of video modeling were compared to determine their relative effectiveness in training 3 staff members to conduct functional analysis sessions. Video modeling that contained a larger number of therapist exemplars resulted in mastery-level performance eight of the nine times it was introduced, whereas neither lectures nor partial video modeling produced significant improvements in performance. Results demonstrated that video modeling provided an effective training strategy but only when a wide range of exemplars of potential therapist behaviors were depicted in the videotape.

  8. Point-based and model-based geolocation analysis of airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Sefercik, Umut Gunes; Buyuksalih, Gurcan; Jacobsen, Karsten; Alkan, Mehmet

    2017-01-01

    Airborne laser scanning (ALS) is one of the most effective remote sensing technologies providing precise three-dimensional (3-D) dense point clouds. A large ALS digital surface model (DSM) covering the whole Istanbul province was analyzed by point-based and model-based comprehensive statistical approaches. Point-based analysis was performed using checkpoints on flat areas. Model-based approaches were implemented in two steps: strip-to-strip comparison of overlapping ALS DSMs individually in three subareas, and comparison of the merged ALS DSMs with terrestrial laser scanning (TLS) DSMs in four other subareas. In the model-based approach, the standard deviation of height and the normalized median absolute deviation were used as accuracy indicators, combined with their dependency on terrain inclination. The results demonstrate that terrain roughness has a strong impact on the vertical accuracy of ALS DSMs. The relative horizontal shifts, determined and partially improved by merging the overlapping strips and by comparing the ALS and TLS data, were found not to be negligible. The analysis of the ALS DSM in relation to the TLS DSM allowed us to determine the characteristics of the DSM in detail.
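
    For reference, the two model-based accuracy indicators named above can be computed from the height differences dh between overlapping DSMs, as in this minimal sketch (values hypothetical; 1.4826 is the usual consistency factor that makes the NMAD comparable to a standard deviation under normality).

        import numpy as np

        dh = np.array([0.05, -0.12, 0.08, 0.40, -0.03, 0.10])   # DSM height diffs [m]

        sz = dh.std(ddof=1)                                     # standard deviation of height
        nmad = 1.4826 * np.median(np.abs(dh - np.median(dh)))   # robust NMAD

        print(f"SZ = {sz:.3f} m, NMAD = {nmad:.3f} m")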

  9. Grassland and cropland net ecosystem production of the U.S. Great Plains: Regression tree model development and comparative analysis

    USGS Publications Warehouse

    Wylie, Bruce K.; Howard, Daniel; Dahal, Devendra; Gilmanov, Tagir; Ji, Lei; Zhang, Li; Smith, Kelcy

    2016-01-01

    This paper presents the methodology and results of two ecologically based net ecosystem production (NEP) regression tree models capable of upscaling measurements made at various flux tower sites throughout the U.S. Great Plains. Separate grassland and cropland NEP regression tree models were trained using various remote sensing data and other biogeophysical data, with 15 flux towers contributing to the grassland model and 15 flux towers to the cropland model. The models yielded weekly mean daily grassland and cropland NEP maps of the U.S. Great Plains at 250 m resolution for 2000–2008. The grassland and cropland NEP maps were spatially summarized and statistically compared. The results of this study indicate that grassland and cropland ecosystems generally performed as weak net carbon (C) sinks, absorbing more C from the atmosphere than they released from 2000 to 2008. Grasslands demonstrated higher carbon sink potential (139 g C·m⁻²·year⁻¹) than non-irrigated croplands. A closer look into the weekly time series reveals the C fluctuation through time and space for each land cover type.
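
    A regression tree maps gridded predictor values to flux-tower NEP, as in this minimal scikit-learn stand-in; the predictors, their ranges, and the synthetic response below are invented for illustration and do not reproduce the study's models.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(1)
        n = 200
        # Hypothetical predictors: NDVI, weekly precipitation [mm], air temp [C]
        X = np.column_stack([rng.uniform(0.1, 0.9, n),
                             rng.uniform(0.0, 60.0, n),
                             rng.uniform(-5.0, 35.0, n)])
        # Toy response: weekly mean daily NEP [g C m^-2 day^-1]
        y = (2.0 * X[:, 0] + 0.02 * X[:, 1]
             - 0.03 * np.abs(X[:, 2] - 20.0) + rng.normal(0.0, 0.2, n))

        tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, y)
        print("training R^2:", round(tree.score(X, y), 3))
        print("NEP at NDVI=0.6, P=30, T=18:", tree.predict([[0.6, 30.0, 18.0]]).round(3))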

  10. Nonlinear analysis of AS4/PEEK thermoplastic composite laminate using a one parameter plasticity model

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Yoon, K. J.

    1990-01-01

    A one-parameter plasticity model was shown to adequately describe the orthotropic plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The nonlinear stress-strain relations were measured and compared with those predicted by the finite element analysis using the one-parameter elastic-plastic constitutive model. The results show that the one-parameter orthotropic plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.

  11. Elastic-plastic analysis of AS4/PEEK composite laminate using a one-parameter plasticity model

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Yoon, K. J.

    1992-01-01

    A one-parameter plasticity model was shown to adequately describe the plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The elastic-plastic stress-strain relations of coupon specimens were measured and compared with those predicted by the finite element analysis using the one-parameter plasticity model. The results show that the one-parameter plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.

  12. Water vapor over Europe obtained from remote sensors and compared with a hydrostatic NWP model

    NASA Astrophysics Data System (ADS)

    Johnsen, K.-P.; Kidder, S. Q.

    Due to its high variability, water vapor is a crucial parameter in short-term numerical weather prediction. Integrated water vapor (IWV) data obtained from a network of ground-based Global Positioning System (GPS) receivers, mainly over Germany, and passive microwave measurements of the Advanced Microwave Sounding Unit (AMSU-A) are compared with the high-resolution regional weather forecast model HRM of the Deutscher Wetterdienst (DWD). Time series of the IWV at 74 GPS stations obtained during the first complete year of the GFZ/GPS network, between May 2000 and April 2001, are used together with colocated forecasts of the HRM model. The low bias (0.08 kg/m²) between the HRM model and the GPS data can mainly be explained by the bias between the ECMWF analysis data used to initialize the HRM model and the GPS data. The IWV standard deviation between the HRM model and the GPS data during that time is about 2.47 kg/m². GPS stations equipped with surface pressure sensors show about 0.29 kg/m² lower standard deviation compared with GPS stations with surface pressure interpolated from synoptic stations. The NOAA/NESDIS Total Precipitable Water algorithm is applied to obtain the IWV and to validate the model over the sea. While the mean IWV obtained from the HRM model is about 2.1 kg/m² larger than that from the AMSU-A data, the standard deviations of 2.46 kg/m² (NOAA-15) and 2.29 kg/m² (NOAA-16) are similar to the IWV standard deviation between the HRM model and the GPS data.
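
    The bias and standard deviation quoted above are plain statistics of the differences between colocated IWV series; a minimal sketch, with hypothetical values in kg/m²:

        import numpy as np

        iwv_hrm = np.array([14.2, 18.5, 22.1, 9.8, 16.4])   # model forecasts [kg/m^2]
        iwv_gps = np.array([14.0, 18.9, 21.5, 10.3, 16.6])  # GPS retrievals [kg/m^2]

        diff = iwv_hrm - iwv_gps
        print(f"bias = {diff.mean():+.2f} kg/m^2, std = {diff.std(ddof=1):.2f} kg/m^2")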

  13. Comparative analysis of radioecological monitoring dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobolev, A.I.; Pol'skii, O.G.; Shanin, O.B.

    1995-03-01

    This paper describes comparative estimates of radiation doses measured by two types of thermoluminescence dosimeters and two types of background radiation radiometers. The dosimetry systems were tested by simultaneously recording background radiation and standard radiation sources at a radioactive waste storage facility. Statistical analysis of the measurement results is summarized. The maximum recorded exposure dose rate for the experiment was 19 microrads per hour. The DTK-2 dosimeter overestimated dose rates by 6 to 43%, and the DTU-2 dosimeter underestimated dose rates by 7 to 21%. Both devices are recommended for radioecological monitoring in populated areas. 4 refs., 3 figs., 5 tabs.

  14. Wallerstein's World-Systems Analysis in Comparative Education: A Case Study

    ERIC Educational Resources Information Center

    Griffiths, Tom G.; Knezevic, Lisa

    2010-01-01

    Since the 1970s, using his world-systems analysis, Immanuel Wallerstein has developed a wide-ranging framework for the social sciences, with potential applications for comparative educational research. In this paper we outline key aspects of Wallerstein's theorising, and then analyse the uptake, understandings, and applications of his analysis in…

  15. Phenotype–genotype correlation in Hirschsprung disease is illuminated by comparative analysis of the RET protein sequence

    PubMed Central

    Kashuk, Carl S.; Stone, Eric A.; Grice, Elizabeth A.; Portnoy, Matthew E.; Green, Eric D.; Sidow, Arend; Chakravarti, Aravinda; McCallion, Andrew S.

    2005-01-01

    The ability to discriminate between deleterious and neutral amino acid substitutions in the genes of patients remains a significant challenge in human genetics. The increasing availability of genomic sequence data from multiple vertebrate species allows sequence conservation and physicochemical properties of residues to be used for functional prediction. In this study, the RET receptor tyrosine kinase serves as a model disease gene in which a broad spectrum (≥116) of disease-associated mutations has been identified among patients with Hirschsprung disease and multiple endocrine neoplasia type 2. We report the alignment of the human RET protein sequence with the orthologous sequences of 12 non-human vertebrates (eight mammalian, one avian, and three teleost species), their comparative analysis, the evolutionary topology of the RET protein, and predicted tolerance for all published missense mutations. We show that, although evolutionary conservation alone provides significant information to predict the effect of a RET mutation, a model that combines comparative sequence data with analysis of physicochemical properties in a quantitative framework provides far greater accuracy. Although the ability to discern the impact of a mutation is imperfect, our analyses permit substantial discrimination between predicted functional classes of RET mutations and disease severity even for a multigenic disease such as Hirschsprung disease. PMID:15956201

  16. Quantitative maps of genetic interactions in yeast - comparative evaluation and integrative analysis.

    PubMed

    Lindén, Rolf O; Eronen, Ville-Pekka; Aittokallio, Tero

    2011-03-24

    High-throughput genetic screening approaches have enabled systematic means to study how interactions among gene mutations contribute to quantitative fitness phenotypes, with the aim of providing insights into the functional wiring diagrams of genetic interaction networks on a global scale. However, it is poorly known how well these quantitative interaction measurements agree across the screening approaches, which hinders their integrated use toward improving the coverage and quality of the genetic interaction maps in yeast and other organisms. Using large-scale data matrices from epistatic miniarray profiling (E-MAP), genetic interaction mapping (GIM), and synthetic genetic array (SGA) approaches, we carried out here a systematic comparative evaluation among these quantitative maps of genetic interactions in yeast. The relatively low association between the original interaction measurements or their customized scores could be improved using a matrix-based modelling framework, which enables the use of single- and double-mutant fitness estimates and measurements, respectively, when scoring genetic interactions. Toward an integrative analysis, we show how the detections from the different screening approaches can be combined to suggest novel positive and negative interactions which are complementary to those obtained using any single screening approach alone. The matrix approximation procedure has been made available to support the design and analysis of the future screening studies. We have shown here that even if the correlation between the currently available quantitative genetic interaction maps in yeast is relatively low, their comparability can be improved by means of our computational matrix approximation procedure, which will enable integrative analysis and detection of a wider spectrum of genetic interactions using data from the complementary screening approaches.
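
    A common matrix formulation scores a genetic interaction as the deviation of the measured double-mutant fitness from the product of the single-mutant fitnesses. The sketch below shows that generic definition on toy data; it is not the authors' exact matrix-approximation procedure.

        import numpy as np

        f = np.array([0.9, 0.8, 1.0])         # single-mutant fitness estimates
        W = np.array([[0.81, 0.40, 0.90],     # measured double-mutant fitness matrix
                      [0.40, 0.64, 0.85],
                      [0.90, 0.85, 1.00]])

        expected = np.outer(f, f)             # multiplicative (no-interaction) model
        eps = W - expected                    # eps < 0: negative (synthetic sick) pair
        print(np.round(eps, 2))               # genes 0 and 1 interact negatively here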

  17. Ortholog Identification and Comparative Analysis of Microbial Genomes Using MBGD and RECOG.

    PubMed

    Uchiyama, Ikuo

    2017-01-01

    Comparative genomics is becoming an essential approach for identification of genes associated with a specific function or phenotype. Here, we introduce the microbial genome database for comparative analysis (MBGD), which is a comprehensive ortholog database among the microbial genomes available so far. MBGD contains several precomputed ortholog tables including the standard ortholog table covering the entire taxonomic range and taxon-specific ortholog tables for various major taxa. In addition, MBGD allows the users to create an ortholog table within any specified set of genomes through dynamic calculations. In particular, MBGD has a "My MBGD" mode where users can upload their original genome sequences and incorporate them into orthology analysis. The created ortholog table can serve as the basis for various comparative analyses. Here, we describe the use of MBGD and briefly explain how to utilize the orthology information during comparative genome analysis in combination with the stand-alone comparative genomics software RECOG, focusing on the application to comparison of closely related microbial genomes.

  18. Comparing a discrete and continuum model of the intestinal crypt

    PubMed Central

    Murray, Philip J.; Walter, Alex; Fletcher, Alex G.; Edwards, Carina M.; Tindall, Marcus J.; Maini, Philip K.

    2011-01-01

    The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalisations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts. PMID:21411869

  19. Comparative analysis of the intestinal flora in type 2 diabetes and nondiabetic mice

    PubMed Central

    Horie, Masanori; Miura, Takamasa; Hirakata, Satomi; Hosoyama, Akira; Sugino, Sakiko; Umeno, Aya; Murotomi, Kazutoshi; Yoshida, Yasukazu; Koike, Taisuke

    2017-01-01

    A relationship between type 2 diabetes mellitus (T2DM) and intestinal flora has been suggested since the development of analysis technology for intestinal flora. An animal model of T2DM is important for investigation of T2DM. Although there are some animal models of T2DM, a comparison of the intestinal flora of healthy animals with that of T2DM animals has not yet been reported. The intestinal flora of Tsumura Suzuki Obese Diabetes (TSOD) mice was compared with that of Tsumura Suzuki Non Obesity (TSNO) mice in the present study. The TSOD mice showed typical type 2 diabetes symptoms, which were high-fat diet-independent. The TSOD and the TSNO mouse models were derived from the same strain, ddY. In this study, we compared the intestinal flora of TSOD mice with that of TSNO mice at 5 and 12 weeks of age. We determined that the number of operational taxonomic units (OTUs) was significantly higher in the cecum of TSOD mice than in that of TSNO mice. The intestinal flora of the cecum and that of the feces were similar between the TSNO and the TSOD strains. The dominant bacteria in the cecum and feces were of the phyla Firmicutes and Bacteroidetes. However, the content of some bacterial species varied between the two strains. The percentage of Lactobacillus spp. within the general intestinal flora was higher in TSOD mice than in TSNO mice. In contrast, the percentages of the order Bacteroidales and the family Lachnospiraceae were higher in TSNO mice than in TSOD mice. Some species were observed only in TSOD mice, such as the genera Turicibacter and SMB53 (family Clostridiaceae), the percentages of which were 3.8% and 2.0%, respectively. Although further analysis of the metabolism of the individual bacteria in the intestinal flora is essential, the genera Turicibacter and SMB53 may be important for the abnormal metabolism of type 2 diabetes. PMID:28701620

  20. Comparative analysis of the intestinal flora in type 2 diabetes and nondiabetic mice.

    PubMed

    Horie, Masanori; Miura, Takamasa; Hirakata, Satomi; Hosoyama, Akira; Sugino, Sakiko; Umeno, Aya; Murotomi, Kazutoshi; Yoshida, Yasukazu; Koike, Taisuke

    2017-10-30

    A relationship between type 2 diabetes mellitus (T2DM) and intestinal flora has been suggested since the development of analysis technology for intestinal flora. An animal model of T2DM is important for investigation of T2DM. Although there are some animal models of T2DM, a comparison of the intestinal flora of healthy animals with that of T2DM animals has not yet been reported. The intestinal flora of Tsumura Suzuki Obese Diabetes (TSOD) mice was compared with that of Tsumura Suzuki Non Obesity (TSNO) mice in the present study. The TSOD mice showed typical type 2 diabetes symptoms, which were high-fat diet-independent. The TSOD and the TSNO mouse models were derived from the same strain, ddY. In this study, we compared the intestinal flora of TSOD mice with that of TSNO mice at 5 and 12 weeks of age. We determined that the number of operational taxonomic units (OTUs) was significantly higher in the cecum of TSOD mice than in that of TSNO mice. The intestinal flora of the cecum and that of the feces were similar between the TSNO and the TSOD strains. The dominant bacteria in the cecum and feces were of the phyla Firmicutes and Bacteroidetes. However, the content of some bacterial species varied between the two strains. The percentage of Lactobacillus spp. within the general intestinal flora was higher in TSOD mice than in TSNO mice. In contrast, the percentages of the order Bacteroidales and the family Lachnospiraceae were higher in TSNO mice than in TSOD mice. Some species were observed only in TSOD mice, such as the genera Turicibacter and SMB53 (family Clostridiaceae), the percentages of which were 3.8% and 2.0%, respectively. Although further analysis of the metabolism of the individual bacteria in the intestinal flora is essential, the genera Turicibacter and SMB53 may be important for the abnormal metabolism of type 2 diabetes.

  1. Financial analysis of technology acquisition using fractionated lasers as a model.

    PubMed

    Jutkowitz, Eric; Carniol, Paul J; Carniol, Alan R

    2010-08-01

    Ablative fractional lasers are among the most advanced and costly devices on the market. Yet, there is a dearth of published literature on the cost and potential return on investment (ROI) of such devices. The objective of this study was to provide a methodological framework for physicians to evaluate ROI. To facilitate this analysis, we conducted a case study on the potential ROI of eight ablative fractional lasers. In the base case analysis, a 5-year lease and a 3-year lease were assumed as the purchase option with a $0 down payment and 3-month payment deferral. In addition to lease payments, service contracts, labor cost, and disposables were included in the total cost estimate. Revenue was estimated as price per procedure multiplied by total number of procedures in a year. Sensitivity analyses were performed to account for variability in model assumptions. Based on the assumptions of the model, all lasers had higher ROI under the 5-year lease agreement compared with that for the 3-year lease agreement. When comparing results between lasers, those with lower operating and purchase cost delivered a higher ROI. Sensitivity analysis indicates the model is most sensitive to purchase method. If physicians opt to purchase the device rather than lease, they can significantly enhance ROI. ROI analysis is an important tool for physicians who are considering making an expensive device acquisition. However, physicians should not rely solely on ROI and must also consider the clinical benefits of a laser. (c) Thieme Medical Publishers.
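
    The ROI arithmetic described above amounts to comparing annual procedure revenue with lease, service, and per-case costs. A minimal sketch with invented figures (not the study's inputs):

        monthly_lease    = 3500.0   # hypothetical 5-year lease payment [$]
        service_contract = 12000.0  # hypothetical annual service cost [$]
        cost_per_case    = 150.0    # disposables + labor per procedure [$]
        price_per_case   = 900.0    # fee charged per procedure [$]
        cases_per_year   = 120

        annual_cost = (12 * monthly_lease + service_contract
                       + cost_per_case * cases_per_year)
        annual_revenue = price_per_case * cases_per_year
        roi = 100.0 * (annual_revenue - annual_cost) / annual_cost
        print(f"profit: ${annual_revenue - annual_cost:,.0f}/yr, ROI: {roi:.1f}%")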

  2. An adherence based cost-consequence model comparing bimatoprost 0.01% to bimatoprost 0.03%.

    PubMed

    Wong, William B; Patel, Vaishali D; Kowalski, Jonathan W; Schwartz, Gail

    2013-09-01

    Estimate the long-term direct medical costs and clinical consequences of improved adherence with bimatoprost 0.01% compared to bimatoprost 0.03% in the treatment of glaucoma. A cost-consequence model was constructed from the perspective of a US healthcare payer. The model structure included three adherence levels (high, moderate, low) and four mean deviation (MD) defined health states (mild, moderate, severe glaucoma, blindness) for each adherence level. Clinical efficacy in terms of IOP reduction was obtained from the randomized controlled trial comparing bimatoprost 0.01% with bimatoprost 0.03%. Medication adherence was based on observed 12 month rates from an analysis of a nationally representative pharmacy claims database. Patients with high, moderate and low adherence were assumed to receive 100%, 50% and 0% of the IOP reduction observed in the clinical trial, respectively. Each 1 mmHg reduction in IOP was assumed to result in a 10% reduction in the risk of glaucoma progression. Worse glaucoma severity health states were associated with higher medical resource costs. Outcome measures were total costs, proportion of patients who progress and who become blind, and years of blindness. Deterministic sensitivity analyses were performed on uncertain model parameters. The percentage of patients progressing, becoming blind, and the time spent blind slightly favored bimatoprost 0.01%. Improved adherence with bimatoprost 0.01% led to higher costs in the first 2 years; however, starting in year 3 bimatoprost 0.01% became less costly compared to bimatoprost 0.03%, with a total reduction in costs reaching US$3433 over a lifetime time horizon. Deterministic sensitivity analyses demonstrated that results were robust, with the majority of analyses favoring bimatoprost 0.01%. Application of 1 year adherence and efficacy over the long term are limitations. Modeling the effect of greater medication adherence with bimatoprost 0.01% compared with bimatoprost 0.03% suggests that improved adherence may yield better long-term glaucoma outcomes at a lower lifetime cost.

  3. Comparing models of the periodic variations in spin-down and beamwidth for PSR B1828-11

    NASA Astrophysics Data System (ADS)

    Ashton, G.; Jones, D. I.; Prix, R.

    2016-05-01

    We build a framework using tools from Bayesian data analysis to evaluate models explaining the periodic variations in spin-down and beamwidth of PSR B1828-11. The available data consist of the time-averaged spin-down rate, which displays a distinctive double-peaked modulation, and measurements of the beamwidth. Two concepts exist in the literature that are capable of explaining these variations; we formulate predictive models from these and quantitatively compare them. The first concept is phenomenological and stipulates that the magnetosphere undergoes periodic switching between two metastable states as first suggested by Lyne et al. The second concept, precession, was first considered as a candidate for the modulation of B1828-11 by Stairs et al. We quantitatively compare models built from these concepts using a Bayesian odds ratio. Because the phenomenological switching model itself was informed by these data in the first place, it is difficult to specify appropriate parameter-space priors that can be trusted for an unbiased model comparison. Therefore, we first perform a parameter estimation using the spin-down data, and then use the resulting posterior distributions as priors for model comparison on the beamwidth data. We find that a precession model with a simple circular Gaussian beam geometry fails to appropriately describe the data, while allowing for a more general beam geometry provides a good fit to the data. The resulting odds between the precession model (with a general beam geometry) and the switching model are estimated as 10^(2.7±0.5) in favour of the precession model.

  4. A Comparative study of two RVE modelling methods for chopped carbon fiber SMC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zhangxing; Li, Yi; Shao, Yimin

    To achieve vehicle light-weighting, the chopped carbon fiber sheet molding compound (SMC) is identified as a promising material to replace metals. However, there are no effective tools and methods to predict the mechanical properties of the chopped carbon fiber SMC due to the high complexity of its microstructure features and its anisotropic properties. In this paper, the Representative Volume Element (RVE) approach is used to model the SMC microstructure. Two modeling methods, the Voronoi diagram-based method and the chip packing method, are developed for material RVE property prediction. The two methods are compared in terms of the predicted elastic modulus, and the predicted results are validated using Digital Image Correlation (DIC) tensile test results. Furthermore, the advantages and shortcomings of these two methods are discussed in terms of the required input information and the convenience of use in the integrated processing-microstructure-property analysis.

  5. Multivariate Analysis and Modeling of Sediment Pollution Using Neural Network Models and Geostatistics

    NASA Astrophysics Data System (ADS)

    Golay, Jean; Kanevski, Mikhaïl

    2013-04-01

    The present research deals with the exploration and modeling of a complex dataset of 200 measurement points of sediment pollution by heavy metals in Lake Geneva. The fundamental idea was to use multivariate Artificial Neural Networks (ANN) along with geostatistical models and tools in order to improve the accuracy and the interpretability of data modeling. The results obtained with ANN were compared to those of traditional geostatistical algorithms like ordinary (co)kriging and (co)kriging with an external drift. Exploratory data analysis highlighted a great variety of relationships (i.e. linear, non-linear, independence) between the 11 variables of the dataset (i.e. Cadmium, Mercury, Zinc, Copper, Titanium, Chromium, Vanadium and Nickel, as well as the spatial coordinates of the measurement points and their depth). Then, exploratory spatial data analysis (i.e. anisotropic variography, local spatial correlations and moving window statistics) was carried out. It was shown that the different phenomena to be modeled were characterized by high spatial anisotropies, complex spatial correlation structures and heteroscedasticity. A feature selection procedure based on General Regression Neural Networks (GRNN) was also applied to create subsets of variables enabling improved predictions during the modeling phase. The basic modeling was conducted using a Multilayer Perceptron (MLP), which is a workhorse of ANN. MLP models are robust and highly flexible tools which can incorporate, in a nonlinear manner, different kinds of high-dimensional information. In the present research, the input layer was made of either two (spatial coordinates) or three neurons (when depth as auxiliary information could possibly capture an underlying trend) and the output layer was composed of one (univariate MLP) to eight neurons corresponding to the heavy metals of the dataset (multivariate MLP). MLP models with three input neurons can be referred to as Artificial Neural Networks with EXternal drift (ANNEX).

  6. Linearised and non-linearised isotherm models optimization analysis by error functions and statistical means

    PubMed Central

    2014-01-01

    In adsorption studies, describing the sorption process and evaluating the best-fitting isotherm model are key analyses for investigating the theoretical hypothesis. Hence, numerous statistical analyses have been extensively used to estimate the validity of the experimental equilibrium adsorption values against the predicted equilibrium values. Several statistical error analyses were carried out. In the present study, the following statistical analyses were carried out to evaluate the fitness of the adsorption isotherm models: the Pearson correlation, the coefficient of determination, and the Chi-square test. The ANOVA test was carried out to evaluate the significance of the various error functions, and the coefficient of dispersion was also evaluated for the linearised and non-linearised models. The adsorption of phenol onto natural soil (local name: Kalathur soil) was carried out in batch mode at 30 ± 2 °C. To estimate the isotherm parameters and obtain a holistic view of the analysis, linear and non-linear isotherm models were compared. The results revealed which of the above-mentioned error functions and statistical measures were best suited to determine the best-fitting isotherm. PMID:25018878
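
    As a concrete instance of the comparison described above, the sketch below fits the non-linear Langmuir isotherm with scipy and computes two of the named metrics, the coefficient of determination and the chi-square statistic. The equilibrium data are hypothetical, not the phenol/Kalathur-soil measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # equilibrium conc. [mg/L]
        qe = np.array([8.1, 13.0, 18.2, 22.5, 25.1])   # equilibrium uptake [mg/g]

        def langmuir(Ce, qmax, KL):
            return qmax * KL * Ce / (1.0 + KL * Ce)

        (qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[25.0, 0.05])
        pred = langmuir(Ce, qmax, KL)

        ss_res = np.sum((qe - pred) ** 2)
        r2 = 1.0 - ss_res / np.sum((qe - qe.mean()) ** 2)  # coeff. of determination
        chi2 = np.sum((qe - pred) ** 2 / pred)             # non-linear chi-square
        print(f"qmax={qmax:.2f} mg/g, KL={KL:.4f} L/mg, R^2={r2:.4f}, chi2={chi2:.4f}")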

  7. Model based population PK-PD analysis of furosemide for BP lowering effect: A comparative study in primary and secondary hypertension.

    PubMed

    Shukla, Mahendra; Ibrahim, Moustafa M A; Jain, Moon; Jaiswal, Swati; Sharma, Abhisheak; Hanif, Kashif; Lal, Jawahar

    2017-11-15

    Numerous reports have demonstrated multiple mechanisms by which furosemide can exert its anti-hypertensive response. However, the lack of studies describing the PK-PD relationship underlying its anti-hypertensive property has limited its usage as a blood pressure (BP) lowering agent. Serum concentrations and mean arterial BP were monitored following 40 and 80 mg/kg multiple oral doses of furosemide in spontaneously hypertensive rats (SHR) and DOCA-salt induced hypertensive (DOCA-salt) rats. A simultaneous population PK-PD relationship using an Emax model with an effect compartment was developed to compare the anti-hypertensive efficacy of furosemide in these rat models. A two-compartment PK model with Weibull-type absorption and first-order elimination best described the serum concentration-time profile of furosemide. In the present study, post-dose serum concentrations of furosemide were found to be lower than the EC50. The EC50 predicted in DOCA-salt rats was found to be lower (4.5-fold), whereas tolerance development was higher than that in the SHR model. The PK-PD parameter estimates, particularly the lower values of EC50, Ke and Q in DOCA-salt rats as compared to SHR, pinpointed the higher BP lowering efficacy of furosemide in volume-overload induced hypertensive conditions. Insignificantly altered serum creatinine and electrolyte levels indicated a favorable side effect profile of furosemide. In conclusion, the final PK-PD model described the data well and provides detailed insights into the use of furosemide as an anti-hypertensive agent. Copyright © 2017. Published by Elsevier B.V.
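
    The model structure named above, an Emax response driven by an effect compartment, can be sketched as follows; the PK function and all parameter values are illustrative, not the fitted population estimates.

        import numpy as np
        from scipy.integrate import odeint

        ke0, EC50, Emax, E0 = 0.3, 2.0, -40.0, 150.0   # 1/h, mg/L, mmHg, baseline MAP

        def conc(t):
            """Hypothetical one-compartment serum concentration [mg/L]."""
            return 10.0 * np.exp(-0.5 * t)

        def dCe_dt(Ce, t):
            return ke0 * (conc(t) - Ce)                # effect-compartment kinetics

        t = np.linspace(0.0, 24.0, 200)
        Ce = odeint(dCe_dt, 0.0, t).ravel()
        effect = E0 + Emax * Ce / (EC50 + Ce)          # Emax response (BP lowering)
        print(f"max BP drop: {effect.min() - E0:.1f} mmHg at t={t[effect.argmin()]:.1f} h")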

  8. Multivariate Probabilistic Analysis of an Hydrological Model

    NASA Astrophysics Data System (ADS)

    Franceschini, Samuela; Marani, Marco

    2010-05-01

    Model predictions derived from rainfall measurements and hydrological model results are often limited by the systematic error of measuring instruments, by the intrinsic variability of the natural processes, and by the uncertainty of the mathematical representation. We propose a means to identify such sources of uncertainty and to quantify their effects based on point-estimate approaches, as a valid alternative to cumbersome Monte Carlo methods. We present uncertainty analyses of the hydrologic response to selected meteorological events in the mountain streamflow-generating portion of the Brenta basin at Bassano del Grappa, Italy. The Brenta river catchment has a relatively uniform morphology and quite a heterogeneous rainfall pattern. In the present work, we evaluate two sources of uncertainty: data uncertainty (the uncertainty due to data handling and analysis) and model uncertainty (the uncertainty related to the formulation of the model). We thus evaluate the effects of the measurement error of tipping-bucket rain gauges, the uncertainty in estimating spatially distributed rainfall through block kriging, and the uncertainty associated with estimated model parameters. To this end, we coupled a deterministic model based on the geomorphological theory of the hydrologic response with probabilistic methods. In particular, we compare the results of Monte Carlo Simulations (MCS) to the results obtained, in the same conditions, using Li's Point Estimate Method (LiM). The LiM is a probabilistic technique that approximates the continuous probability distribution function of the considered stochastic variables by means of discrete points and associated weights. This allows results to be satisfactorily reproduced with only a few evaluations of the model function. The comparison between the LiM and MCS results highlights the pros and cons of using an approximating method. LiM is less computationally demanding than MCS, but has limited applicability especially when the model
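
    Li's point estimate method generalizes the classic two-point scheme of Rosenblueth, which conveys the core idea: replace the input distribution by a few weighted points and propagate only those through the model. A minimal sketch with a toy response function (not the Brenta-basin model):

        import numpy as np

        mu, sigma = 50.0, 10.0                  # one uncertain input, e.g. rainfall

        def g(x):
            """Toy hydrologic response function."""
            return 0.002 * x ** 2 + 0.5 * x

        # Rosenblueth: evaluate g at mu +/- sigma, each with weight 1/2
        lo, hi = g(mu - sigma), g(mu + sigma)
        pem_mean = 0.5 * (lo + hi)
        pem_var = 0.5 * ((lo - pem_mean) ** 2 + (hi - pem_mean) ** 2)

        # Monte Carlo reference with many evaluations of g
        x = np.random.default_rng(0).normal(mu, sigma, 100_000)
        print(f"PEM: mean={pem_mean:.2f}, var={pem_var:.2f}  (2 model runs)")
        print(f"MCS: mean={g(x).mean():.2f}, var={g(x).var():.2f}  (100000 runs)")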

  9. Comparing the Effectiveness of Polymer Debriding Devices Using a Porcine Wound Biofilm Model.

    PubMed

    Wilkinson, Holly N; McBain, Andrew J; Stephenson, Christian; Hardman, Matthew J

    2016-11-01

    Objective: Debridement to remove necrotic and/or infected tissue and promote active healing remains a cornerstone of contemporary chronic wound management. While there has been a recent shift toward less invasive polymer-based debriding devices, their efficacy requires rigorous evaluation. Approach: This study was designed to directly compare monofilament debriding devices to traditional gauze using a wounded porcine skin biofilm model with standardized application parameters. Biofilm removal was determined using a surface viability assay, bacterial counts, histological assessment, and scanning electron microscopy (SEM). Results: Quantitative analysis revealed that monofilament debriding devices outperformed the standard gauze, resulting in up to 100-fold greater reduction in bacterial counts. Interestingly, histological and morphological analyses suggested that debridement not only removed bacteria, but also differentially disrupted the bacterially-derived extracellular polymeric substance. Finally, SEM of post-debridement monofilaments showed structural changes in attached bacteria, implying a negative impact on viability. Innovation: This is the first study to combine controlled and defined debridement application with a biologically relevant ex vivo biofilm model to directly compare monofilament debriding devices. Conclusion: These data support the use of monofilament debriding devices for the removal of established wound biofilms and suggest variable efficacy towards biofilms composed of different species of bacteria.

  10. Comparing the Effectiveness of Polymer Debriding Devices Using a Porcine Wound Biofilm Model

    PubMed Central

    Wilkinson, Holly N.; McBain, Andrew J.; Stephenson, Christian; Hardman, Matthew J.

    2016-01-01

    Objective: Debridement to remove necrotic and/or infected tissue and promote active healing remains a cornerstone of contemporary chronic wound management. While there has been a recent shift toward less invasive polymer-based debriding devices, their efficacy requires rigorous evaluation. Approach: This study was designed to directly compare monofilament debriding devices to traditional gauze using a wounded porcine skin biofilm model with standardized application parameters. Biofilm removal was determined using a surface viability assay, bacterial counts, histological assessment, and scanning electron microscopy (SEM). Results: Quantitative analysis revealed that monofilament debriding devices outperformed the standard gauze, resulting in up to 100-fold greater reduction in bacterial counts. Interestingly, histological and morphological analyses suggested that debridement not only removed bacteria, but also differentially disrupted the bacterially-derived extracellular polymeric substance. Finally, SEM of post-debridement monofilaments showed structural changes in attached bacteria, implying a negative impact on viability. Innovation: This is the first study to combine controlled and defined debridement application with a biologically relevant ex vivo biofilm model to directly compare monofilament debriding devices. Conclusion: These data support the use of monofilament debriding devices for the removal of established wound biofilms and suggest variable efficacy towards biofilms composed of different species of bacteria. PMID:27867752

  11. Principal process analysis of biological models.

    PubMed

    Casagranda, Stefano; Touzeau, Suzanne; Ropers, Delphine; Gouzé, Jean-Luc

    2018-06-14

    Understanding the dynamical behaviour of biological systems is challenged by their large number of components and interactions. While efforts have been made in this direction to reduce model complexity, they often prove insufficient to grasp which and when model processes play a crucial role. Answering these questions is fundamental to unravel the functioning of living organisms. We design a method for dealing with model complexity, based on the analysis of dynamical models by means of Principal Process Analysis. We apply the method to a well-known model of circadian rhythms in mammals. The knowledge of the system trajectories allows us to decompose the system dynamics into processes that are active or inactive with respect to a certain threshold value. Process activities are graphically represented by Boolean and Dynamical Process Maps. We detect model processes that are always inactive, or inactive on some time interval. Eliminating these processes reduces the complex dynamics of the original model to the much simpler dynamics of the core processes, in a succession of sub-models that are easier to analyse. We quantify by means of global relative errors the extent to which the simplified models reproduce the main features of the original system dynamics and apply global sensitivity analysis to test the influence of model parameters on the errors. The results obtained prove the robustness of the method. The analysis of the sub-model dynamics allows us to identify the source of circadian oscillations. We find that the negative feedback loop involving proteins PER, CRY, CLOCK-BMAL1 is the main oscillator, in agreement with previous modelling and experimental studies. In conclusion, Principal Process Analysis is a simple-to-use method, which constitutes an additional and useful tool for analysing the complex dynamical behaviour of biological systems.

  12. Modelling Thin Film Microbending: A Comparative Study of Three Different Approaches

    NASA Astrophysics Data System (ADS)

    Aifantis, Katerina E.; Nikitas, Nikos; Zaiser, Michael

    2011-09-01

    Constitutive models which describe crystal microplasticity in a continuum framework can be envisaged as average representations of the dynamics of dislocation systems. Thus, their performance needs to be assessed not only by their ability to correctly represent stress-strain characteristics on the specimen scale but also by their ability to correctly represent the evolution of internal stress and strain patterns. In the present comparative study we consider the bending of a free-standing thin film. We compare the results of three-dimensional discrete dislocation dynamics (DDD) simulations with those obtained from a simple 1D gradient plasticity model and a more complex dislocation-based continuum model. Both models correctly reproduce the nontrivial strain patterns predicted by DDD for the microbending problem.

  13. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as the outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
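
    A random-intercept linear mixed model of the kind used in approach (c) can be fitted with statsmodels, as in this minimal sketch on simulated data. Note that GRAMMAR-style analyses encode relatedness through a kinship-based covariance, whereas this toy example uses only a simple per-subject random intercept.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n_subj, n_visits = 30, 4
        df = pd.DataFrame({
            "subject": np.repeat(np.arange(n_subj), n_visits),
            "snp": np.repeat(rng.integers(0, 3, n_subj), n_visits),  # 0/1/2 genotype
            "age": np.tile(np.arange(n_visits) * 2 + 40, n_subj),
        })
        subj_eff = np.repeat(rng.normal(0, 5, n_subj), n_visits)     # random intercepts
        df["sbp"] = (120 + 3 * df["snp"] + 0.5 * df["age"]
                     + subj_eff + rng.normal(0, 4, len(df)))

        # Fixed effects: genotype and age; random intercept per subject
        result = smf.mixedlm("sbp ~ snp + age", df, groups=df["subject"]).fit()
        print(result.summary())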

  14. When Theory Meets Data: Comparing Model Predictions Of Hillslope Sediment Size With Field Measurements.

    NASA Astrophysics Data System (ADS)

    Mahmoudi, M.; Sklar, L. S.; Leclere, S.; Davis, J. D.; Stine, A.

    2017-12-01

    The size distributions of sediment produced on hillslopes and supplied to river channels influence a wide range of fluvial processes, from bedrock river incision to the creation of aquatic habitats. However, the factors that control hillslope sediment size are poorly understood, limiting our ability to predict sediment size and model the evolution of sediment size distributions across landscapes. Recently, separate field and theoretical investigations have begun to address this knowledge gap. Here we compare the predictions of several emerging modeling approaches to landscapes where high quality field data are available. Our goals are to explore the sensitivity and applicability of the theoretical models in each field context, and ultimately to provide a foundation for incorporating hillslope sediment size into models of landscape evolution. The field data include published measurements of hillslope sediment size from the Kohala peninsula on the island of Hawaii and tributaries to the Feather River in the northern Sierra Nevada mountains of California, and an unpublished data set from the Inyo Creek catchment of the southern Sierra Nevada. These data are compared to predictions adapted from recently published modeling approaches that include elements of topography, geology, structure, climate and erosion rate. Predictive models for each site are built in ArcGIS using field condition datasets: DEM topography (slope, aspect, curvature), bedrock geology (lithology, mineralogy), structure (fault location, fracture density), climate data (mean annual precipitation and temperature), and estimates of erosion rates. Preliminary analysis suggests that models may be finely tuned to the calibration sites, particularly when field conditions most closely satisfy model assumptions, leading to unrealistic predictions from extrapolation. We suggest a path forward for developing a computationally tractable method for incorporating spatial variation in the production of hillslope sediment size into models of landscape evolution.

  15. A computational approach to compare regression modelling strategies in prediction research.

    PubMed

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fitting, assessing, and adjusting a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set, and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9% to 94.9%, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
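
    The case-study assessment step, comparing strategies by out-of-sample Brier score, looks roughly like this sketch on simulated data; the two strategies here (effectively unpenalized vs. ridge-shrunk logistic regression) are simple stand-ins for the five strategies of the paper.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import brier_score_loss
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 5))
        logit = X @ np.array([1.0, -0.5, 0.0, 0.0, 0.3]) + rng.normal(0.0, 1.0, 500)
        y = (logit > 0).astype(int)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        strategies = {
            "plain": LogisticRegression(C=1e6),   # effectively unpenalized
            "ridge": LogisticRegression(C=0.1),   # shrinkage strategy
        }
        for name, model in strategies.items():
            p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
            print(name, round(brier_score_loss(y_te, p), 4))   # lower is better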

  16. A Comparative Analysis of MMPI-2 Malingering Detection Models among Inmates

    ERIC Educational Resources Information Center

    Steffan, Jarrod S.; Morgan, Robert D.; Lee, Jeahoon; Sellbom, Martin

    2010-01-01

    There are several strategies, or models, for combining the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) validity indicators to detect malingered psychiatric symptoms. Some scholars have recommended that an elevated F (Infrequency) score should be followed by the inspection of Fp (Infrequency-Psychopathology), whereas a recent…

  17. Cancer cells growing on perfused 3D collagen model produced higher reactive oxygen species level and were more resistant to cisplatin compared to the 2D model.

    PubMed

    Liu, Qingxi; Zhang, Zijiang; Liu, Yupeng; Cui, Zhanfeng; Zhang, Tongcun; Li, Zhaohui; Ma, Wenjian

    2018-03-01

    Three-dimensional (3D) collagen scaffold models, due to their ability to mimic tissue and organ structure in vivo, have received increasing interest in drug discovery and toxicity evaluation. In this study, we developed a perfused 3D model and studied the cellular response to cytotoxic drugs in comparison with traditional 2D cell cultures, using the cancer drug cisplatin for evaluation. Cancer cells grown in the perfused 3D environment showed increased levels of reactive oxygen species (ROS) production compared to the 2D culture. As determined by growth analysis, cells in the 3D culture, after forming a spheroid, were more resistant to the cancer drug cisplatin than cells in the 2D culture. In addition, 3D-cultured cells showed an elevated level of ROS, indicating a physiological change or the formation of a microenvironment that resembles tumor cells in vivo. These data revealed that the cellular response to drugs for cells growing in 3D environments is dramatically different from that of 2D-cultured cells. Thus, the perfused 3D collagen scaffold model we report here may be a very useful tool for drug analysis.

  18. Intelligent Decisions Need Intelligent Choice of Models and Data - a Bayesian Justifiability Analysis for Models with Vastly Different Complexity

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.

    2016-12-01

    Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
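
    The "model confusion matrix" idea can be illustrated on toy models: generate synthetic data from each candidate, compute Bayesian model weights for every candidate on those data, and average. The two models below (constant vs. linear trend, with crude grid-based evidences and a known noise level) are stand-ins for the study's aquifer models.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 20)
        sigma = 0.5                        # noise level, assumed known

        def simulate(model):
            """Draw one synthetic data set from a candidate model."""
            mean = 1.0 if model == "constant" else 1.0 + 2.0 * x
            return mean + rng.normal(0.0, sigma, x.size)

        def log_evidence(model, y):
            """Grid-based marginal likelihood under uniform parameter priors."""
            if model == "constant":
                c, b = np.linspace(-2.0, 4.0, 61), np.zeros(61)
            else:
                cc, bb = np.meshgrid(np.linspace(-2.0, 4.0, 31),
                                     np.linspace(-1.0, 5.0, 31))
                c, b = cc.ravel(), bb.ravel()
            mu = c[:, None] + b[:, None] * x              # grid of model curves
            lls = (-0.5 * ((y - mu) / sigma) ** 2).sum(axis=1) \
                  - y.size * np.log(sigma * np.sqrt(2.0 * np.pi))
            return np.logaddexp.reduce(lls) - np.log(lls.size)

        models = ["constant", "linear"]
        confusion = np.zeros((2, 2))
        for i, true_model in enumerate(models):    # rows: data-generating model
            for _ in range(50):
                y = simulate(true_model)
                lz = np.array([log_evidence(m, y) for m in models])
                confusion[i] += np.exp(lz - np.logaddexp.reduce(lz)) / 50.0
        print(np.round(confusion, 2))              # strong diagonal = identifiable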

  19. The Effects of Videotape Modeling on Staff Acquisition of Functional Analysis Methodology

    PubMed Central

    Moore, James W; Fisher, Wayne W

    2007-01-01

    Lectures and two types of video modeling were compared to determine their relative effectiveness in training 3 staff members to conduct functional analysis sessions. Video modeling that contained a larger number of therapist exemplars resulted in mastery-level performance eight of the nine times it was introduced, whereas neither lectures nor partial video modeling produced significant improvements in performance. Results demonstrated that video modeling provided an effective training strategy but only when a wide range of exemplars of potential therapist behaviors were depicted in the videotape. PMID:17471805

  20. A Comparative Analysis of Charter Schools and Traditional Public Schools

    ERIC Educational Resources Information Center

    Smith, Jodi Renee Abbott

    2014-01-01

    The focus of this descriptive research study was to compare charter and traditional public schools on the academic knowledge of fifth grade students as measured by Arizona's Instrument to Measure Standards (AIMS) in a suburb of a large southwestern city. This analysis also compared charter and traditional public schools on AYP status. It was…

  1. Cost effectiveness analysis comparing repetitive transcranial magnetic stimulation to antidepressant medications after a first treatment failure for major depressive disorder in newly diagnosed patients – A lifetime analysis

    PubMed Central

    2017-01-01

    Objective: Repetitive Transcranial Magnetic Stimulation (rTMS) is commonly used for the treatment of Major Depressive Disorder (MDD) after patients have failed to benefit from trials of multiple antidepressant medications. No analysis to date has examined the cost-effectiveness of rTMS used earlier in the course of treatment and over a patient's lifetime. Methods: We used lifetime Markov simulation modeling to compare the direct costs and quality-adjusted life years (QALYs) of rTMS and medication therapy in patients with newly diagnosed MDD (ages 20-59) who had failed to benefit from one pharmacotherapy trial. Patients' life expectancies, rates of response and remission, and quality-of-life outcomes were derived from the literature, and treatment costs were based upon published Medicare reimbursement data. Baseline costs, aggregate per-year quality-of-life assessments (QALYs), Monte Carlo simulation, tornado analysis, assessment of dominance, and one-way sensitivity analysis were also performed. The discount rate applied was 3%. Results: Lifetime direct treatment costs and QALYs identified rTMS as the dominant therapy compared to antidepressant medications (i.e., lower costs with better outcomes) in all age ranges, with costs/improved QALYs ranging from $2,952/0.32 (older patients) to $11,140/0.43 (younger patients). One-way sensitivity analysis demonstrated that the model was most sensitive to the input variables of cost per rTMS session, monthly prescription drug cost, and the number of rTMS sessions per year. Conclusion: rTMS was identified as the dominant therapy compared with further antidepressant medication trials across the lifespan of adults with MDD, given current costs of treatment. These models support the use of rTMS after a single failed antidepressant medication trial versus further attempts at medication treatment in adults with MDD. PMID:29073256
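
    As a rough illustration of the lifetime Markov approach (not the paper's actual model), the sketch below runs a three-state, annual-cycle cohort model with 3% discounting; all state names, transition probabilities, costs, and utilities are invented for demonstration:

    ```python
    import numpy as np

    states = ["remission", "depression", "on-retreatment"]
    # hypothetical annual transition matrices (rows sum to 1)
    P_rtms = np.array([[0.80, 0.15, 0.05],
                       [0.45, 0.40, 0.15],
                       [0.50, 0.30, 0.20]])
    P_meds = np.array([[0.75, 0.20, 0.05],
                       [0.30, 0.55, 0.15],
                       [0.40, 0.40, 0.20]])
    annual_cost = {"rTMS": np.array([800.0, 6000.0, 9000.0]),   # USD/state, invented
                   "meds": np.array([1200.0, 6500.0, 8000.0])}
    utility = np.array([0.85, 0.55, 0.60])                      # QALY weights, invented
    r = 0.03                                                    # 3% discount rate

    def lifetime(P, cost, horizon=40):
        dist = np.array([0.0, 1.0, 0.0])       # cohort starts in depression
        total_cost = total_qaly = 0.0
        for t in range(horizon):
            disc = 1.0 / (1.0 + r) ** t
            total_cost += disc * dist @ cost   # discounted expected cost this cycle
            total_qaly += disc * dist @ utility
            dist = dist @ P                    # advance one annual cycle
        return total_cost, total_qaly

    c1, q1 = lifetime(P_rtms, annual_cost["rTMS"])
    c2, q2 = lifetime(P_meds, annual_cost["meds"])
    print(f"rTMS dominant (cheaper and more QALYs): {c1 < c2 and q1 > q2}")
    ```

    Dominance in the paper's sense corresponds exactly to the final check: lower discounted cost together with higher discounted QALYs.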

  2. 15 years of comet photometry: A comparative analysis of 80 comets

    NASA Technical Reports Server (NTRS)

    Osip, David J.; Schleicher, David G.; Millis, Robert L.; A'Hearn, Michael F.; Birch, Peter V.

    1991-01-01

    In 1976, a program of narrowband photometry of comets was initiated that has encompassed well over 400 nights of observations. To date, the program has provided detailed information on 80 comets, 11 of which were observed during multiple apparitions. The filters (initially isolating CN, C2, and continuum, and later including C3, OH, and NH) as well as the detectors used for the observations were changed over time, and the parameters adopted in the reduction and modeling of the data have likewise evolved. Accordingly, we have re-reduced the entire database and have derived production rates using current values for scalelengths and fluorescence efficiencies. Having completed this task, the results for different comets can now be meaningfully compared. The general characteristics that are discussed include ranges in composition (molecular production rate ratios) and dustiness (gas production compared with Af(rho)). Additionally, trends in how the production rates vary with heliocentric distance, and pre- and post-perihelion asymmetries in the production rates of individual comets, are analyzed. Possible taxonomic groupings are also described.
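
    For reference, the dust proxy mentioned above can be computed from aperture photometry using the standard A'Hearn et al. (1984) definition; a small sketch (variable names ours, both fluxes assumed to be measured in the same continuum bandpass):

    ```python
    def af_rho(f_comet, f_sun_1au, r_h, delta_cm, rho_cm):
        """Afrho (cm) = (2 * r_h * Delta)^2 * F_comet / (rho * F_sun),
        with r_h in AU, Delta (geocentric distance) and rho (aperture radius
        at the comet) in cm, and F_sun the solar flux at 1 AU in the same band."""
        return (2.0 * r_h * delta_cm) ** 2 * f_comet / (rho_cm * f_sun_1au)
    ```

    Because Afrho folds the albedo, filling factor, and aperture size into one aperture-independent quantity (for a canonical 1/rho coma), ratios of gas production to Afrho give the "dustiness" comparisons discussed in the abstract.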

  3. Comparative modeling of InP solar cell structures

    NASA Technical Reports Server (NTRS)

    Jain, R. K.; Weinberg, I.; Flood, D. J.

    1991-01-01

    The comparative modeling of p(+)n and n(+)p indium phosphide solar cell structures is studied using the numerical program PC-1D. The optimal design study predicts that the p(+)n structure offers improved cell efficiencies compared to the n(+)p structure, owing to a higher open-circuit voltage. The cell material and process parameters required to achieve the maximum cell efficiencies are reported. The effect of selected cell parameters on InP cell I-V characteristics was studied. The available radiation resistance data on n(+)p and p(+)n InP solar cells are also critically discussed.
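
    The open-circuit-voltage argument can be made concrete with the ideal-diode relation V_oc = (nkT/q) ln(J_sc/J_0 + 1); the sketch below uses invented current densities purely to show how a lower saturation current (lower recombination) raises V_oc:

    ```python
    import numpy as np

    k_B, q = 1.380649e-23, 1.602176634e-19   # Boltzmann constant (J/K), charge (C)

    def v_oc(j_sc, j_0, temp=300.0, n=1.0):
        """Ideal-diode open-circuit voltage in volts."""
        return n * k_B * temp / q * np.log(j_sc / j_0 + 1.0)

    # hypothetical short-circuit and saturation current densities (A/cm^2):
    print(v_oc(35e-3, 1e-12))   # higher J_0 -> lower V_oc (~0.63 V)
    print(v_oc(35e-3, 1e-15))   # lower  J_0 -> higher V_oc (~0.81 V)
    ```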

  4. Modeling and analysis of transport in the mammary glands

    NASA Astrophysics Data System (ADS)

    Quezada, Ana; Vafai, Kambiz

    2014-08-01

    The transport of three toxins moving from the bloodstream into the ducts of the mammary glands is analyzed in this work. The model predictions are compared with experimental data from the literature. The utility of the model lies in its potential to improve our understanding of toxin transport as a predisposing factor for breast cancer. This work is based on a multi-layer transport model to analyze the toxins present in breast milk. Compared with other sampling strategies, breast milk allows us to understand the mass transport of toxins once they are inside the bloodstream of breastfeeding women. The multi-layer model presented describes the transport of caffeine, DDT and cimetidine, taking into account the unique transport mechanisms of each toxin. Our model predicts the movement of toxins and/or drugs within the mammary glands as well as their bioaccumulation in the tissues.
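
    A schematic of the multi-layer idea, assuming steady-state diffusion with layer resistances in series (a textbook simplification, not the authors' full transient model; all layer names and parameter values are hypothetical):

    ```python
    def steady_flux(c_blood, c_milk, thickness, diffusivity, partition):
        """Steady-state flux (e.g., mol m^-2 s^-1) across stacked tissue layers.
        Each layer contributes a resistance L / (K * D), added in series."""
        resistance = sum(L / (K * D)
                         for L, D, K in zip(thickness, diffusivity, partition))
        return (c_blood - c_milk) / resistance

    # three hypothetical layers: capillary wall, interstitium, ductal epithelium
    flux = steady_flux(c_blood=1.0, c_milk=0.1,
                       thickness=[1e-6, 5e-6, 2e-6],        # m
                       diffusivity=[1e-10, 5e-10, 8e-11],   # m^2/s
                       partition=[1.0, 1.0, 0.7])           # dimensionless
    print(flux)
    ```

    Toxin-specific mechanisms (e.g., active transport of cimetidine) would replace or augment individual layer resistances rather than the series structure itself.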

  5. The use of docking-based comparative intermolecular contacts analysis to identify optimal docking conditions within glucokinase and to discover new GK activators

    NASA Astrophysics Data System (ADS)

    Taha, Mutasem O.; Habash, Maha; Khanfar, Mohammad A.

    2014-05-01

    Glucokinase (GK) is involved in normal glucose homeostasis and is therefore a valid target for drug design and discovery efforts. GK activators (GKAs) have excellent potential as treatments for hyperglycemia and diabetes. The recent interest in GKAs, together with docking limitations and the shortage of docking validation methods, prompted us to use our new 3D-QSAR analysis, namely docking-based comparative intermolecular contacts analysis (dbCICA), to validate docking configurations performed on a group of GKAs within the GK binding site. dbCICA assesses the consistency of docking by measuring the correlation between ligands' affinities and their contacts with binding-site spots. Optimal dbCICA models were validated by receiver operating characteristic curve analysis and comparative molecular field analysis. The dbCICA models were also converted into valid pharmacophores that were used as search queries to mine 3D structural databases for new GKAs. The search yielded several potent activators that experimentally increased GK bioactivity by up to 7.5-fold at 10 μM.
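
    The core dbCICA idea, correlating ligand affinities with per-spot docking contacts, can be caricatured in a few lines; the contact and affinity values below are random placeholders, and the published method involves considerably more elaborate contact scoring and model selection:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_ligands, n_spots = 30, 50
    # contacts[i, j] = 1 if docked ligand i touches binding-site spot j
    contacts = rng.integers(0, 2, size=(n_ligands, n_spots))
    affinity = rng.normal(size=n_ligands)        # e.g., pKd values (placeholder)

    # correlation of each spot's contact pattern with measured affinity
    r = np.array([np.corrcoef(contacts[:, j], affinity)[0, 1]
                  for j in range(n_spots)])
    hot_spots = np.argsort(-np.abs(r))[:5]       # spots most predictive of binding
    print(hot_spots, r[hot_spots])
    ```

    A docking configuration is judged "consistent" when such contact-affinity correlations are strong; the highest-scoring spots then seed the pharmacophore used for database mining.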

  6. A general framework for the use of logistic regression models in meta-analysis.

    PubMed

    Simmonds, Mark C; Higgins, Julian Pt

    2016-12-01

    Where individual participant data are available for every randomised trial in a meta-analysis of dichotomous event outcomes, "one-stage" random-effects logistic regression models have been proposed as a way to analyse these data. Such models can also be used even when individual participant data are not available and we have only summary contingency table data. One benefit of this one-stage regression model over conventional meta-analysis methods is that it maximises the correct binomial likelihood for the data and so does not require the common assumption that effect estimates are normally distributed. A second benefit of using this model is that it may be applied, with only minor modification, in a range of meta-analytic scenarios, including meta-regression, network meta-analyses and meta-analyses of diagnostic test accuracy. This single model can potentially replace the variety of often complex methods used in these areas. This paper considers, with a range of meta-analysis examples, how random-effects logistic regression models may be used in a number of different types of meta-analyses. This one-stage approach is compared with widely used meta-analysis methods including Bayesian network meta-analysis and the bivariate and hierarchical summary receiver operating characteristic (ROC) models for meta-analyses of diagnostic test accuracy. © The Author(s) 2014.
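
    A minimal sketch of a one-stage random-effects logistic regression fitted by maximum likelihood, with fixed trial intercepts, a normally distributed treatment effect integrated out by Gauss-Hermite quadrature, and invented 2x2 summary data; real analyses would typically use a dedicated mixed-model package:

    ```python
    import numpy as np
    from numpy.polynomial.hermite import hermgauss
    from scipy.optimize import minimize
    from scipy.special import expit
    from scipy.stats import binom

    # invented summary data: events r and sample sizes n, [control, treatment] per trial
    r = np.array([[12, 8], [30, 21], [7, 4], [18, 15]])
    n = np.array([[100, 100], [250, 250], [80, 80], [150, 150]])
    K = r.shape[0]
    nodes, wts = hermgauss(15)            # quadrature rule against exp(-x^2)

    def neg_loglik(params):
        alpha, theta, tau = params[:K], params[K], np.exp(params[K + 1])
        total = 0.0
        for i in range(K):
            u = np.sqrt(2.0) * tau * nodes          # random-effect values at the nodes
            p0 = expit(alpha[i])                    # control-arm event probability
            p1 = expit(alpha[i] + theta + u)        # treatment arm, one per node
            log_li = (binom.logpmf(r[i, 0], n[i, 0], p0)
                      + binom.logpmf(r[i, 1], n[i, 1], p1))
            # marginal likelihood of trial i: (1/sqrt(pi)) * sum_k w_k * L(u_k)
            total += np.log(np.sum(wts * np.exp(log_li)) / np.sqrt(np.pi))
        return -total

    fit = minimize(neg_loglik, np.zeros(K + 2), method="Nelder-Mead",
                   options={"maxiter": 5000})
    print("pooled log-odds ratio:", fit.x[K], "tau:", np.exp(fit.x[K + 1]))
    ```

    Because the binomial likelihood is maximized directly, no normality assumption on the trial-level effect estimates is needed, which is the benefit the abstract highlights.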

  7. Permeability model of sintered porous media: analysis and experiments

    NASA Astrophysics Data System (ADS)

    Flórez Mera, Juan Pablo; Chiamulera, Maria E.; Mantelli, Marcia B. H.

    2017-11-01

    In this paper, the permeability of porous media fabricated by a copper powder sintering process was modeled and measured, with the aim of using porosity as the input parameter for predicting the permeability of sintered porous media. An expression relating the powder particle mean diameter to the permeability was obtained, based on an elementary porous-media cell, physically represented by a duct formed by an arrangement of spherical particles in a simple or orthorhombic packing. A circular duct with variable cross-section was used to model the fluid flow within the porous medium, applying the concept of the hydraulic diameter; the pore is thus modeled as a converging-diverging duct. The electrical-circuit analogy was employed to determine two hydraulic resistances of the cell: one based on the Navier-Stokes equations and one on Darcy's law. The hydraulic resistances are compared with each other, and an expression for the permeability as a function of average particle diameter is obtained. The atomized copper powder was sifted to reduce the size dispersion of the particles. The porosities and permeabilities of sintered media fabricated from powders with particle mean diameters ranging from 20 to 200 microns were measured by means of the image-analysis method and an experimental apparatus. Permeability data for a porous medium made of copper powder and saturated with distilled water were used for comparison with the permeability model. Literature permeability models, which assume powder particles of uniform diameter and take porosity as an input parameter, were compared with the present model and the experimental data; the agreement was quite good.
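
    For comparison, the classical Kozeny-Carman correlation (one of the literature models of this type, not the authors' own expression) gives permeability directly from particle diameter and porosity:

    ```python
    def kozeny_carman(d_p, porosity):
        """Kozeny-Carman permeability (m^2) for a packing of spheres of diameter d_p (m):
        k = d_p^2 * eps^3 / (180 * (1 - eps)^2)."""
        return d_p ** 2 * porosity ** 3 / (180.0 * (1.0 - porosity) ** 2)

    # e.g., 100-micron copper powder at 40% porosity (illustrative values):
    print(kozeny_carman(100e-6, 0.40))   # ~1e-11 m^2
    ```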

  8. Model building techniques for analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walther, Howard P.; McDaniel, Karen Lynn; Keener, Donald

    2009-09-01

    The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others that contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment whereby analysis is used to guide design decisions. Computer-aided design (CAD) models using PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that is filled with all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM is currently a time-consuming effort; the turnaround time for analysis results needs to be decreased for analysis to have an impact on the overall product development. This effort can be reduced immensely through simple Pro/ENGINEER modeling techniques that come down to the method by which features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of creating the ASM from the DSM.

  9. Changes in catchment hydrology in relation to vegetation recovery: a comparative modelling experiment

    NASA Astrophysics Data System (ADS)

    Lana-Renault, Noemí; Karssenberg, Derek; Latron, Jérôme; Serrano, Mª Pilar; Regüés, David; Bierkens, Marc F. P.

    2010-05-01

    Mediterranean mountains have been largely affected by land abandonment and subsequent vegetation recovery, with a general expansion of shrubs and forests. Such a large-scale land-cover change has modified the hydrological behavior of these areas, with significant impact on runoff production. Forecasting the trend of water resources under future re-vegetation scenarios is of paramount importance in Mediterranean basins, where water management relies on runoff generated in these areas. To this end, a modelling experiment was designed based on information collected in two neighbouring research catchments with different land-use histories in the central Spanish Pyrenees. One (2.84 km2) is an abandoned agricultural catchment subjected to plant colonization and at present mainly covered by shrubs. The other (0.92 km2) is a catchment covered by dense natural forest, representative of undisturbed environments. Here we present the results of the analysis of the hydrological differences between the two catchments, and a description of the approach and results of the modelling experiment. In a statistical analysis of the field data, significant differences were observed in the streamflow response of the two catchments. The forested catchment recorded fewer floods per year compared to the old agricultural catchment, and its hydrological response was characterised by a marked seasonality, with autumn and spring as the only high-flow periods. Stormflow was generally higher in the old agricultural catchment, especially for low- to intermediate-size events; only for large events was the stormflow in the forested catchment sometimes greater. Under drier conditions, the relative differences in stormflow between the two catchments tended to increase, whereas under wet conditions they tended to be similar. The forested catchment always reacted more slowly to rainfall, with lower peakflows (generally one order of magnitude lower) and longer recession limbs. The modelling

  10. Comparing the reported burn conditions for different severity burns in porcine models: a systematic review.

    PubMed

    Andrews, Christine J; Cuttle, Leila

    2017-12-01

    There are many porcine burn models that create burns using different materials (e.g. metal, water) and different burn conditions (e.g. temperature and duration of exposure). This review aims to determine whether a pooled analysis of these studies can provide insight into the burn materials and conditions required to create burns of a specific severity. A systematic review of 42 porcine burn studies describing the depth of burn injury with histological evaluation is presented. Inclusion criteria included thermal burns, burns created with a novel method or material, histological evaluation within 7 days post-burn and method for depth of injury assessment specified. Conditions causing deep dermal scald burns compared to contact burns of equivalent severity were disparate, with lower temperatures and shorter durations reported for scald burns (83°C for 14 seconds) compared to contact burns (111°C for 23 seconds). A valuable archive of the different mechanisms and materials used for porcine burn models is presented to aid design and optimisation of future models. Significantly, this review demonstrates the effect of the mechanism of injury on burn severity and that caution is recommended when burn conditions established by porcine contact burn models are used by regulators to guide scald burn prevention strategies. © 2017 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  11. A comparative study of the proposed models for the components of the national health information system.

    PubMed

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-04-01

    The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, using the National Health Information System, the quality of health data, information, and knowledge used to support decision making at all levels and in all areas of the health sector can be improved. Since full identification of the components of this system, for better planning and management of the factors influencing its performance, seems necessary, this study comparatively explores different perspectives on those components. This is a descriptive, comparative study. The study material includes printed and electronic documents describing the components of the national health information system in three parts: input, process, and output. Information was gathered through library resources and internet searches, and the data analysis was expressed using comparative tables and qualitative description. The findings showed three different perspectives on the components of a national health information system: the Lippeveld, Sauerborn and Bodart model (2000), the Health Metrics Network (HMN) model from the World Health Organization (2008), and Gattini's model (2009). In the input part (resources and structure), all three models require components of management and leadership, planning and program design, staffing, software and hardware facilities, and equipment. In the process section, all three models emphasize actions ensuring the quality of the health information system, and in the output section, all except the Lippeveld model consider information products and the use and distribution of information as components of the national health information system. The results showed that all three models have had a brief discussion about the

  12. Comparative genome analysis of Basidiomycete fungi

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riley, Robert; Salamov, Asaf; Henrissat, Bernard

    Fungi of the phylum Basidiomycota (basidiomycetes) make up some 37% of the described fungi, and are important in forestry, agriculture, medicine, and bioenergy. This diverse phylum includes symbionts, pathogens, and saprotrophs, including the majority of wood-decaying and ectomycorrhizal species. To better understand the genetic diversity of this phylum, we compared the genomes of 35 basidiomycetes, including 6 newly sequenced genomes. These genomes span extremes of genome size, gene number, and repeat content. Analysis of core genes reveals that some 48% of basidiomycete proteins are unique to the phylum, with nearly half of those (22%) found in only one organism. Correlations between lifestyle and certain gene families are evident. Phylogenetic patterns of plant biomass-degrading genes in Agaricomycotina suggest a continuum rather than a dichotomy between the white rot and brown rot modes of wood decay. Based on a phylogenetically informed principal component analysis of wood decay genes, we predict that Botryobasidium botryosum and Jaapia argillacea have properties similar to white rot species, although neither has typical ligninolytic class II fungal peroxidases (PODs). This prediction is supported by growth assays in which both fungi exhibit wood decay with white-rot-like characteristics. Based on this, we suggest that the white/brown rot dichotomy may be inadequate to describe the full range of wood-decaying fungi. Analysis of the rate of discovery of proteins with no or few homologs suggests the value of continued sequencing of basidiomycete fungi.

  13. Comparing methods for analysis of biomedical hyperspectral image data

    NASA Astrophysics Data System (ADS)

    Leavesley, Silas J.; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter F.; Annamdevula, Naga S.; Rich, Thomas C.

    2017-02-01

    Over the past 2 decades, hyperspectral imaging technologies have been adapted to address the need for molecule-specific identification in the biomedical imaging field. Applications have ranged from single-cell microscopy to whole-animal in vivo imaging and from basic research to clinical systems. Enabling this growth has been the availability of faster, more effective hyperspectral filtering technologies and more sensitive detectors. Hence, the potential for growth of biomedical hyperspectral imaging is high, and many hyperspectral imaging options are already commercially available. However, despite the growth in hyperspectral technologies for biomedical imaging, little work has been done to aid users of hyperspectral imaging instruments in selecting appropriate analysis algorithms. Here, we present an approach for comparing the effectiveness of spectral analysis algorithms by combining experimental image data with a theoretical "what if" scenario. This approach allows us to quantify several key outcomes that characterize a hyperspectral imaging study: linearity of sensitivity, positive detection cut-off slope, dynamic range, and false positive events. We present results of using this approach for comparing the effectiveness of several common spectral analysis algorithms for detecting weak fluorescent protein emission in the midst of strong tissue autofluorescence. Results indicate that this approach should be applicable to a very wide range of applications, allowing a quantitative assessment of the effectiveness of the combined biology, hardware, and computational analysis for detecting a specific molecular signature.
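
    One of the simplest algorithms in this family is linear unmixing of each pixel against known endmember spectra; a sketch with synthetic Gaussian-shaped spectra standing in for a fluorescent protein and tissue autofluorescence (all spectral shapes and amplitudes invented):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    wavelengths = np.linspace(500.0, 650.0, 32)          # nm, hypothetical bands

    def peak(center, width):
        """Gaussian-shaped stand-in for an emission spectrum."""
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    # endmember matrix: narrow GFP-like signal and broad autofluorescence
    E = np.column_stack([peak(510.0, 12.0), peak(560.0, 45.0)])

    # synthetic pixel: 5% GFP abundance on strong autofluorescence, plus noise
    rng = np.random.default_rng(0)
    pixel = E @ np.array([0.05, 1.0]) + rng.normal(0.0, 0.01, wavelengths.size)

    abundances, residual = nnls(E, pixel)                # non-negative least squares
    print(abundances)                                    # approx. [0.05, 1.0]
    ```

    Sweeping the simulated GFP abundance toward zero and recording the recovered value is one way to realize the "what if" metrics the abstract lists: linearity of sensitivity, detection cut-off, dynamic range, and false positives.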

  14. Posttest analysis of LOFT LOCE L2-3 using the ESA RELAP4 blowdown model. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perryman, J.L.; Samuels, T.K.; Cooper, C.H.

    A posttest analysis of the blowdown portion of Loss-of-Coolant Experiment (LOCE) L2-3, which was conducted in the Loss-of-Fluid Test (LOFT) facility, was performed using the experiment safety analysis (ESA) RELAP4/MOD5 computer model. Measured experimental parameters were compared with the calculations in order to assess the conservatisms in the ESA RELAP4/MOD5 model.

  15. IMG: the integrated microbial genomes database and comparative analysis system

    PubMed Central

    Markowitz, Victor M.; Chen, I-Min A.; Palaniappan, Krishna; Chu, Ken; Szeto, Ernest; Grechkin, Yuri; Ratner, Anna; Jacob, Biju; Huang, Jinghua; Williams, Peter; Huntemann, Marcel; Anderson, Iain; Mavromatis, Konstantinos; Ivanova, Natalia N.; Kyrpides, Nikos C.

    2012-01-01

    The Integrated Microbial Genomes (IMG) system serves as a community resource for comparative analysis of publicly available genomes in a comprehensive integrated context. IMG integrates publicly available draft and complete genomes from all three domains of life with a large number of plasmids and viruses. IMG provides tools and viewers for analyzing and reviewing the annotations of genes and genomes in a comparative context. IMG's data content and analytical capabilities have been continuously extended through regular updates since its first release in March 2005. IMG is available at http://img.jgi.doe.gov. Companion IMG systems provide support for expert review of genome annotations (IMG/ER: http://img.jgi.doe.gov/er), teaching courses and training in microbial genome analysis (IMG/EDU: http://img.jgi.doe.gov/edu) and analysis of genomes related to the Human Microbiome Project (IMG/HMP: http://www.hmpdacc-resources.org/img_hmp). PMID:22194640

  17. A Fractional Cartesian Composition Model for Semi-Spatial Comparative Visualization Design.

    PubMed

    Kolesar, Ivan; Bruckner, Stefan; Viola, Ivan; Hauser, Helwig

    2017-01-01

    The study of spatial data ensembles leads to substantial visualization challenges in a variety of applications. In this paper, we present a model for comparative visualization that supports the design of corresponding ensemble visualization solutions through partial automation. We focus on applications where the user is interested in preserving selected spatial characteristics of the data as much as possible, even when many ensemble members are to be studied jointly using comparative visualization. In our model, we separate the design challenge into a minimal set of user-specified parameters and an optimization component for the automatic configuration of the remaining design variables. We provide an illustrated formal description of our model and exemplify our approach with several application examples from different domains to demonstrate its generality within the class of comparative visualization problems for spatial data ensembles.

  18. A coupled stochastic rainfall-evapotranspiration model for hydrological impact analysis

    NASA Astrophysics Data System (ADS)

    Pham, Minh Tu; Vernieuwe, Hilde; De Baets, Bernard; Verhoest, Niko E. C.

    2018-02-01

    A hydrological impact analysis concerns the study of the consequences of certain scenarios on one or more variables or fluxes in the hydrological cycle. In such an exercise, discharge is often considered, as floods originating from extremely high discharges often cause damage. Investigating the impact of extreme discharges generally requires long time series of precipitation and evapotranspiration to force a rainfall-runoff model. However, such data may not be available, and one must resort to stochastically generated time series, even though the impact of using such data on the overall discharge, and especially on extreme discharge events, is not well studied. In this paper, stochastically generated rainfall and corresponding evapotranspiration time series, generated by means of vine copulas, are used to force a simple conceptual hydrological model. The results obtained are comparable to the discharge modelled using observed forcing data. Yet, uncertainties in the modelled discharge increase with the number of stochastically generated time series used. Notwithstanding this finding, it can be concluded that a coupled stochastic rainfall-evapotranspiration model has great potential for hydrological impact analysis.
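
    The flavour of copula-based weather generation can be conveyed with a Gaussian copula (the paper uses vine copulas, which generalize this construction) joining two invented marginal distributions for daily rainfall and evapotranspiration:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_days, rho = 1000, 0.6           # sample size and assumed dependence strength

    # Gaussian copula: correlated standard normals -> correlated uniforms
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n_days)
    u = stats.norm.cdf(z)

    # push the uniforms through hypothetical marginal distributions
    rain = stats.gamma.ppf(u[:, 0], a=0.8, scale=6.0)    # mm/day, skewed marginal
    evap = 6.0 * stats.beta.ppf(u[:, 1], a=2.0, b=5.0)   # mm/day, bounded marginal

    rho_s, _ = stats.spearmanr(rain, evap)
    print(rho_s)                                         # rank dependence preserved
    ```

    The copula carries only the dependence structure, so the marginals can be fitted to observed rainfall and evapotranspiration separately; the correlated series then drive the rainfall-runoff model.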

  19. Comparing multi-criteria decision analysis and integrated assessment to support long-term water supply planning

    PubMed Central

    Maurer, Max; Lienert, Judit

    2017-01-01

    We compare the use of multi-criteria decision analysis (MCDA), or more precisely models used in multi-attribute value theory (MAVT), to integrated assessment (IA) models for supporting long-term water supply planning in a small-town case study in Switzerland. They are used to evaluate thirteen system-scale water supply alternatives in four future scenarios with respect to forty-four objectives covering technical, social, environmental, and economic aspects. The alternatives encompass both conventional and unconventional solutions and differ in technical, spatial, and organizational characteristics. This paper focuses on the impact assessment and final evaluation steps of the structured MCDA decision support process. We analyze the performance of the alternatives for ten stakeholders. We demonstrate the implications of model assumptions by comparing two IA and three MAVT evaluation model layouts of different complexity. For this comparison, we focus on the validity (ranking stability), desirability (value), and distinguishability (value range) of the alternatives given the five model layouts. These layouts exclude or include stakeholder preferences and uncertainties. Even though all five led us to identify the same best alternatives, they did not produce identical rankings. We found that the MAVT-type models provide higher distinguishability and a more robust basis for discussion than the IA-type models. The needed complexity of the model, however, should be determined based on the intended use of the model within the decision support process. The best-performing alternatives had consistently strong performance for all stakeholders and future scenarios, whereas the current water supply system was outperformed in all evaluation layouts. The best-performing alternatives comprise proactive pipe rehabilitation, adapted firefighting provisions, and decentralized water storage and/or treatment. We present recommendations for possible ways of improving water supply
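
    At its core, a MAVT evaluation of this kind reduces to an additive value model V(a) = sum_i w_i * v_i(a) per stakeholder; a toy sketch with three alternatives, three objectives, and made-up weights and single-attribute values (labels invented):

    ```python
    import numpy as np

    # v[a, i]: single-attribute value of alternative a on objective i, rescaled to [0, 1]
    v = np.array([[0.9, 0.4, 0.7],     # e.g., pipe rehabilitation variant
                  [0.6, 0.8, 0.5],     # e.g., decentralized storage variant
                  [0.3, 0.9, 0.9]])    # e.g., status quo
    w = np.array([0.5, 0.3, 0.2])      # one stakeholder's weights, summing to 1

    value = v @ w                      # additive aggregation V(a) = sum_i w_i * v_i(a)
    print(value, "best:", int(np.argmax(value)))
    ```

    Repeating the aggregation over stakeholders (weight sets) and future scenarios (value matrices) yields exactly the ranking-stability and value-range comparisons the study reports.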

  20. Comparative and Predictive Multimedia Assessments Using Monte Carlo Uncertainty Analyses

    NASA Astrophysics Data System (ADS)

    Whelan, G.

    2002-05-01

    Multiple-pathway frameworks (sometimes referred to as multimedia models) provide a platform for combining medium-specific environmental models and databases so that they can be utilized in a more holistic assessment of contaminant fate and transport in the environment. These frameworks provide a relatively seamless transfer of information from one model to the next and from databases to models. Within these frameworks, multiple models are linked, resulting in models that consume information from upstream models and produce information to be consumed by downstream models. The Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) is an example, which allows users to link their models to other models and databases. FRAMES is an icon-driven, site-layout platform: an open-architecture, object-oriented system that interacts with environmental databases; helps the user construct a real-world-based Conceptual Site Model; allows the user to choose the most appropriate models to meet simulation requirements; solves the standard risk paradigm of release, transport and fate, and exposure/risk assessment for people and ecology; and presents graphical packages for analyzing results. FRAMES is specifically designed to allow users to link their own models into a system that contains models developed by others. This paper presents the use of FRAMES to evaluate potential human health exposures, using real site data and realistic assumptions, from sources through the vadose and saturated zones to exposure and risk assessment at three real-world sites, using the Multimedia Environmental Pollutant Assessment System (MEPAS), a multimedia model contained within FRAMES. These real-world examples use predictive and comparative approaches coupled with a Monte Carlo analysis. A predictive analysis is one in which models are calibrated to monitored site data prior to the assessment, and a comparative analysis is one in which models are not calibrated but
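
    The Monte Carlo machinery itself is straightforward: sample the uncertain inputs, push each realization through the linked model chain, and summarize the endpoint distribution. A toy stand-in below uses a single plug-flow attenuation formula instead of a full FRAMES/MEPAS chain, with all input distributions invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 10_000                                  # number of Monte Carlo realizations

    # uncertain inputs (hypothetical distributions)
    source = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=n)   # kg released
    velocity = rng.uniform(0.1, 0.5, size=n)                       # m/day pore velocity
    decay = rng.normal(2e-3, 5e-4, size=n).clip(min=1e-4)          # 1/day decay rate

    # toy plug-flow transport: mass surviving 500 m of subsurface travel
    travel_time = 500.0 / velocity
    endpoint = source * np.exp(-decay * travel_time)

    print(np.percentile(endpoint, [5, 50, 95]))   # uncertainty band for the endpoint
    ```

    In a real FRAMES application, each realization would instead propagate through the chained source, vadose-zone, saturated-zone, and exposure modules, but the sampling-and-percentile structure is the same.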