Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations
Shek, Daniel T. L.; Ma, Cecilia M. S.
2011-01-01
Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented. PMID:21218263
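The paper describes SPSS MIXED procedures; as a rough, non-SPSS illustration of the same kind of growth-curve linear mixed model (random intercept and slope over repeated waves), the following Python sketch uses statsmodels. The file and column names are hypothetical placeholders, not from the study.

```python
# Minimal sketch of a growth-curve linear mixed model for six-wave
# longitudinal data, analogous in spirit to the SPSS MIXED analyses above.
# Column names (subject, wave, outcome) are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("paths_waves.csv")   # long format: one row per subject per wave

# Random intercept and random slope for time (wave), grouped by subject,
# so repeated observations within a subject are not treated as independent.
model = smf.mixedlm("outcome ~ wave", data=df,
                    groups=df["subject"], re_formula="~wave")
result = model.fit(reml=True)
print(result.summary())               # fixed effects = average growth trajectory
```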
Ishikawa, Sohta A; Inagaki, Yuji; Hashimoto, Tetsuo
2012-01-01
In phylogenetic analyses of nucleotide sequences, 'homogeneous' substitution models, which assume the stationarity of base composition across a tree, are widely used, even though individual sequences may bear distinctive base frequencies. In the worst-case scenario, a homogeneous model-based analysis can yield an artifactual union of two distantly related sequences that achieved similar base frequencies in parallel. Such potential difficulty can be countered by two approaches, 'RY-coding' and 'non-homogeneous' models. The former approach converts four bases into purine and pyrimidine to normalize base frequencies across a tree, while the heterogeneity in base frequency is explicitly incorporated in the latter approach. The two approaches have been applied to real-world sequence data; however, their basic properties have not been fully examined by pioneering simulation studies. Here, we assessed the performances of the maximum-likelihood analyses incorporating RY-coding and a non-homogeneous model (RY-coding and non-homogeneous analyses) on simulated data with parallel convergence to similar base composition. Both RY-coding and non-homogeneous analyses showed superior performances compared with homogeneous model-based analyses. Curiously, the performance of RY-coding analysis appeared to be significantly affected by a setting of the substitution process for sequence simulation relative to that of non-homogeneous analysis. The performance of a non-homogeneous analysis was also validated by analyzing a real-world sequence data set with significant base heterogeneity.
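RY-coding, as described above, simply collapses the four bases into purines (R) and pyrimidines (Y) so that base composition is normalized across the tree. A minimal sketch of that recoding step (not the authors' analysis pipeline):

```python
# Minimal sketch of RY-coding: purines (A, G) -> R, pyrimidines (C, T/U) -> Y.
# Ambiguity codes and gaps are passed through unchanged here.
RY_MAP = {"A": "R", "G": "R", "C": "Y", "T": "Y", "U": "Y"}

def ry_code(sequence: str) -> str:
    """Recode a nucleotide sequence into purine/pyrimidine symbols."""
    return "".join(RY_MAP.get(base, base) for base in sequence.upper())

print(ry_code("ATGCGTNA-"))   # -> RYRYRYNR-
```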
Report of the LSPI/NASA Workshop on Lunar Base Methodology Development
NASA Technical Reports Server (NTRS)
Nozette, Stewart; Roberts, Barney
1985-01-01
Groundwork was laid for computer models which will assist in the design of a manned lunar base. The models, herein described, will provide the following functions for the successful conclusion of that task: strategic planning; sensitivity analyses; impact analyses; and documentation. Topics addressed include: upper level model description; interrelationship matrix; user community; model features; model descriptions; system implementation; model management; and plans for future action.
Multi-country health surveys: are the analyses misleading?
Masood, Mohd; Reidpath, Daniel D
2014-05-01
The aim of this paper was to review the types of approaches currently utilized in the analysis of multi-country survey data, specifically focusing on design and modeling issues in analyses of significant multi-country surveys published in 2010. A systematic search strategy was used to identify the 10 multi-country surveys and the articles published from them in 2010. The surveys were selected to reflect diverse topics and foci, and to provide an insight into analytic approaches across research themes. The search identified 159 articles appropriate for full text review and data extraction. The analyses adopted in the multi-country surveys can be broadly classified as: univariate/bivariate analyses, and multivariate/multivariable analyses. Multivariate/multivariable analyses may be further divided into design- and model-based analyses. Of the 159 articles reviewed, 129 used model-based analyses and 30 used design-based analyses. Similar patterns could be seen in all the individual surveys. While there is general agreement among survey statisticians that complex surveys are most appropriately analyzed using design-based analyses, most researchers continued to use the more common model-based approaches. Recent developments in design-based multi-level analysis may be one approach to include all the survey design characteristics. This is a relatively new area, however, and further statistical as well as applied analytic research is required. An important limitation of this study relates to the selection of the surveys used and the choice of year for the analysis, i.e., year 2010 only. There is, however, no strong reason to believe that analytic strategies have changed radically in the past few years, and 2010 provides a credible snapshot of current practice.
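The design-based versus model-based distinction drawn above can be illustrated with a toy contrast between an unweighted estimate and a survey-weighted estimate; the data and weights below are made up for illustration only.

```python
# Toy contrast of a model-based (unweighted) estimate with a design-based
# (survey-weighted) estimate of a prevalence; values are illustrative only.
import numpy as np

y = np.array([1, 0, 0, 1, 1, 0, 1, 0])                   # outcome for 8 respondents
w = np.array([1.0, 2.5, 2.5, 1.0, 0.8, 3.0, 0.8, 3.0])   # sampling weights

model_based = y.mean()                           # ignores the survey design
design_based = np.sum(w * y) / np.sum(w)         # weights restore population structure

print(f"unweighted: {model_based:.3f}, weighted: {design_based:.3f}")
```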
Slade, Eric P.; Becker, Kimberly D.
2014-01-01
This paper discusses the steps and decisions involved in proximal-distal economic modeling, in which social, behavioral, and academic outcomes data for children may be used to inform projections of the economic consequences of interventions. Economic projections based on proximal-distal modeling techniques may be used in cost-benefit analyses when information is unavailable for certain long term outcomes data in adulthood or to build entire cost-benefit analyses. Although examples of proximal-distal economic analyses of preventive interventions exist in policy reports prepared for governmental agencies, such analyses have rarely been completed in conjunction with research trials. The modeling decisions on which these prediction models are based are often opaque to policymakers and other end-users. This paper aims to illuminate some of the key steps and considerations involved in constructing proximal-distal prediction models and to provide examples and suggestions that may help guide future proximal-distal analyses. PMID:24337979
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Chwalowski, Pawel; Wieseman, Carol D.; Eller, David; Ringertz, Ulf
2017-01-01
A status report is provided on the collaboration between the Royal Institute of Technology (KTH) in Sweden and the NASA Langley Research Center regarding the aeroelastic analyses of a full-span fighter configuration wind-tunnel model. This wind-tunnel model was tested in the Transonic Dynamics Tunnel (TDT) in the summer of 2016. Large amounts of data were acquired including steady/unsteady pressures, accelerations, strains, and measured dynamic deformations. The aeroelastic analyses presented include linear aeroelastic analyses, CFD steady analyses, and analyses using CFD-based reduced-order models (ROMs).
Movement behavior explains genetic differentiation in American black bears
Samuel A Cushman; Jesse S. Lewis
2010-01-01
Individual-based landscape genetic analyses provide empirically based models of gene flow. It would be valuable to verify the predictions of these models using independent data of a different type. Analyses using different data sources that produce consistent results provide strong support for the generality of the findings. Mating and dispersal movements are the...
Comparing Distance vs. Campus-Based Delivery of Research Methods Courses
ERIC Educational Resources Information Center
Girod, Mark; Wojcikiewicz, Steve
2009-01-01
A causal-comparative pre-test, post-test design was used to investigate differences in learning in a research methods course for face-to-face and web-based delivery models. Analyses of participant achievement (N = 205) revealed almost no differences but post-hoc analyses revealed important differences in pedagogy between delivery models despite…
SVDS plume impingement modeling development. Sensitivity analysis supporting level B requirements
NASA Technical Reports Server (NTRS)
Chiu, P. B.; Pearson, D. J.; Muhm, P. M.; Schoonmaker, P. B.; Radar, R. J.
1977-01-01
A series of sensitivity analyses (trade studies) performed to select features and capabilities to be implemented in the plume impingement model is described. Sensitivity analyses were performed in study areas pertaining to geometry, flowfield, impingement, and dynamical effects. Recommendations based on these analyses are summarized.
A python framework for environmental model uncertainty analysis
White, Jeremy; Fienen, Michael N.; Doherty, John E.
2016-01-01
We have developed pyEMU, a python framework for Environmental Modeling Uncertainty analyses: an open-source tool that is non-intrusive, easy-to-use, computationally efficient, and scalable to highly-parameterized inverse problems. The framework implements several types of linear (first-order, second-moment (FOSM)) and non-linear uncertainty analyses. The FOSM-based analyses can also be completed prior to parameter estimation to help inform important modeling decisions, such as parameterization and objective function formulation. Complete workflows for several types of FOSM-based and non-linear analyses are documented in example notebooks implemented using Jupyter that are available in the online pyEMU repository. Example workflows include basic parameter and forecast analyses, data worth analyses, and error-variance analyses, as well as usage of parameter ensemble generation and management capabilities. These workflows document the necessary steps and provide insights into the results, with the goal of educating users not only in how to apply pyEMU, but also in the underlying theory of applied uncertainty quantification.
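The FOSM calculation that pyEMU automates can be sketched generically in a few lines of numpy (this is not the pyEMU API): condition a prior parameter covariance on observations through the model Jacobian, then propagate to a forecast. The matrices below are toy values.

```python
# Generic first-order, second-moment (FOSM) sketch of the kind of linear
# uncertainty analysis pyEMU automates (this is not the pyEMU API).
import numpy as np

J = np.array([[1.0, 0.5], [0.2, 1.5], [0.9, 0.1]])   # Jacobian d(obs)/d(par), (nobs, npar)
C_prior = np.diag([1.0, 4.0])                        # prior parameter covariance
C_noise = np.diag([0.1, 0.1, 0.1])                   # observation noise covariance

# Schur-complement conditioning: how much the observations reduce parameter uncertainty.
S = J @ C_prior @ J.T + C_noise
C_post = C_prior - C_prior @ J.T @ np.linalg.solve(S, J @ C_prior)

# Propagate to a forecast with sensitivity vector dy/dpar (can be done before calibration).
dy_dpar = np.array([0.3, 1.2])
prior_var = dy_dpar @ C_prior @ dy_dpar
post_var = dy_dpar @ C_post @ dy_dpar
print(f"forecast variance: prior={prior_var:.3f}, posterior={post_var:.3f}")
```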
Nasejje, Justine B; Mwambi, Henry; Dheda, Keertan; Lesosky, Maia
2017-07-28
Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model in analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for selecting the best covariate to split on from that of searching for the best split point for the selected covariate. In this study, we compare the random survival forest model to the conditional inference forest (CIF) model using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under five years of age in Uganda and it consists of categorical covariates with most of them having more than two levels (many split-points). The second dataset is based on the survival of patients with extremely drug resistant tuberculosis (XDR TB), which consists of mainly categorical covariates with two levels (few split-points). The study findings indicate that the conditional inference forest model is superior to random survival forest models in analysing time-to-event data that consist of covariates with many split-points, based on the values of the bootstrap cross-validated estimates for integrated Brier scores. However, conditional inference forests perform comparably to random survival forest models in analysing time-to-event data consisting of covariates with fewer split-points. Although survival forests are promising methods for analysing time-to-event data, it is important to identify the best forest model for analysis based on the nature of the covariates of the dataset in question.
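One of the two models compared above, the random survival forest, can be fitted in Python with the scikit-survival package (assumed installed); the conditional inference forest used in the paper is an R method (party/partykit) and is not shown. The data file, column names, and hyperparameters below are hypothetical.

```python
# Minimal sketch of fitting a random survival forest with scikit-survival.
# Data columns (time, event, covariates) are hypothetical; in practice the
# paper's comparison used bootstrap cross-validated integrated Brier scores.
import pandas as pd
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

df = pd.read_csv("under_five_survival.csv")               # hypothetical file
X = pd.get_dummies(df.drop(columns=["time", "event"]))    # categorical covariates -> dummies
y = Surv.from_arrays(event=df["event"].astype(bool), time=df["time"])

rsf = RandomSurvivalForest(n_estimators=500, min_samples_leaf=15, random_state=0)
rsf.fit(X, y)
print("concordance index:", rsf.score(X, y))   # in-sample; use bootstrap CV in practice
```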
Challenges and Opportunities in Analysing Students Modelling
ERIC Educational Resources Information Center
Blanco-Anaya, Paloma; Justi, Rosária; Díaz de Bustamante, Joaquín
2017-01-01
Modelling-based teaching activities have been designed and analysed from distinct theoretical perspectives. In this paper, we use one of them--the model of modelling diagram (MMD)--as an analytical tool in a regular classroom context. This paper examines the challenges that arise when the MMD is used as an analytical tool to characterise the…
Model Documentation of Base Case Data | Regional Energy Deployment System Model | Energy Analysis | NREL
The base case was developed simply as a point of departure for other analyses. The Base Case derives many of its inputs from the Energy Information Administration's (EIA's) Annual Energy...
CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets
Nowicka, Malgorzata; Krieg, Carsten; Weber, Lukas M.; Hartmann, Felix J.; Guglietta, Silvia; Becher, Burkhard; Levesque, Mitchell P.; Robinson, Mark D.
2017-01-01
High dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data is the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell count or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals). PMID:28663787
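The differential-abundance idea described above (modelling a cluster's cell count per sample with overdispersion, using the sample's total cell count as an exposure) can be illustrated roughly in Python with a negative binomial GLM; the actual workflow is R/Bioconductor-based, and the toy numbers below are not from the paper.

```python
# Rough Python analogue of the differential-abundance model described above:
# overdispersed counts per cluster and sample, total cells as exposure,
# condition as the covariate. (The real pipeline is R/Bioconductor-based.)
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "cluster_count": [120, 95, 300, 280, 110, 105],
    "total_cells":   [5000, 4800, 5200, 5100, 4900, 5050],
    "condition":     [0, 0, 1, 1, 0, 1],          # e.g. reference vs stimulated
})

X = sm.add_constant(df["condition"])
model = sm.GLM(df["cluster_count"], X,
               family=sm.families.NegativeBinomial(),
               exposure=df["total_cells"])
result = model.fit()
print(result.summary())   # condition coefficient ~ log fold-change in abundance
```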
Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing
2016-01-01
Increasing evidence indicates that current dynamic global vegetation models (DGVMs) have suffered from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important direction towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with a LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs. PMID:27052108
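The Gaussian mixture model step mentioned above (training on LMA, Nmass and LAI trait combinations) can be sketched minimally with scikit-learn; the trait values below are invented for illustration, and the paper's full trait-climate workflow is considerably richer.

```python
# Minimal sketch of the trait-based classification idea: fit a Gaussian
# mixture model to LMA / Nmass / LAI trait combinations and assign each
# site to a mixture component (vegetation type). Values are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

# columns: LMA (g m-2), Nmass (%), LAI (-); rows: sites (made-up numbers)
traits = np.array([
    [45.0, 2.8, 4.5], [50.0, 2.6, 4.2], [160.0, 1.1, 1.2],
    [150.0, 1.3, 1.5], [90.0, 1.9, 2.8], [95.0, 2.0, 3.0],
])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(traits)   # component index per site
print(labels)                      # map components to vegetation types afterwards
```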
Flutter suppression via piezoelectric actuation
NASA Technical Reports Server (NTRS)
Heeg, Jennifer
1991-01-01
Experimental flutter results obtained from wind tunnel tests of a two degree of freedom wind tunnel model are presented for the open and closed loop systems. The wind tunnel model is a two degree of freedom system which is actuated by piezoelectric plates configured as bimorphs. The model design was based on finite element structural analyses and flutter analyses. A control law was designed based on a discrete system model; gain feedback of strain measurements was utilized in the control task. The results show a 21 pct. increase in the flutter speed.
Panchal, Mitesh B; Upadhyay, Sanjay H
2014-09-01
The unprecedented dynamic characteristics of nanoelectromechanical systems make them suitable for nanoscale mass sensing applications. Owing to superior biocompatibility, boron nitride nanotubes (BNNTs) are being increasingly used for such applications. In this study, the feasibility of a single walled BNNT (SWBNNT)-based biosensor has been explored. A molecular structural mechanics-based finite element (FE) modelling approach has been used to analyse the dynamic behaviour of SWBNNT-based biosensors. The application of SWBNNT-based mass sensing at the zeptogram level has been reported. Also, the effect of nanotube size in terms of length, as well as of different chiral atomic structures of the SWBNNT, has been analysed in a sensitivity analysis. The vibrational behaviour of the SWBNNT has been analysed for higher-order modes of vibration to identify the intermediate landing position of a biological object of zeptogram scale. The present molecular structural mechanics-based FE modelling approach is found to be very effective in incorporating different chiralities of the atomic structures. Also, different boundary conditions can be effectively simulated using the present approach to analyse the dynamic behaviour of the SWBNNT-based mass sensor. The presented study has explored the potential of the SWBNNT as a nanobiosensor capable of zeptogram-level mass sensing.
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created by using meteorological data based on hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC) as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
Toward a Model-Based Approach to Flight System Fault Protection
NASA Technical Reports Server (NTRS)
Day, John; Murray, Alex; Meakin, Peter
2012-01-01
Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system systems engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated for these analyses effectively creates separate models that are only loosely related to the system being designed. Development of approaches that enable modeling of FP concerns in the same model as the system hardware and software design enables the establishment of formal relationships that have great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in FSW engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.
Argumentation in Science Education: A Model-based Framework
NASA Astrophysics Data System (ADS)
Böttcher, Florian; Meisert, Anke
2011-02-01
The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons for the appropriateness of a theoretical model which explains a certain phenomenon. Argumentation is considered to be the process of the critical evaluation of such a model if necessary in relation to alternative models. Secondly, some methodological details are exemplified for the use of a model-based analysis in the concrete classroom context. Third, the application of the approach in comparison with other analytical models will be presented to demonstrate the explicatory power and depth of the model-based perspective. Primarily, the framework of Toulmin to structurally analyse arguments is contrasted with the approach presented here. It will be demonstrated how common methodological and theoretical problems in the context of Toulmin's framework can be overcome through a model-based perspective. Additionally, a second more complex argumentative sequence will also be analysed according to the invented analytical scheme to give a broader impression of its potential in practical use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maslenikov, O.R.; Mraz, M.J.; Johnson, J.J.
1986-03-01
This report documents the seismic analyses performed by SMA for the MFTF-B Axicell vacuum vessel. In the course of this study we performed response spectrum analyses, CLASSI fixed-base analyses, and SSI analyses that included interaction effects between the vessel and vault. The response spectrum analysis served to benchmark certain modeling differences between the LLNL and SMA versions of the vessel model. The fixed-base analysis benchmarked the differences between analysis techniques. The SSI analyses provided our best estimate of vessel response to the postulated seismic excitation for the MFTF-B facility, and included consideration of uncertainties in soil properties by calculating response for a range of soil shear moduli. Our results are presented in this report as tables of comparisons of specific member forces from our analyses and the analyses performed by LLNL. Also presented are tables of maximum accelerations and relative displacements and plots of response spectra at various selected locations.
Review of Statistical Methods for Analysing Healthcare Resources and Costs
Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G
2011-01-01
We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344
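Of the method categories listed above, the two (or multi)-part model (category VI) is among the most widely used for cost data with excess zeros. A minimal sketch under made-up data: a logistic model for any cost versus none, a log-scale regression for positive costs, and Duan's smearing retransformation.

```python
# Minimal sketch of a two-part model for cost data (category VI above):
# part 1 models the probability of any cost, part 2 models positive costs
# on the log scale with Duan's smearing retransformation. Data are made up.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))                    # one covariate, e.g. treatment indicator score
any_cost = rng.random(200) < 0.7                 # roughly 30% structural zeros
cost = np.where(any_cost, np.exp(1.0 + 0.5 * x[:, 0] + rng.normal(0, 0.8, 200)), 0.0)

part1 = LogisticRegression().fit(x, (cost > 0).astype(int))
pos = cost > 0
part2 = LinearRegression().fit(x[pos], np.log(cost[pos]))

resid = np.log(cost[pos]) - part2.predict(x[pos])
smear = np.mean(np.exp(resid))                   # Duan's smearing factor

expected_cost = part1.predict_proba(x)[:, 1] * np.exp(part2.predict(x)) * smear
print("mean predicted cost:", expected_cost.mean().round(2))
```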
Computational Aeroelastic Analyses of a Low-Boom Supersonic Configuration
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Sanetrik, Mark D.; Chwalowski, Pawel; Connolly, Joseph
2015-01-01
An overview of NASA's Commercial Supersonic Technology (CST) Aeroservoelasticity (ASE) element is provided with a focus on recent computational aeroelastic analyses of a low-boom supersonic configuration developed by Lockheed-Martin and referred to as the N+2 configuration. The overview includes details of the computational models developed to date including a linear finite element model (FEM), linear unsteady aerodynamic models, unstructured CFD grids, and CFD-based aeroelastic analyses. In addition, a summary of the work involving the development of aeroelastic reduced-order models (ROMs) and the development of an aero-propulso-servo-elastic (APSE) model is provided.
Modeling the Drift Towards Sex Role Deviance.
ERIC Educational Resources Information Center
James, Jennifer; Vitaliano, Peter Paul
The interrelationships of deviant life experiences and current status, i.e., prostitution versus non-prostitution, were investigated by the application of multivariate analyses. Variables were studied involving early home life, pregnancy history, sexual history, and criminal involvement. Based on the analyses, three models were developed that…
Geospace Environment Modeling 2008-2009 Challenge: Ground Magnetic Field Perturbations
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Kuznetsova, M.; Ridley, A.; Raeder, J.; Vapirev, A.; Weimer, D.; Weigel, R. S.; Wiltberger, M.; Millward, G.; Rastatter, L.;
2011-01-01
Acquiring quantitative metrics-based knowledge about the performance of various space physics modeling approaches is central for the space weather community. Quantification of the performance helps the users of the modeling products to better understand the capabilities of the models and to choose the approach that best suits their specific needs. Further, metrics-based analyses are important for addressing the differences between various modeling approaches and for measuring and guiding the progress in the field. In this paper, the metrics-based results of the ground magnetic field perturbation part of the Geospace Environment Modeling 2008-2009 Challenge are reported. Predictions made by 14 different models, including an ensemble model, are compared to geomagnetic observatory recordings from 12 different northern hemispheric locations. Five different metrics are used to quantify the model performances for four storm events. It is shown that the ranking of the models is strongly dependent on the type of metric used to evaluate the model performance. None of the models rank near or at the top systematically for all used metrics. Consequently, one cannot pick the absolute winner: the choice of the best model depends on the characteristics of the signal one is interested in. Model performances vary also from event to event. This is particularly clear for root-mean-square difference and utility metric-based analyses. Further, analyses indicate that for some of the models, increasing the global magnetohydrodynamic model spatial resolution and the inclusion of the ring current dynamics improve the models' capability to generate more realistic ground magnetic field fluctuations.
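As a small illustration of the kind of point-wise metrics used to rank the models above, the sketch below computes a root-mean-square difference and a linear correlation between modelled and observed perturbation series; the series are synthetic, not Challenge data.

```python
# Illustrative point-wise metrics for model-observation comparison of ground
# magnetic perturbations: RMS difference and correlation (synthetic series).
import numpy as np

t = np.linspace(0, 24, 241)   # hours
observed = 100 * np.sin(0.4 * t) + 20 * np.random.default_rng(1).normal(size=t.size)
modelled = 90 * np.sin(0.4 * t + 0.2)

rms_diff = np.sqrt(np.mean((modelled - observed) ** 2))   # nT
corr = np.corrcoef(modelled, observed)[0, 1]
print(f"RMS difference = {rms_diff:.1f} nT, correlation = {corr:.2f}")
```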
Rational analyses of information foraging on the web.
Pirolli, Peter
2005-05-06
This article describes rational analyses and cognitive models of Web users developed within information foraging theory. This is done by following the rational analysis methodology of (a) characterizing the problems posed by the environment, (b) developing rational analyses of behavioral solutions to those problems, and (c) developing cognitive models that approach the realization of those solutions. Navigation choice is modeled as a random utility model that uses spreading activation mechanisms that link proximal cues (information scent) that occur in Web browsers to internal user goals. Web-site leaving is modeled as an ongoing assessment by the Web user of the expected benefits of continuing at a Web site as opposed to going elsewhere. These cost-benefit assessments are also based on spreading activation models of information scent. Evaluations include a computational model of Web user behavior called Scent-Based Navigation and Information Foraging in the ACT Architecture, and the Law of Surfing, which characterizes the empirical distribution of the length of paths of visitors at a Web site. 2005 Lawrence Erlbaum Associates, Inc.
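The navigation-choice component described above is a random utility model driven by information scent; a toy sketch of that idea turns link activations into choice probabilities with a softmax and treats site-leaving as a comparison against an assumed value of going elsewhere. All numbers are illustrative, not from the SNIF-ACT model itself.

```python
# Toy sketch of scent-driven navigation choice and site-leaving, in the spirit
# of the random utility / spreading activation account described above.
import numpy as np

def choice_probabilities(scent, temperature=1.0):
    """Softmax over scent activations -> probability of choosing each link."""
    z = np.asarray(scent, dtype=float) / temperature
    z = z - z.max()                   # numerical stability
    p = np.exp(z)
    return p / p.sum()

link_scent = [2.1, 0.4, 1.3, 0.2]     # activation of the goal by each link's proximal cues
p = choice_probabilities(link_scent)
print("choice probabilities:", np.round(p, 3))

go_elsewhere_value = 1.8              # assumed expected benefit of leaving the site
print("leave site:", max(link_scent) < go_elsewhere_value)
```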
Regression and multivariate models for predicting particulate matter concentration level.
Nazif, Amina; Mohammed, Nurul Izma; Malakahmad, Amirhossein; Abualqumboz, Motasem S
2018-01-01
The devastating health effects of particulate matter (PM10) exposure in susceptible populations have made it necessary to evaluate PM10 pollution. Meteorological parameters and seasonal variation increase PM10 concentration levels, especially in areas that have multiple anthropogenic activities. Hence, stepwise regression (SR), multiple linear regression (MLR) and principal component regression (PCR) analyses were used to analyse daily average PM10 concentration levels. The analyses were carried out using daily average PM10 concentration, temperature, humidity, wind speed and wind direction data from 2006 to 2010. The data were from an industrial air quality monitoring station in Malaysia. The SR analysis established that meteorological parameters had less influence on PM10 concentration levels, with coefficient of determination (R2) values from 23 to 29% based on seasoned and unseasoned analysis. Meanwhile, the results of the prediction analysis showed that PCR models had better R2 values than MLR methods. The results for the analyses based on both seasoned and unseasoned data established that MLR models had R2 values from 0.50 to 0.60, while PCR models had R2 values from 0.66 to 0.89. In addition, the validation analysis using 2016 data also recognised that the PCR model outperformed the MLR model, with the PCR model for the seasoned analysis having the best result. These analyses will aid in achieving sustainable air quality management strategies.
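The principal component regression that performed best above can be sketched minimally with scikit-learn: standardize the meteorological predictors, project onto a few principal components, and regress PM10 on them. The file name, column names, and number of retained components are hypothetical.

```python
# Minimal sketch of principal component regression (PCR) for daily PM10,
# the approach that outperformed plain MLR above. Column names and the
# number of retained components are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("pm10_daily.csv")   # hypothetical monitoring-station export
X = df[["temperature", "humidity", "wind_speed", "wind_direction"]]
y = df["pm10"]

pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
pcr.fit(X, y)
print("R2 =", round(pcr.score(X, y), 2))
```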
Micromechanics based simulation of ductile fracture in structural steels
NASA Astrophysics Data System (ADS)
Yellavajjala, Ravi Kiran
The broader aim of this research is to develop a fundamental understanding of the ductile fracture process in structural steels, propose robust computational models to quantify the associated damage, and provide numerical tools to simplify the implementation of these computational models into a general finite element framework. Mechanical testing on different geometries of test specimens made of ASTM A992 steels is conducted to experimentally characterize ductile fracture at different stress states under monotonic and ultra-low cycle fatigue (ULCF) loading. Scanning electron microscopy studies of the fractured surfaces are conducted to decipher the underlying microscopic damage mechanisms that cause fracture in ASTM A992 steels. Detailed micromechanical analyses for monotonic and cyclic loading are conducted to understand the influence of stress triaxiality and Lode parameter on the void growth phase of ductile fracture. Based on the monotonic analyses, an uncoupled micromechanical void growth model is proposed to predict ductile fracture. This model is then incorporated into the finite element program as a weakly coupled model to simulate the loss of load carrying capacity in the post microvoid coalescence regime for high triaxialities. Based on the cyclic analyses, an uncoupled micromechanics based cyclic void growth model is developed to predict the ULCF life of ASTM A992 steels subjected to high stress triaxialities. Furthermore, a computational fracture locus for ASTM A992 steels is developed and incorporated into the finite element program as an uncoupled ductile fracture model. This model can be used to predict ductile fracture initiation under monotonic loading in a wide range of triaxiality and Lode parameters. Finally, a coupled microvoid elongation and dilation based continuum damage model is proposed, implemented, calibrated and validated. This model is capable of simulating the local softening caused by the various phases of the ductile fracture process under monotonic loading for a wide range of stress states. Novel differentiation procedures based on complex analyses, along with existing finite difference methods and automatic differentiation, are extended using perturbation techniques to evaluate tensor derivatives. These tensor differentiation techniques are then used to automate nonlinear constitutive models within an implicit finite element framework. Finally, the efficiency of these automation procedures is demonstrated using benchmark problems.
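The "differentiation procedures based on complex analyses" mentioned above are, under a reasonable reading, in the family of complex-step differentiation; the scalar sketch below illustrates that idea only (the thesis extends such perturbation techniques to tensor derivatives of constitutive models, which is not shown here).

```python
# Illustration of the complex-variable differentiation idea (complex-step
# differentiation), shown for a scalar function only; this is an assumed
# reading of the procedure named above, not the thesis's tensor formulation.
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """df/dx ~= Im(f(x + i*h)) / h, free of subtractive cancellation."""
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)                  # toy smooth response function
exact = np.exp(1.0) * (np.sin(1.0) + np.cos(1.0))    # analytical derivative at x = 1
print(complex_step_derivative(f, 1.0), exact)
```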
Economics of residue harvest: Regional partnership evaluation
USDA-ARS?s Scientific Manuscript database
Economic analyses on the viability of corn (Zea mays, L.) stover harvest for bioenergy production have largely been based on simulation modeling. While some studies have utilized field research data, most field-based analyses have included a limited number of sites and a narrow geographic distributi...
Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile
NASA Astrophysics Data System (ADS)
Hoľko, Michal; Stacho, Jakub
2014-12-01
The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves as well as a comparison of the load distribution over a pile's length. The numerical analyses were executed using two types of software, i.e., Ansys and Plaxis, which are based on FEM calculations. The two programs differ in the way they create numerical models, model the interface between the pile and soil, and use constitutive material models. The analyses have been prepared in the form of a parametric study, where the method of modelling the interface and the material models of the soil are compared and analysed. Our analyses show that both types of software permit the modelling of pile foundations. The Plaxis software uses advanced material models as well as the modelling of the impact of groundwater or overconsolidation. The load-settlement curve calculated using Plaxis matches the results of a static load test with more than 95 % accuracy. In comparison, the load-settlement curve calculated using Ansys provides only an approximate estimate, but the software allows large structural systems to be modelled together with the foundation system.
Satoshi Hirabayashi; Chuck Kroll; David Nowak
2011-01-01
The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...
The concept and use of elasticity in population viability models [Exercise 13
Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke
2003-01-01
As you have seen in exercise 12, plants, such as the western prairie fringed orchid, typically have distinct life stages and complex life cycles that require the matrix analyses associated with a stage-based population model. Some statistics that can be generated from such matrix analyses can be very informative in determining which variables in the model have the...
NASA Astrophysics Data System (ADS)
Sulistianingsih, E.; Kiftiah, M.; Rosadi, D.; Wahyuni, H.
2017-04-01
Gross Domestic Product (GDP) is an indicator of economic growth in a region. GDP is panel data, which consists of cross-section and time series data. Meanwhile, panel regression is a tool which can be utilised to analyse panel data. There are three models in panel regression, namely the Common Effect Model (CEM), the Fixed Effect Model (FEM) and the Random Effect Model (REM). The model is chosen based on the results of the Chow Test, the Hausman Test and the Lagrange Multiplier Test. This research analyses the influence of palm oil production, export, and government consumption on the GDP of five districts in West Kalimantan, namely Sanggau, Sintang, Sambas, Ketapang and Bengkayang, by panel regression. Based on the results of the analyses, it is concluded that REM, whose adjusted coefficient of determination is 0.823, is the best model in this case. Also, according to the results, only export and government consumption influence the GDP of the districts.
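The FEM-versus-REM choice described above can be sketched in Python with the linearmodels package (assumed installed) and a hand-computed Hausman statistic; the data file and column names below are hypothetical, and the paper's full test sequence (Chow, Hausman, Lagrange Multiplier) is not reproduced.

```python
# Hedged sketch of fixed- vs random-effects panel regression plus a
# Hausman-type test; a significant statistic favours FEM, otherwise REM.
# File and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

df = pd.read_csv("west_kalimantan_gdp.csv").set_index(["district", "year"])
y = df["gdp"]
X = df[["production", "export", "gov_consumption"]]

fem = PanelOLS(y, X, entity_effects=True).fit()
rem = RandomEffects(y, X).fit()

b = (fem.params - rem.params).to_numpy()
v = (fem.cov - rem.cov).to_numpy()
H = float(b @ np.linalg.inv(v) @ b)
p_value = 1.0 - stats.chi2.cdf(H, df=b.size)
print(f"Hausman H = {H:.2f}, p = {p_value:.3f}")
```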
Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas
2015-12-01
The study evaluated whether the renal function decline rate per year with age in adults varies based on two primary statistical analyses: cross-section (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16628 records (3946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected for up to 2364 days (mean: 793 days). A simple linear regression and random coefficient models were selected for CS and LT analyses, respectively. The renal function decline rates per year were 1.33 and 0.95 ml/min/year for CS and LT analyses, respectively, and were slower when the repeated individual measurements were considered. The study confirms that rates are different based on statistical analyses, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rates per year with age in adults. In conclusion, our findings indicated that one should be cautious in interpreting the renal function decline rate with aging information because its estimation was highly dependent on the statistical analyses. From our analyses, a population longitudinal analysis (e.g. random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during a chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
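The two analyses contrasted above can be sketched as a cross-sectional OLS slope (one record per subject) versus a longitudinal random-coefficient model using all repeated measurements; the file and column names below are hypothetical, not the study's dataset.

```python
# Sketch of the cross-sectional (CS) vs longitudinal (LT) analyses above:
# OLS on one record per subject versus a random-coefficient mixed model on
# all repeated creatinine-clearance records. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("crcl_records.csv")   # columns: subject, age, crcl (mL/min)

# Cross-sectional analysis: earliest record per subject, simple linear regression.
cs = df.sort_values("age").groupby("subject", as_index=False).first()
cs_fit = smf.ols("crcl ~ age", data=cs).fit()

# Longitudinal analysis: random intercept and random age slope per subject.
lt_fit = smf.mixedlm("crcl ~ age", data=df, groups=df["subject"],
                     re_formula="~age").fit()

print("CS decline per year:", -cs_fit.params["age"])
print("LT decline per year:", -lt_fit.params["age"])   # population (fixed-effect) slope
```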
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford Kuofei
Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.
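The probabilistic approach described above (sample uncertain parameters, simulate flux, then use regression to rank parameter importance) can be sketched with a much-simplified steady-state flux expression; the real model is transient and three-phase, and all parameter distributions below are invented.

```python
# Toy sketch of the Monte Carlo + regression sensitivity approach above:
# sample uncertain skin-transport parameters, compute a simplified steady-state
# flux, then use standardized regression coefficients to rank importance.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
D = rng.lognormal(mean=np.log(1e-9), sigma=0.5, size=n)   # diffusivity (m2/s)
K = rng.lognormal(mean=np.log(0.5), sigma=0.7, size=n)    # skin/vehicle partition coeff.
L = rng.lognormal(mean=np.log(2e-5), sigma=0.2, size=n)   # stratum corneum thickness (m)
C = 10.0                                                  # applied concentration (kg/m3)

flux = D * K * C / L                                      # steady-state Fickian flux

# Standardized regression coefficients as a simple global sensitivity measure.
X = np.column_stack([np.log(D), np.log(K), np.log(L)])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (np.log(flux) - np.log(flux).mean()) / np.log(flux).std()
coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(dict(zip(["diffusivity", "partition", "thickness"], np.round(coef, 2))))
```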
Hess, Lisa M; Rajan, Narayan; Winfree, Katherine; Davey, Peter; Ball, Mark; Knox, Hediyyih; Graham, Christopher
2015-12-01
Health technology assessment is not required for regulatory submission or approval in either the United States (US) or Japan. This study was designed as a cross-country evaluation of cost analyses conducted in the US and Japan based on the PRONOUNCE phase III lung cancer trial, which compared pemetrexed plus carboplatin followed by pemetrexed (PemC) versus paclitaxel plus carboplatin plus bevacizumab followed by bevacizumab (PCB). Two cost analyses were conducted in accordance with International Society For Pharmacoeconomics and Outcomes Research good research practice standards. Costs were obtained based on local pricing structures; outcomes were considered equivalent based on the PRONOUNCE trial results. Other inputs were included from the trial data (e.g., toxicity rates) or from local practice sources (e.g., toxicity management). The models were compared across key input and transferability factors. Despite differences in local input data, both models demonstrated a similar direction, with the cost of PemC being consistently lower than the cost of PCB. The variation in individual input parameters did affect some of the specific categories, such as toxicity, and impacted sensitivity analyses, with the cost differential between comparators being greater in Japan than in the US. When economic models are based on clinical trial data, many inputs and outcomes are held consistent. The alterable inputs were not in and of themselves large enough to significantly impact the results between countries, which were directionally consistent with greater variation seen in sensitivity analyses. The factors that vary across jurisdictions, even when minor, can have an impact on trial-based economic analyses. Eli Lilly and Company.
Project-based learning in Geotechnics: cooperative versus collaborative teamwork
NASA Astrophysics Data System (ADS)
Pinho-Lopes, Margarida; Macedo, Joaquim
2016-01-01
Since 2007/2008 project-based learning models have been used to deliver two fundamental courses on Geotechnics at the University of Aveiro, Portugal. These models have evolved and have encompassed either cooperative or collaborative teamwork. Using data collected in five editions of each course (Soil Mechanics I and Soil Mechanics II), the different characteristics of the models using cooperative or collaborative teamwork are pointed out and analysed, namely in terms of the students' perceptions. The data collected include informal feedback from students, monitoring of their marks and academic performance, and answers to two sets of questionnaires: one developed for these courses, and one institutional. The data indicate that students have a good opinion of the project-based learning model, though collaborative teamwork is rated best. The overall efficacy of the models was analysed (the sum of their effectiveness, efficiency and attractiveness). The collaborative model was found to be more adequate.
Khanfar, Mohammad A; Banat, Fahmy; Alabed, Shada; Alqtaishat, Saja
2017-02-01
High expression of Nek2 has been detected in several types of cancer and it represents a novel target for human cancer. In the current study, structure-based pharmacophore modeling combined with multiple linear regression (MLR)-based QSAR analyses was applied to disclose the structural requirements for NEK2 inhibition. Generated pharmacophoric models were initially validated with receiver operating characteristic (ROC) curves, and optimum models were subsequently implemented in QSAR modeling with other physiochemical descriptors. QSAR-selected models were employed as 3D search filters to mine the National Cancer Institute (NCI) database for novel NEK2 inhibitors, whereas the associated QSAR model prioritized the bioactivities of captured hits for in vitro evaluation. Experimental validation identified several potent NEK2 inhibitors of novel structural scaffolds. The most potent captured hit exhibited an [Formula: see text] value of 237 nM.
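The MLR-based QSAR step described above amounts to regressing measured activity on pharmacophore-fit values plus physicochemical descriptors and then ranking database hits by predicted activity; the sketch below illustrates that idea with scikit-learn, with entirely hypothetical files, descriptor names, and activity column.

```python
# Minimal sketch of an MLR-based QSAR model used to prioritise database hits,
# in the spirit of the workflow above. All data and descriptor names are
# hypothetical; the paper's actual descriptors and models are not reproduced.
import pandas as pd
from sklearn.linear_model import LinearRegression

train = pd.read_csv("nek2_training_set.csv")     # hypothetical descriptor table
descriptors = ["pharmacophore_fit", "clogp", "tpsa", "rotatable_bonds"]
qsar = LinearRegression().fit(train[descriptors], train["pIC50"])
print("R2 (training):", round(qsar.score(train[descriptors], train["pIC50"]), 2))

hits = pd.read_csv("nci_hits.csv")               # captured database hits (hypothetical)
hits["predicted_pIC50"] = qsar.predict(hits[descriptors])
print(hits.sort_values("predicted_pIC50", ascending=False).head())
```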
Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie
2014-01-01
Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that the biomass reaction is surprisingly blocked (unable to sustain non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations. PMID:25291352
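Flux balance analysis, mentioned above, is a linear program: maximise a biomass flux subject to steady-state mass balance S v = 0 and flux bounds. The toy sketch below uses floating-point scipy, which is exactly the arithmetic MONGOOSE replaces with exact rational computation; the three-reaction network is invented for illustration.

```python
# Toy flux balance analysis: maximise a "biomass" flux subject to S v = 0
# and flux bounds, solved in floating point with scipy (MONGOOSE's point is
# that exact arithmetic avoids the solver-dependent results noted above).
import numpy as np
from scipy.optimize import linprog

# Columns: uptake, internal conversion, biomass. Rows: metabolites A and B.
S = np.array([[1, -1,  0],
              [0,  1, -1]])
bounds = [(0, 10), (0, 10), (0, 10)]
c = np.array([0, 0, -1.0])                 # linprog minimises, so negate biomass

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux:", res.x[-1])  # 10.0 for this toy network
```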
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
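The metamodeling step described above reduces to regressing each simulated PSA outcome on the standardized input parameters: the intercept approximates the base-case result and the coefficients rank parameter influence. The sketch below uses synthetic PSA draws, not the cancer cure model from the paper.

```python
# Sketch of linear regression metamodeling of PSA output: regress an outcome
# (e.g. incremental net benefit, INB) on standardized inputs; the intercept
# approximates the base case, the coefficients act as sensitivity measures.
# The PSA draws below are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
params = rng.normal(size=(n, 3))                  # already-standardized input draws
true_effect = np.array([500.0, -150.0, 30.0])     # synthetic "model" behaviour
inb = 2000.0 + params @ true_effect + rng.normal(0, 100, n)

X = np.column_stack([np.ones(n), params])
coef, *_ = np.linalg.lstsq(X, inb, rcond=None)
print("estimated base-case INB:", round(coef[0], 1))
print("parameter sensitivities:", np.round(coef[1:], 1))
```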
Vibro-Acoustic FE Analyses of the Saab 2000 Aircraft
NASA Technical Reports Server (NTRS)
Green, Inge S.
1992-01-01
A finite element model of the Saab 2000 fuselage structure and interior cavity has been created in order to compute the noise level in the passenger cabin due to propeller noise. Areas covered in viewgraph format include the following: coupled acoustic/structural noise; data base creation; frequency response analysis; model validation; and planned analyses.
Representative Structural Element - A New Paradigm for Multi-Scale Structural Modeling
2016-07-05
developed by NASA Glenn Research Center based on Aboudi's micromechanics theories [5] that provides a wide range of capabilities for modeling ... Moreover, the analyses will give a general ... interface of heterogeneous materials but also help engineers to use appropriate models for related problems based on the capability of corresponding approaches.
Regional analyses of labor markets and demography: a model based Norwegian example.
Stambol, L S; Stolen, N M; Avitsland, T
1998-01-01
The authors discuss the regional REGARD model, developed by Statistics Norway to analyze the regional implications of macroeconomic development of employment, labor force, and unemployment. "In building the model, empirical analyses of regional producer behavior in manufacturing industries have been performed, and the relation between labor market development and regional migration has been investigated. Apart from providing a short description of the REGARD model, this article demonstrates the functioning of the model, and presents some results of an application." excerpt
NASA Astrophysics Data System (ADS)
Mitchell, K. E.
2006-12-01
The Environmental Modeling Center (EMC) of the National Centers for Environmental Prediction (NCEP) applies several different analyses of observed precipitation in both the data assimilation and validation components of NCEP's global and regional numerical weather and climate prediction/analysis systems (including in NCEP global and regional reanalysis). This invited talk will survey these data assimilation and validation applications and methodologies, as well as the temporal frequency, spatial domains, spatial resolution, data sources, data density and data quality control in the precipitation analyses that are applied. Some of the precipitation analyses applied by EMC are produced by NCEP's Climate Prediction Center (CPC), while others are produced by the River Forecast Centers (RFCs) of the National Weather Service (NWS), or by automated algorithms of the NWS WSR-88D Radar Product Generator (RPG). Depending on the specific type of application in data assimilation or model forecast validation, the temporal resolution of the precipitation analyses may be hourly, daily, or pentad (5-day) and the domain may be global, continental U.S. (CONUS), or Mexico. The data sources for precipitation include ground-based gauge observations, radar-based estimates, and satellite-based estimates. The precipitation analyses over the CONUS are analyses of either hourly, daily or monthly totals of precipitation, and they are of two distinct types: gauge-only or primarily radar-estimated. The gauge-only CONUS analysis of daily precipitation utilizes an orographic-adjustment technique (based on the well-known PRISM precipitation climatology of Oregon State University) developed by the NWS Office of Hydrologic Development (OHD). The primary NCEP global precipitation analysis is the pentad CPC Merged Analysis of Precipitation (CMAP), which blends both gauge observations and satellite estimates. The presentation will include a brief comparison between the CMAP analysis and other global precipitation analyses by other institutions. Other global precipitation analyses produced by other methodologies are also used by EMC in certain applications, such as CPC's well-known satellite-IR based technique known as "GPI", and satellite-microwave based estimates from NESDIS or NASA. Finally, the presentation will cover the three assimilation methods used by EMC to assimilate precipitation data, including 1) 3D-VAR variational assimilation in NCEP's Global Data Assimilation System (GDAS), 2) direct insertion of precipitation-inferred vertical latent heating profiles in NCEP's N. American Data Assimilation System (NDAS) and its N. American Regional Reanalysis (NARR) counterpart, and 3) direct use of observed precipitation to drive the Noah land model component of NCEP's Global and N. American Land Data Assimilation Systems (GLDAS and NLDAS). In the applications of precipitation analyses in data assimilation at NCEP, the analyses are temporally disaggregated to hourly or less using time-weights calculated from A) either radar-based estimates or an analysis of hourly gauge-observations for the CONUS-domain daily precipitation analyses, or B) global model forecasts of 6-hourly precipitation (followed by linear interpolation to hourly or less) for the global CMAP precipitation analysis.
Trichloroethylene (TCE) is an industrial chemical and an environmental contaminant. TCE and its metabolites may be carcinogenic and affect human health. Physiologically based pharmacokinetic (PBPK) models that differ in compartmentalization are developed for TCE metabo...
Villamor Ordozgoiti, Alberto; Delgado Hito, Pilar; Guix Comellas, Eva María; Fernandez Sanchez, Carlos Manuel; Garcia Hernandez, Milagros; Lluch Canut, Teresa
2016-01-01
The use of Information and Communications Technologies in healthcare has increased the need to consider quality criteria through standardised processes. The aim of this study was to analyse the software quality evaluation models applicable to healthcare from the perspective of ICT-purchasers. Through a systematic literature review with the keywords software, product, quality, evaluation and health, we selected and analysed 20 original research papers published from 2005 to 2016 in health science and technology databases. The results showed four main topics: non-ISO models, software quality evaluation models based on ISO/IEC standards, studies analysing software quality evaluation models, and studies analysing ISO standards for software quality evaluation. The models provide cost-efficiency criteria for specific software, and improve use outcomes. The ISO/IEC 25000 standard is shown to be the most suitable for evaluating the quality of ICTs for healthcare use from the perspective of institutional acquisition.
Zero adjusted models with applications to analysing helminths count data.
Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N
2014-11-27
It is common in public health and epidemiology that the outcome of interest is counts of event occurrences. Analysing these data using classical linear models is mostly inappropriate, even after transformation of outcome variables, due to overdispersion. Zero-adjusted mixture count models such as zero-inflated and hurdle count models are applied to count data when over-dispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminths (S. haematobium), particularly in a case where there is a high proportion of zero counts. The data were collected during a community-based randomised control trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross sectional epidemiology survey in Zambia. Count data models including traditional (Poisson and negative binomial) models, zero modified models (zero inflated Poisson and zero inflated negative binomial) and hurdle models (Poisson logit hurdle and negative binomial logit hurdle) were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero inflated negative binomial (ZINB) models showed the best performance in both datasets. With regard to capturing zero counts, these models performed better than the other models. This paper showed that the zero modified NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between hurdle and zero-inflated models should be based on the aim and endpoints of the study.
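In the spirit of the AIC comparison above, the sketch below fits a standard Poisson model and a zero-inflated Poisson alternative with statsmodels and compares their AIC values; the file and column names are hypothetical, and the paper's preferred ZINB and hurdle fits follow the same pattern.

```python
# Sketch of comparing a standard count model with a zero-inflated alternative
# by AIC, as in the analysis above. Column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

df = pd.read_csv("helminth_counts.csv")          # egg counts plus risk factors
X = sm.add_constant(df[["age", "treated"]])
y = df["egg_count"]

poisson_fit = sm.Poisson(y, X).fit()
zip_fit = ZeroInflatedPoisson(y, X, exog_infl=X).fit()

print("Poisson AIC:", round(poisson_fit.aic, 1))
print("ZIP AIC:    ", round(zip_fit.aic, 1))     # lower AIC = better fit
```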
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as (1) measurement and computational errors, (2) uncertainties in the conceptual model and model-parameter estimates, and (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on coupling Particle Swarm and Levenberg-Marquardt optimization methods, which has proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
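A minimal sketch of the sampling-plus-local-refinement idea, assuming scipy: Latin-Hypercube starting points seed Levenberg-Marquardt refinements of a toy decay model, and the best calibrated parameter set is kept. This is a simplified multistart stand-in, not the Particle Swarm/Levenberg-Marquardt coupling implemented in MADS, and the model and bounds are invented.

```python
# Multistart calibration sketch: Latin-Hypercube design of starting points, each
# refined by Levenberg-Marquardt least squares; keep the lowest-misfit solution.
import numpy as np
from scipy.stats import qmc
from scipy.optimize import least_squares

def residuals(theta, t, obs):
    """Misfit between a toy two-parameter decay model and observations."""
    a, k = theta
    return a * np.exp(-k * t) - obs

t = np.linspace(0.0, 10.0, 25)
obs = 3.0 * np.exp(-0.7 * t) + np.random.default_rng(0).normal(0, 0.05, t.size)

lo, hi = np.array([0.1, 0.01]), np.array([10.0, 5.0])          # parameter ranges
starts = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(20), lo, hi)

fits = [least_squares(residuals, x0, args=(t, obs), method="lm") for x0 in starts]
best = min(fits, key=lambda r: r.cost)
print("best-fit parameters:", best.x)
```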
ERIC Educational Resources Information Center
Thompson, Kate; Reimann, Peter
2010-01-01
A classification system that was developed for the use of agent-based models was applied to strategies used by school-aged students to interrogate an agent-based model and a system dynamics model. These were compared, and relationships between learning outcomes and the strategies used were also analysed. It was found that the classification system…
Model authoring system for fail safe analysis
NASA Technical Reports Server (NTRS)
Sikora, Scott E.
1990-01-01
The Model Authoring System is a prototype software application for generating fault tree analyses and failure mode and effects analyses for circuit designs. Using established artificial intelligence and expert system techniques, circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object-oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which are searched to generate either a graphical fault tree analysis or a failure modes and effects analysis. Sophisticated authoring techniques allow the circuit to be easily modeled, permit its behavior to be quickly defined, and provide abstraction features to deal with complexity.
Analysis hierarchical model for discrete event systems
NASA Astrophysics Data System (ADS)
Ciortea, E. M.
2015-11-01
This paper presents a hierarchical model based on discrete event networks for robotic systems. In the hierarchical approach, the Petri net is analysed as a network spanning from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for modelling and control of complex robotic systems. The system is structured, controlled and analysed using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented on computers and analysed using specialized programs. Implementation of the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models can be simplified to run on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets, which provide a pragmatic tool for modelling industrial discrete event systems. To highlight auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. The proposed simulation of the robotic system using timed Petri nets offers the opportunity to view the timing behaviour of the robot: from transport and transmission times measured on the spot, graphics are obtained showing the average time for the transport activity for individual sets of finished products.
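To make the token-based view concrete, here is a toy Petri net "token game" in Python. It is purely illustrative (the paper uses the Visual Object Net++ package), and the two-transition pick-and-transport net is invented for this sketch.

```python
# Minimal Petri net sketch: places hold tokens, and a transition fires only when all
# of its input places are marked; firing moves tokens from inputs to outputs.
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    marking: dict                                     # tokens per place
    transitions: dict = field(default_factory=dict)   # name -> (input places, output places)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] > 0 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

# Toy two-level example: a robot picks a part, then transports it and becomes idle again.
net = PetriNet(marking={"part_ready": 1, "robot_idle": 1, "holding": 0, "delivered": 0},
               transitions={"pick": (["part_ready", "robot_idle"], ["holding"]),
                            "transport": (["holding"], ["delivered", "robot_idle"])})
net.fire("pick")
net.fire("transport")
print(net.marking)   # {'part_ready': 0, 'robot_idle': 1, 'holding': 0, 'delivered': 1}
```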
Advanced Computational Framework for Environmental Management ZEM, Version 1.x
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vesselinov, Velimir V.; O'Malley, Daniel; Pandey, Sachin
2016-11-04
Typically, environmental management problems require analysis of large and complex data sets originating from concurrent data streams with different data collection frequencies and pedigree. These big data sets require on-the-fly integration into a series of models of different complexity for various types of model analyses, where the data are applied as soft and hard model constraints. This is needed to provide fast iterative model analyses based on the latest available data to guide decision-making. Furthermore, the data and models are associated with uncertainties. The uncertainties are probabilistic (e.g. measurement errors) and non-probabilistic (unknowns, e.g. alternative conceptual models characterizing site conditions). To address all of these issues, we have developed ZEM, an integrated framework for real-time data and model analyses for environmental decision-making. The framework allows for seamless and on-the-fly integration of data and modeling results for robust and scientifically defensible decision-making, applying advanced decision analysis tools such as Bayesian-Information-Gap Decision Theory (BIG-DT). The framework also includes advanced optimization methods that are capable of dealing with a large number of unknown model parameters, and surrogate (reduced-order) modeling capabilities based on support vector regression techniques. The framework is coded in Julia, a state-of-the-art high-performance programming language (http://julialang.org). The ZEM framework is open-source, can be applied to any environmental management site, and is released under the GPL V3 license.
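A small illustration of the surrogate-modelling idea mentioned above, assuming scikit-learn: a support vector regression emulator is trained on a handful of runs of a stand-in "expensive" model and then queried cheaply. This is not part of ZEM itself (which is written in Julia); the toy model and parameter ranges are hypothetical.

```python
# Train an SVR surrogate on a few runs of a stand-in process model, then use the
# surrogate for cheap predictions at new parameter values.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def expensive_model(x):
    """Stand-in for a costly simulation with two input parameters."""
    return np.sin(x[:, 0]) * np.exp(-0.3 * x[:, 1])

rng = np.random.default_rng(42)
X_train = rng.uniform(-2, 2, size=(60, 2))      # small design of training runs
y_train = expensive_model(X_train)

surrogate = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
surrogate.fit(X_train, y_train)

X_new = rng.uniform(-2, 2, size=(5, 2))
print("surrogate :", np.round(surrogate.predict(X_new), 3))
print("full model:", np.round(expensive_model(X_new), 3))
```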
ERIC Educational Resources Information Center
Pringle, James; Huisman, Jeroen
2011-01-01
In analyses of higher education systems, many models and frameworks are based on governance, steering, or coordination models. Although much can be gained by such analyses, we argue that the language used in the present-day policy documents (knowledge economy, competitive position, etc.) calls for an analysis of higher education as an industry. In…
Sybil--efficient constraint-based modelling in R.
Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J
2013-11-13
Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).
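Constraint-based analyses such as FBA reduce to a linear program: maximise an objective flux subject to steady-state mass balance and flux bounds. The sketch below shows that structure in Python with scipy (sybil itself is an R package); the three-reaction network is invented purely for illustration.

```python
# Toy flux-balance analysis: maximise a "biomass" flux subject to S v = 0 and bounds.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix for a tiny hypothetical network (rows = metabolites A, B;
# columns = fluxes: uptake of A, conversion A -> B, biomass drain on B).
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
bounds = [(0, 10), (0, 10), (0, 10)]      # lower/upper flux bounds
c = np.array([0.0, 0.0, -1.0])            # maximise biomass flux (minimise its negative)

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", res.x, "objective:", -res.fun)
```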
A decision model for cost effective design of biomass based green energy supply chains.
Yılmaz Balaman, Şebnem; Selim, Hasan
2015-09-01
The core driver of this study is to deal with the design of anaerobic digestion based biomass to energy supply chains in a cost effective manner. In this concern, a decision model is developed. The model is based on fuzzy multi objective decision making in order to simultaneously optimize multiple economic objectives and tackle the inherent uncertainties in the parameters and decision makers' aspiration levels for the goals. The viability of the decision model is explored with computational experiments on a real-world biomass to energy supply chain and further analyses are performed to observe the effects of different conditions. To this aim, scenario analyses are conducted to investigate the effects of energy crop utilization and operational costs on supply chain structure and performance measures. Copyright © 2015 Elsevier Ltd. All rights reserved.
Chang, Hsien-Yen; Weiner, Jonathan P
2010-01-18
Diagnosis-based risk adjustment is becoming an important issue globally as a result of its implications for payment, high-risk predictive modelling and provider performance assessment. The Taiwanese National Health Insurance (NHI) programme provides universal coverage and maintains a single national computerized claims database, which enables the application of diagnosis-based risk adjustment. However, research regarding risk adjustment is limited. This study aims to examine the performance of the Adjusted Clinical Group (ACG) case-mix system using claims-based diagnosis information from the Taiwanese NHI programme. A random sample of NHI enrollees was selected. Those continuously enrolled in 2002 were included for concurrent analyses (n = 173,234), while those in both 2002 and 2003 were included for prospective analyses (n = 164,562). Health status measures derived from 2002 diagnoses were used to explain the 2002 and 2003 health expenditure. A multivariate linear regression model was adopted after comparing the performance of seven different statistical models. Split-validation was performed in order to avoid overfitting. The performance measures were adjusted R2 and mean absolute prediction error of five types of expenditure at individual level, and predictive ratio of total expenditure at group level. The more comprehensive models performed better when used for explaining resource utilization. Adjusted R2 of total expenditure in concurrent/prospective analyses were 4.2%/4.4% in the demographic model, 15%/10% in the ACGs or ADGs (Aggregated Diagnosis Group) model, and 40%/22% in the models containing EDCs (Expanded Diagnosis Cluster). When predicting expenditure for groups based on expenditure quintiles, all models underpredicted the highest expenditure group and overpredicted the four other groups. For groups based on morbidity burden, the ACGs model had the best performance overall. Given the widespread availability of claims data and the superior explanatory power of claims-based risk adjustment models over demographics-only models, Taiwan's government should consider using claims-based models for policy-relevant applications. The performance of the ACG case-mix system in Taiwan was comparable to that found in other countries. This suggested that the ACG system could be applied to Taiwan's NHI even though it was originally developed in the USA. Many of the findings in this paper are likely to be relevant to other diagnosis-based risk adjustment methodologies.
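For readers who want to reproduce the style of validation described above, the sketch below uses simulated data (the NHI claims are not public) and scikit-learn, with a plain linear expenditure model standing in for the ACG-based models; the covariates and coefficients are hypothetical.

```python
# Split-validation sketch: fit a linear expenditure model on one half of the data and
# report adjusted R-squared and mean absolute prediction error on the other half.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(0)
n, p = 5000, 8
X = rng.normal(size=(n, p))                                    # stand-ins for risk adjusters
y = 100 + X @ rng.uniform(5, 30, p) + rng.normal(0, 80, n)     # simulated expenditure

X_fit, X_val, y_fit, y_val = train_test_split(X, y, test_size=0.5, random_state=1)
model = LinearRegression().fit(X_fit, y_fit)
pred = model.predict(X_val)

r2 = r2_score(y_val, pred)
adj_r2 = 1 - (1 - r2) * (len(y_val) - 1) / (len(y_val) - p - 1)
print(f"adjusted R^2 = {adj_r2:.3f}, "
      f"mean absolute prediction error = {mean_absolute_error(y_val, pred):.1f}")
```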
DOT National Transportation Integrated Search
2010-09-01
This report presents data and technical analyses for Texas Department of Transportation Project 0-5235. This : project focused on the evaluation of traffic sign sheeting performance in terms of meeting the nighttime : driver needs. The goal was to de...
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Perry, Boyd, III; Florance, James R.; Sanetrik, Mark D.; Wieseman, Carol D.; Stevens, William L.; Funk, Christie J.; Hur, Jiyoung; Christhilf, David M.; Coulson, David A.
2011-01-01
A summary of computational and experimental aeroelastic and aeroservoelastic (ASE) results for the Semi-Span Super-Sonic Transport (S4T) wind-tunnel model is presented. A broad range of analyses and multiple ASE wind-tunnel tests of the S4T have been performed in support of the ASE element in the Supersonics Program, part of NASA's Fundamental Aeronautics Program. The computational results to be presented include linear aeroelastic and ASE analyses, nonlinear aeroelastic analyses using an aeroelastic CFD code, and rapid aeroelastic analyses using CFD-based reduced-order models (ROMs). Experimental results from two closed-loop wind-tunnel tests performed at NASA Langley's Transonic Dynamics Tunnel (TDT) will be presented as well.
Mental Models about Seismic Effects: Students' Profile Based Comparative Analysis
ERIC Educational Resources Information Center
Moutinho, Sara; Moura, Rui; Vasconcelos, Clara
2016-01-01
Nowadays, meaningful learning takes a central role in science education and is based in mental models that allow the representation of the real world by individuals. Thus, it is essential to analyse the student's mental models by promoting an easier reconstruction of scientific knowledge, by allowing them to become consistent with the curricular…
The Development and Application of the Explanatory Model of School Dysfunctions
ERIC Educational Resources Information Center
Bergman, Manfred Max; Bergman, Zinette; Gravett, Sarah
2011-01-01
This article develops the Explanatory Model of School Dysfunctions based on 80 essays of school principals and their representatives in Gauteng. It reveals the degree and kinds of school dysfunctions, as well as their interconnectedness with actors, networks, and domains. The model provides a basis for theory-based analyses of specific…
NASA Astrophysics Data System (ADS)
Pandey, S.; Vesselinov, V. V.; O'Malley, D.; Karra, S.; Hansen, S. K.
2016-12-01
Models and data are used to characterize the extent of contamination and remediation, both of which are dependent upon the complex interplay of processes ranging from geochemical reactions, microbial metabolism, and pore-scale mixing to heterogeneous flow and external forcings. Characterization is fraught with important uncertainties related to the model itself (e.g. conceptualization, model implementation, parameter values) and the data used for model calibration (e.g. sparsity, measurement errors). This research consists of two primary components: (1) developing numerical models that incorporate the complex hydrogeology and biogeochemistry that drive groundwater contamination and remediation; and (2) utilizing novel techniques for data/model-based analyses (such as parameter calibration and uncertainty quantification) to aid in decision support for optimal uncertainty reduction related to characterization and remediation of contaminated sites. The reactive transport models are developed using PFLOTRAN and are capable of simulating a wide range of biogeochemical and hydrologic conditions that affect the migration and remediation of groundwater contaminants under diverse field conditions. Data/model-based analyses are achieved using MADS, which utilizes Bayesian methods and Information Gap theory to address the data/model uncertainties discussed above. We also use these tools to evaluate different models, which vary in complexity, in order to weigh and rank models based on model accuracy (in representation of existing observations), model parsimony (everything else being equal, models with a smaller number of parameters are preferred), and model robustness (related to model predictions of unknown future states). These analyses are carried out on synthetic problems, but are directly related to real-world problems; for example, the modeled processes and data inputs are consistent with the conditions at the Los Alamos National Laboratory contamination sites (RDX and Chromium).
Johnson, Brent A
2009-10-01
We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such a partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables, but the clinical effects are assumed to be nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.
NASA Astrophysics Data System (ADS)
Cook, B.; Anchukaitis, K. J.
2017-12-01
Comparative analyses of paleoclimate reconstructions and climate model simulations can provide valuable insights into past and future climate events. Conducting meaningful and quantitative comparisons, however, can be difficult for a variety of reasons. Here, we use tree-ring based hydroclimate reconstructions to discuss some best practices for paleoclimate-model comparisons, highlighting recent studies that have successfully used this approach. These analyses have improved our understanding of the Medieval-era megadroughts, ocean forcing of large scale drought patterns, and even climate change contributions to future drought risk. Additional work is needed, however, to better reconcile and formalize uncertainties across observed, modeled, and reconstructed variables. In this regard, process based forward models of proxy-systems will likely be a critical tool moving forward.
Formation of Apprenticeships in the Swedish Education System: Different Stakeholder Perspectives
ERIC Educational Resources Information Center
Andersson, Ingela; Wärvik, Gun-Britt; Thång, Per-Olof
2015-01-01
The article explores the major features of the Swedish Government's new initiative--a school based Upper Secondary Apprenticeship model. The analyses are guided by activity theory. The analysed texts are part of the parliamentary reform-making process of the 2011 Upper Secondary School reform. The analyses unfold how the Government, the Swedish…
Chapter 37: Population Trends of the Marbled Murrelet Projected From Demographic Analyses
Steven B. Beissinger
1995-01-01
A demographic model of the Marbled Murrelet is developed to explore likely population trends and factors influencing them. The model was structured to use field data on juvenile ratios, collected near the end of the breeding season and corrected for date of census, to estimate fecundity. Survivorship was estimated for the murrelet based on comparative analyses of...
Static and dynamic deflection studies of the SRM aft case-nozzle joint
NASA Technical Reports Server (NTRS)
Christian, David C.; Kos, Lawrence D.; Torres, Isaias
1989-01-01
The redesign of the joints on the solid rocket motor (SRM) has prompted the need for analyzing the behavior of the joints using several different types of analyses. The types of analyses performed include modal analysis, static analysis, transient response analysis, and base driving response analysis. The forces used in these analyses to drive the mathematical model include SRM internal chamber pressure, nozzle blowout and side forces, shuttle vehicle lift-off dynamics, SRM pressure transient rise curve, gimbal forces and moments, actuator gimbal loads, and vertical and radial bolt preloads. The math model represented the SRM from the aft base tangent point (1,823.95 in) all the way back to the nozzle, where a simplified, tuned nozzle model was attached. The new design used the radial bolts as an additional feature to reduce the gap opening at the aft dome/nozzle fixed housing interface.
Modeling and Simulation of Metallurgical Process Based on Hybrid Petri Net
NASA Astrophysics Data System (ADS)
Ren, Yujuan; Bao, Hong
2016-11-01
In order to achieve the goals of energy saving and emission reduction in iron and steel enterprises, an increasing number of modeling and simulation technologies are used to research and analyse the metallurgical production process. In this paper, the basic principles of Hybrid Petri nets are used to model and analyse the metallurgical process. Firstly, the definition of the Hybrid Petri Net System of Metallurgical Process (MPHPNS) and its modeling theory are proposed. Secondly, a model of MPHPNS based on material flow is constructed. The dynamic flow of materials and the real-time change of each technological state in the metallurgical process are simulated vividly using this model. The simulation process implements interaction between the continuous event dynamic system and the discrete event dynamic system at the same level, and plays a positive role in production decision-making.
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most updated information and newly added models.
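As a complement to the Java applets described above, the same textbook comparisons can be sketched in a few lines of open-source Python. This is not SOCR code, only an illustration of the parametric and non-parametric tests mentioned, assuming scipy and simulated samples.

```python
# Compare two simulated samples with a parametric t-test and two non-parametric tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = rng.normal(10.0, 2.0, 40)
b = rng.normal(11.0, 2.0, 40)

print("t-test:          ", stats.ttest_ind(a, b))
print("Wilcoxon rank sum:", stats.ranksums(a, b))
print("Kruskal-Wallis:  ", stats.kruskal(a, b))
```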
NASA Astrophysics Data System (ADS)
Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.
2017-01-01
In stock investments, investors also face risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Establishing a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses mean-variance optimization of a stock portfolio with non-constant mean and volatility, based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization process is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyse some Islamic stocks in Indonesia. The expected result is to obtain the proportion of investment in each Islamic stock analysed.
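To make the optimization step concrete, the sketch below solves the classical mean-variance problem via the Lagrangian (KKT) linear system, assuming the ARMA/GARCH stage has already supplied expected returns and a covariance matrix; all numbers are hypothetical and this is not the paper's code.

```python
# Minimum-variance portfolio with a target return: the Lagrangian first-order
# conditions reduce to a linear system for the weights and the two multipliers.
import numpy as np

mu = np.array([0.010, 0.014, 0.008])          # assumed expected returns (e.g. ARMA forecasts)
Sigma = np.array([[0.040, 0.006, 0.004],      # assumed covariance (e.g. from GARCH volatilities)
                  [0.006, 0.050, 0.005],
                  [0.004, 0.005, 0.030]])
target = 0.012                                # required portfolio return

ones = np.ones_like(mu)
# KKT system for: min w' Sigma w  subject to  mu'w = target and 1'w = 1
KKT = np.block([[2 * Sigma, mu[:, None], ones[:, None]],
                [mu[None, :], np.zeros((1, 2))],
                [ones[None, :], np.zeros((1, 2))]])
rhs = np.concatenate([np.zeros(3), [target, 1.0]])
w = np.linalg.solve(KKT, rhs)[:3]             # first three entries are the weights
print("optimal weights:", np.round(w, 4))
```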
Xu, Youjun; Wang, Shiwei; Hu, Qiwan; Gao, Shuaishi; Ma, Xiaomin; Zhang, Weilin; Shen, Yihang; Chen, Fangjin; Lai, Luhua; Pei, Jianfeng
2018-05-10
CavityPlus is a web server that offers protein cavity detection and various functional analyses. Using protein three-dimensional structural information as the input, CavityPlus applies CAVITY to detect potential binding sites on the surface of a given protein structure and rank them based on ligandability and druggability scores. These potential binding sites can be further analysed using three submodules, CavPharmer, CorrSite, and CovCys. CavPharmer uses a receptor-based pharmacophore modelling program, Pocket, to automatically extract pharmacophore features within cavities. CorrSite identifies potential allosteric ligand-binding sites based on motion correlation analyses between cavities. CovCys automatically detects druggable cysteine residues, which is especially useful to identify novel binding sites for designing covalent allosteric ligands. Overall, CavityPlus provides an integrated platform for analysing comprehensive properties of protein binding cavities. Such analyses are useful for many aspects of drug design and discovery, including target selection and identification, virtual screening, de novo drug design, and allosteric and covalent-binding drug design. The CavityPlus web server is freely available at http://repharma.pku.edu.cn/cavityplus or http://www.pkumdl.cn/cavityplus.
Constitutive modeling for isotropic materials (HOST)
NASA Technical Reports Server (NTRS)
Lindholm, U. S.; Chan, K. S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.; Cassenti, B. N.
1985-01-01
This report presents the results of the second year of work on a problem which is part of the NASA HOST Program. Its goals are: (1) to develop and validate unified constitutive models for isotropic materials, and (2) to demonstrate their usefulness for structural analyses of hot section components of gas turbine engines. The unified models selected for development and evaluation are those of Bodner-Partom and Walker. For model evaluation purposes, a large constitutive data base is generated for a B1900 + Hf alloy by performing uniaxial tensile, creep, cyclic, stress relaxation, and thermomechanical fatigue (TMF) tests as well as biaxial (tension/torsion) tests under proportional and nonproportional loading over a wide range of strain rates and temperatures. Systematic approaches for evaluating material constants from a small subset of the data base are developed. Correlations of the uniaxial and biaxial test data with the theories of Bodner-Partom and Walker are performed to establish the accuracy, range of applicability, and integrability of the models. Both models are implemented in the MARC finite element computer code and used for TMF analyses. Benchmark notch round experiments are conducted and the results compared with finite-element analyses using the MARC code and the Walker model.
Integration of Engine, Plume, and CFD Analyses in Conceptual Design of Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Li, Wu; Campbell, Richard; Geiselhart, Karl; Shields, Elwood; Nayani, Sudheer; Shenoy, Rajiv
2009-01-01
This paper documents an integration of engine, plume, and computational fluid dynamics (CFD) analyses in the conceptual design of low-boom supersonic aircraft, using a variable fidelity approach. In particular, the Numerical Propulsion Simulation System (NPSS) is used for propulsion system cycle analysis and nacelle outer mold line definition, and a low-fidelity plume model is developed for plume shape prediction based on NPSS engine data and nacelle geometry. This model provides a capability for the conceptual design of low-boom supersonic aircraft that accounts for plume effects. Then a newly developed process for automated CFD analysis is presented for CFD-based plume and boom analyses of the conceptual geometry. Five test cases are used to demonstrate the integrated engine, plume, and CFD analysis process based on a variable fidelity approach, as well as the feasibility of the automated CFD plume and boom analysis capability.
Superelement Analysis of Tile-Reinforced Composite Armor
NASA Technical Reports Server (NTRS)
Davila, Carlos G.
1998-01-01
Super-elements can greatly improve the computational efficiency of analyses of tile-reinforced structures such as the hull of the Composite Armored Vehicle. By taking advantage of the periodicity in this type of construction, super-elements can be used to simplify the task of modeling, to virtually eliminate the time required to assemble the stiffness matrices, and to reduce significantly the analysis solution time. Furthermore, super-elements are fully transferable between analyses and analysts, so that they provide a consistent method to share information and reduce duplication. This paper describes a methodology that was developed to model and analyze large upper hull components of the Composite Armored Vehicle. The analyses are based on two types of superelement models. The first type is based on element-layering, which consists of modeling a laminate by using several layers of shell elements constrained together with compatibility equations. Element layering is used to ensure the proper transverse shear deformation in the laminate rubber layer. The second type of model uses three-dimensional elements. Since no graphical pre-processor currently supports super-elements, a special technique based on master-elements was developed. Master-elements are representations of super-elements that are used in conjunction with a custom translator to write the superelement connectivities as input decks for ABAQUS.
A Model-Based Cluster Analysis of Maternal Emotion Regulation and Relations to Parenting Behavior.
Shaffer, Anne; Whitehead, Monica; Davis, Molly; Morelen, Diana; Suveg, Cynthia
2017-10-15
In a diverse community sample of mothers (N = 108) and their preschool-aged children (M age = 3.50 years), this study conducted person-oriented analyses of maternal emotion regulation (ER) based on a multimethod assessment incorporating physiological, observational, and self-report indicators. A model-based cluster analysis was applied to five indicators of maternal ER: maternal self-report, observed negative affect in a parent-child interaction, baseline respiratory sinus arrhythmia (RSA), and RSA suppression across two laboratory tasks. Model-based cluster analyses revealed four maternal ER profiles, including a group of mothers with average ER functioning, characterized by socioeconomic advantage and more positive parenting behavior. A dysregulated cluster demonstrated the greatest challenges with parenting and dyadic interactions. Two clusters of intermediate dysregulation were also identified. Implications for assessment and applications to parenting interventions are discussed.
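A minimal sketch of the clustering step, using a Gaussian-mixture implementation (scikit-learn) as a stand-in for the model-based cluster analysis used in the paper; the simulated indicator data and the profile structure are purely illustrative, and the number of clusters is chosen by BIC.

```python
# Model-based clustering sketch: fit Gaussian mixtures with 1-5 components to
# simulated ER indicators and select the number of profiles by BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Two simulated latent profiles across five standardised ER indicators (108 mothers).
group1 = rng.normal(0.5, 1.0, size=(60, 5))
group2 = rng.normal(-0.8, 1.0, size=(48, 5))
X = np.vstack([group1, group2])

bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 6)}
best_k = min(bics, key=bics.get)
labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(X)
print("BIC by k:", {k: round(v, 1) for k, v in bics.items()}, "-> chosen k =", best_k)
```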
Elsawah, Sondoss; Guillaume, Joseph H A; Filatova, Tatiana; Rook, Josefine; Jakeman, Anthony J
2015-03-15
This paper aims to contribute to developing better ways for incorporating essential human elements in decision making processes for modelling of complex socio-ecological systems. It presents a step-wise methodology for integrating perceptions of stakeholders (qualitative) into formal simulation models (quantitative) with the ultimate goal of improving understanding and communication about decision making in complex socio-ecological systems. The methodology integrates cognitive mapping and agent-based modelling. It cascades through a sequence of qualitative/soft and numerical methods comprising: (1) interviews to elicit mental models; (2) cognitive maps to represent and analyse individual and group mental models; (3) time-sequence diagrams to chronologically structure the decision making process; (4) an all-encompassing conceptual model of decision making; and (5) a computational (in this case agent-based) model. We apply the proposed methodology (labelled ICTAM) in a case study of viticulture irrigation in South Australia. Finally, we use strengths-weaknesses-opportunities-threats (SWOT) analysis to reflect on the methodology. Results show that the methodology leverages the use of cognitive mapping to capture the richness of decision making and mental models, and provides a combination of divergent and convergent analysis methods leading to the construction of an agent-based model.
Reduced size first-order subsonic and supersonic aeroelastic modeling
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1990-01-01
Various aeroelastic, aeroservoelastic, dynamic-response, and sensitivity analyses are based on a time-domain first-order (state-space) formulation of the equations of motion. The formulation of this paper is based on the minimum-state (MS) aerodynamic approximation method, which yields a low number of aerodynamic augmenting states. Modifications of the MS and the physical weighting procedures make the modeling method even more attractive. The flexibility of constraint selection is increased without increasing the approximation problem size; the accuracy of dynamic residualization of high-frequency modes is improved; and the resulting model is less sensitive to parametric changes in subsequent analyses. Applications to subsonic and supersonic cases demonstrate the generality, flexibility, accuracy, and efficiency of the method.
Digital data processing system dynamic loading analysis
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Tucker, A. E.
1976-01-01
Simulation and analysis of the Space Shuttle Orbiter Digital Data Processing System (DDPS) are reported. The mated flight and postseparation flight phases of the space shuttle's approach and landing test configuration were modeled utilizing the Information Management System Interpretative Model (IMSIM) in a computerized simulation modeling of the ALT hardware, software, and workload. System requirements simulated for the ALT configuration were defined. Sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and the sensitivity analyses, a test design is described for adapting, parameterizing, and executing the IMSIM. Varying load and stress conditions for the model execution are given. The analyses of the computer simulation runs were documented as results, conclusions, and recommendations for DDPS improvements.
Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Becker, D. A.
1977-01-01
Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the ascent test (OFT) configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation modeling of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.
Analyses for Debonding of Stitched Composite Sandwich Structures Using Improved Constitutive Models
NASA Technical Reports Server (NTRS)
Glaessgen, E. H.; Sleight, D. W.; Krishnamurthy, T.; Raju, I. S.
2001-01-01
A fracture mechanics analysis based on strain energy release rates is used to study the effect of stitching in bonded sandwich beam configurations. Finite elements are used to model the configurations. The stitches were modeled as discrete nonlinear spring elements with a compliance determined by experiment. The constitutive models were developed using the results of flatwise tension tests from sandwich material rather than monolithic material. The analyses show that increasing stitch stiffness, stitch density and debond length decrease strain energy release rates for a fixed applied load.
ERIC Educational Resources Information Center
Zigic, Sasha; Lemckert, Charles J.
2007-01-01
The following paper presents a computer-based learning strategy to assist in introducing and teaching water quality modelling to undergraduate civil engineering students. As part of the learning strategy, an interactive computer-based instructional (CBI) aid was specifically developed to assist students to set up, run and analyse the output from a…
Impact of implementation choices on quantitative predictions of cell-based computational models
NASA Astrophysics Data System (ADS)
Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.
2017-09-01
'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
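The time-step sensitivity described above can be illustrated on a deliberately trivial surrogate of the force-based vertex update. The sketch below is not the authors' vertex-model code, only a one-dimensional caricature (overdamped relaxation of a single vertex towards a target position) showing how an explicit-Euler step that is too large changes the predicted behaviour.

```python
# Explicit-Euler relaxation of one "vertex" towards a target: small steps converge
# smoothly, while steps beyond the stability limit (dt < 2/k here) oscillate or diverge.
import numpy as np

def relax(x0, target, k, dt, steps):
    """Euler update x <- x + dt * k * (target - x)."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = x + dt * k * (target - x)
        trajectory.append(x)
    return np.array(trajectory)

for dt in (0.1, 1.5, 2.5):
    traj = relax(x0=1.0, target=0.0, k=1.0, dt=dt, steps=10)
    print(f"dt={dt}: final position {traj[-1]: .3f}")
```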
Linkage and related analyses of Barrett's esophagus and its associated adenocarcinomas.
Sun, Xiangqing; Elston, Robert; Falk, Gary W; Grady, William M; Faulx, Ashley; Mittal, Sumeet K; Canto, Marcia I; Shaheen, Nicholas J; Wang, Jean S; Iyer, Prasad G; Abrams, Julian A; Willis, Joseph E; Guda, Kishore; Markowitz, Sanford; Barnholtz-Sloan, Jill S; Chandar, Apoorva; Brock, Wendy; Chak, Amitabh
2016-07-01
Familial aggregation and segregation analysis studies have provided evidence of a genetic basis for esophageal adenocarcinoma (EAC) and its premalignant precursor, Barrett's esophagus (BE). We aim to demonstrate the utility of linkage analysis to identify the genomic regions that might contain the genetic variants that predispose individuals to this complex trait (BE and EAC). We genotyped 144 individuals in 42 multiplex pedigrees chosen from 1000 singly ascertained BE/EAC pedigrees, and performed both model-based and model-free linkage analyses, using S.A.G.E. and other software. Segregation models were fitted, from the data on both the 42 pedigrees and the 1000 pedigrees, to determine parameters for performing model-based linkage analysis. Model-based and model-free linkage analyses were conducted in two sets of pedigrees: the 42 pedigrees and a subset of 18 pedigrees with female affected members that are expected to be more genetically homogeneous. Genome-wide associations were also tested in these families. Linkage analyses on the 42 pedigrees identified several regions consistently suggestive of linkage by different linkage analysis methods on chromosomes 2q31, 12q23, and 4p14. A linkage on 15q26 is the only consistent linkage region identified in the 18 female-affected pedigrees, in which the linkage signal is higher than in the 42 pedigrees. Other tentative linkage signals are also reported. Our linkage study of BE/EAC pedigrees identified linkage regions on chromosomes 2, 4, 12, and 15, with some reported associations located within our linkage peaks. Our linkage results can help prioritize association tests to delineate the genetic determinants underlying susceptibility to BE and EAC.
The transition to emerging revenue models.
Harris, John M; Hemnani, Rashi
2013-04-01
A financial assessment aimed at gauging the true impact of the healthcare industry's new value-based payment models for a health system should begin with separate analyses of the following: the direct contract results; the impact of volume changes on net income; the impact of operational improvements; and net income at risk from competitor actions. The results of these four analyses then should be evaluated in combination to identify the ultimate impact of the new revenue models on the health system's bottom line.
Error Generation in CATS-Based Agents
NASA Technical Reports Server (NTRS)
Callantine, Todd
2003-01-01
This research presents a methodology for generating errors from a model of nominally preferred correct operator activities, given a particular operational context, and maintaining an explicit link to the erroneous contextual information to support analyses. It uses the Crew Activity Tracking System (CATS) model as the basis for error generation. This report describes how the process works, and how it may be useful for supporting agent-based system safety analyses. The report presents results obtained by applying the error-generation process and discusses implementation issues. The research is supported by the System-Wide Accident Prevention Element of the NASA Aviation Safety Program.
NASA Astrophysics Data System (ADS)
Hanrahan, Mary
1994-12-01
This paper presents a model for the type of classroom environment believed to facilitate scientific conceptual change. A survey based on this model contains items about students' motivational beliefs, their study approach and their perceptions of their teacher's actions and learning goal orientation. Results obtained from factor analyses, correlations and analyses of variance, based on responses from 113 students, suggest that an empowering interpersonal teacher-student relationship is related to a deep approach to learning, a positive attitude to science, and positive self-efficacy beliefs, and may be increased by a constructivist approach to teaching.
We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...
Method of performing computational aeroelastic analyses
NASA Technical Reports Server (NTRS)
Silva, Walter A. (Inventor)
2011-01-01
Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milani, Gabriele, E-mail: milani@stru.polimi.it; Valente, Marco, E-mail: milani@stru.polimi.it
2014-10-06
This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models and two different horizontal load distributions; (b) FE kinematic limit analyses performed using a non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is considerably lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial for the considerable reduction of the seismic vulnerability of this kind of historical structure.
NASA Astrophysics Data System (ADS)
Jonker, C. M.; Snoep, J. L.; Treur, J.; Westerhoff, H. V.; Wijngaards, W. C. A.
Within the areas of Computational Organisation Theory and Artificial Intelligence, techniques have been developed to simulate and analyse dynamics within organisations in society. Usually these modelling techniques are applied to factories and to the internal organisation of their process flows, thus obtaining models of complex organisations at various levels of aggregation. The dynamics in living cells are often interpreted in terms of well-organised processes, a bacterium being considered a (micro)factory. This suggests that organisation modelling techniques may also benefit their analysis. Using the example of Escherichia coli it is shown how indeed agent-based organisational modelling techniques can be used to simulate and analyse E.coli's intracellular dynamics. Exploiting the abstraction levels entailed by this perspective, a concise model is obtained that is readily simulated and analysed at the various levels of aggregation, yet shows the cell's essential dynamic patterns.
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; den Hertog, Dick; Siem, Alex Y. D.; Kaanders, Johannes H. A. M.; Huizenga, Henk
2008-11-01
Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
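For readers unfamiliar with the LQ-Poisson TCP referred to above, the sketch below evaluates its standard textbook form (without the inter-patient heterogeneity treated in the paper); the radiosensitivity parameters and clonogen number are hypothetical and chosen only to produce a plausible dose response.

```python
# LQ-Poisson tumour control probability: TCP = exp(-N0 * exp(-(alpha*D + beta*d*D)))
# for a uniform total dose D delivered in fractions of size d.
import numpy as np

def tcp_lq_poisson(D, d, alpha=0.3, beta=0.03, N0=1e7):
    """TCP under the LQ cell-survival model with Poisson statistics (illustrative parameters)."""
    surviving = N0 * np.exp(-(alpha * D + beta * d * D))
    return np.exp(-surviving)

for D in (50.0, 60.0, 70.0):                 # total dose in Gy, 2 Gy per fraction
    print(f"D = {D:.0f} Gy: TCP = {tcp_lq_poisson(D, d=2.0):.3f}")
```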
Hollingsworth, T Déirdre; Adams, Emily R; Anderson, Roy M; Atkins, Katherine; Bartsch, Sarah; Basáñez, María-Gloria; Behrend, Matthew; Blok, David J; Chapman, Lloyd A C; Coffeng, Luc; Courtenay, Orin; Crump, Ron E; de Vlas, Sake J; Dobson, Andy; Dyson, Louise; Farkas, Hajnal; Galvani, Alison P; Gambhir, Manoj; Gurarie, David; Irvine, Michael A; Jervis, Sarah; Keeling, Matt J; Kelly-Hope, Louise; King, Charles; Lee, Bruce Y; Le Rutte, Epke A; Lietman, Thomas M; Ndeffo-Mbah, Martial; Medley, Graham F; Michael, Edwin; Pandey, Abhishek; Peterson, Jennifer K; Pinsent, Amy; Porco, Travis C; Richardus, Jan Hendrik; Reimer, Lisa; Rock, Kat S; Singh, Brajendra K; Stolk, Wilma; Swaminathan, Subramanian; Torr, Steve J; Townsend, Jeffrey; Truscott, James; Walker, Martin; Zoueva, Alexandra
2015-12-09
Quantitative analysis and mathematical models are useful tools in informing strategies to control or eliminate disease. Currently, there is an urgent need to develop these tools to inform policy to achieve the 2020 goals for neglected tropical diseases (NTDs). In this paper we give an overview of a collection of novel model-based analyses which aim to address key questions on the dynamics of transmission and control of nine NTDs: Chagas disease, visceral leishmaniasis, human African trypanosomiasis, leprosy, soil-transmitted helminths, schistosomiasis, lymphatic filariasis, onchocerciasis and trachoma. Several common themes resonate throughout these analyses, including: the importance of epidemiological setting on the success of interventions; targeting groups who are at highest risk of infection or re-infection; and reaching populations who are not accessing interventions and may act as a reservoir for infection. The results also highlight the challenge of maintaining elimination 'as a public health problem' when true elimination is not reached. The models elucidate the factors that may be contributing most to persistence of disease and discuss the requirements for eventually achieving true elimination, if that is possible. Overall this collection presents new analyses to inform current control initiatives. These papers form a base from which further development of the models and more rigorous validation against a variety of datasets can help to give more detailed advice. At the moment, the models' predictions are being considered as the world prepares for a final push towards control or elimination of neglected tropical diseases by 2020.
NASA Technical Reports Server (NTRS)
Shiau, Jyh-Jen; Wahba, Grace; Johnson, Donald R.
1986-01-01
A new method, based on partial spline models, is developed for including specified discontinuities in otherwise smooth two- and three-dimensional objective analyses. The method is appropriate for including tropopause height information in two- and three-dimensional temperature analyses, using the O'Sullivan-Wahba physical variational method for analysis of satellite radiance data, and may in principle be used in a combined variational analysis of observed, forecast, and climate information. A numerical method for its implementation is described and a prototype two-dimensional analysis based on simulated radiosonde and tropopause height data is shown. The method may also be appropriate for other geophysical problems, such as modeling the ocean thermocline, fronts, discontinuities, etc.
NASA Astrophysics Data System (ADS)
Zangori, Laura; Forbes, Cory T.; Schwarz, Christina V.
2015-10-01
Opportunities to generate model-based explanations are crucial for elementary students, yet are rarely foregrounded in elementary science learning environments despite evidence that early learners can reason from models when provided with scaffolding. We used a quasi-experimental research design to investigate the comparative impact of a scaffold test condition consisting of embedded physical scaffolds within a curricular modeling task on third-grade (age 8-9) students' formulation of model-based explanations for the water cycle. This condition was contrasted to the control condition where third-grade students used a curricular modeling task with no embedded physical scaffolds. Students from each condition (scaffolded: n = 60; unscaffolded: n = 56) generated models of the water cycle before and after completion of a 10-week water unit. Results from quantitative analyses suggest that students in the scaffolded condition represented and linked more subsurface water process sequences with surface water process sequences than did students in the unscaffolded condition. However, results of qualitative analyses indicate that students in the scaffolded condition were less likely to build upon these process sequences to generate model-based explanations and experienced difficulties understanding their models as abstracted representations rather than recreations of real-world phenomena. We conclude that embedded curricular scaffolds may support students to consider non-observable components of the water cycle but, alone, may be insufficient for generation of model-based explanations about subsurface water movement.
ERIC Educational Resources Information Center
Mendonça, Paula Cristina Cardoso; Justi, Rosária
2013-01-01
Some studies related to the nature of scientific knowledge demonstrate that modelling is an inherently argumentative process. This study aims at discussing the relationship between modelling and argumentation by analysing data collected during the modelling-based teaching of ionic bonding and intermolecular interactions. The teaching activities…
Design, fabrication and test of a trace contaminant control system
NASA Technical Reports Server (NTRS)
1975-01-01
A trace contaminant control system was designed, fabricated, and evaluated to determine suitability of the system concept to future manned spacecraft. Two different models were considered. The load model initially required by the contract was based on the Space Station Prototype (SSP) general specifications SVSK HS4655, reflecting a change from a 9 man crew to a 6 man crew of the model developed in previous phases of this effort. Trade studies and a system preliminary design were accomplished based on this contaminant load, including computer analyses to define the optimum system configuration in terms of component arrangements, flow rates and component sizing. At the completion of the preliminary design effort a revised contaminant load model was developed for the SSP. Additional analyses were then conducted to define the impact of this new contaminant load model on the system configuration. A full scale foam-core mock-up with the appropriate SSP system interfaces was also fabricated.
NASA Astrophysics Data System (ADS)
Colon-Pagan, Ian; Kuo, Ying-Hwa
2008-10-01
In this study, we compare precipitable water vapor (PWV) values from ground-based GPS water vapor sensing and COSMIC radio occultation (RO) measurements over the Caribbean Sea, Gulf of Mexico, and United States regions as well as global analyses from NCEP and ECMWF models. The results show good overall agreement; however, the PWV values estimated by ground-based GPS receivers tend to have a slight dry bias for low PWV values and a slight wet bias for higher PWV values, when compared with GPS RO measurements and global analyses. An application of a Student's t-test indicates that there is a significant difference between the ground- and space-based GPS measured datasets. The dry bias associated with space-based GPS is attributed to the missing low-altitude data, where the concentration of water vapor is large. The close agreement between space-based measurements and global analyses is due to the fact that these global analyses assimilate space-based GPS RO data from COSMIC, and the retrieval of water vapor profiles from the space-based technique requires the use of global analyses as the first guess. This work is supported by UCAR SOARS and a grant from the National Oceanic and Atmospheric Administration, Educational Partnership Program under the cooperative agreement NA06OAR4810187.
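As a rough illustration of the kind of paired comparison described above, the following Python sketch applies a Student's t-test to two hypothetical arrays of collocated PWV values; the simulated numbers and the assumed 0.5 mm offset are placeholders, not the study's data or processing chain.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pwv_ground = rng.normal(35.0, 8.0, size=200)             # hypothetical ground-based GPS PWV (mm)
pwv_space = pwv_ground + rng.normal(0.5, 1.5, size=200)  # hypothetical collocated space-based retrievals (mm)

t_stat, p_value = stats.ttest_rel(pwv_ground, pwv_space)  # paired test on matched observations
print(f"mean difference = {np.mean(pwv_ground - pwv_space):.2f} mm, t = {t_stat:.2f}, p = {p_value:.3g}")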
Rohwedder, J J R; Pasquini, C; Fortes, P R; Raimundo, I M; Wilk, A; Mizaikoff, B
2014-07-21
A miniaturised gas analyser is described and evaluated based on the use of a substrate-integrated hollow waveguide (iHWG) coupled to a microsized near-infrared spectrophotometer comprising a linear variable filter and an array of InGaAs detectors. This gas sensing system was applied to analyse surrogate samples of natural fuel gas containing methane, ethane, propane and butane, quantified by using multivariate regression models based on partial least squares (PLS) algorithms and Savitzky-Golay first-derivative data preprocessing. The external validation of the obtained models reveals root mean square errors of prediction of 0.37, 0.36, 0.67 and 0.37% (v/v) for methane, ethane, propane and butane, respectively. The developed sensing system provides particularly rapid response times upon composition changes of the gaseous sample (approximately 2 s) due to the minute volume of the iHWG-based measurement cell. The sensing system developed in this study is fully portable with a hand-held sized analyser footprint, and thus ideally suited for field analysis. Last but not least, the obtained results corroborate the potential of NIR-iHWG analysers for monitoring the quality of natural gas and petrochemical gaseous products.
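The quantification step described above (first-derivative preprocessing followed by PLS regression) can be sketched in Python as below; the spectra, concentrations, window length and number of latent variables are all illustrative assumptions, not the calibration actually built for the iHWG analyser.

import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 128))           # synthetic NIR absorbance spectra (rows = samples)
y = rng.uniform(70.0, 100.0, size=60)    # synthetic reference methane content (% v/v)

X_d1 = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)  # Savitzky-Golay first derivative

pls = PLSRegression(n_components=4)
pls.fit(X_d1[:45], y[:45])                # calibration set
y_pred = pls.predict(X_d1[45:]).ravel()   # external validation set
rmsep = np.sqrt(np.mean((y_pred - y[45:]) ** 2))
print(f"RMSEP = {rmsep:.2f} % (v/v)")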
Braun, Alexandra C; Ilko, David; Merget, Benjamin; Gieseler, Henning; Germershaus, Oliver; Holzgrabe, Ulrike; Meinel, Lorenz
2015-08-01
This manuscript addresses the capability of compendial methods in controlling polysorbate 80 (PS80) functionality. Based on the analysis of sixteen batches, functionality related characteristics (FRC) including critical micelle concentration (CMC), cloud point, hydrophilic-lipophilic balance (HLB) value and micelle molecular weight were correlated to chemical composition including fatty acids before and after hydrolysis, content of non-esterified polyethylene glycols and sorbitan polyethoxylates, sorbitan- and isosorbide polyethoxylate fatty acid mono- and diesters, polyoxyethylene diesters, and peroxide values. Batches from some suppliers showed high variability in FRCs, calling into question the ability of the current monograph to control these characteristics. Interestingly, the combined use of the input parameters oleic acid content and peroxide value, both of which are monographed methods, resulted in a model adequately predicting CMC. Confining the batches to those complying with specifications for peroxide value showed oleic acid content alone to be predictive of CMC. Similarly, a four-parameter model based on chemical analyses alone was instrumental in predicting the molecular weight of PS80 micelles. Improved models based on analytical outcomes from fingerprint analyses are also presented. A road map for controlling PS80 batches with respect to FRCs based on chemical analyses alone is provided for the formulator. Copyright © 2014 Elsevier B.V. All rights reserved.
Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C
2017-04-01
The factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample (N = 2,200) was examined using confirmatory factor analyses (CFA) with maximum likelihood estimation for all reported models from the WISC-V Technical and Interpretation Manual (Wechsler, 2014b). Additionally, alternative bifactor models were examined and variance estimates and model-based reliability estimates (ω coefficients) were provided. Results from analyses of the 16 primary and secondary WISC-V subtests found that all higher-order CFA models with 5 group factors (VC, VS, FR, WM, and PS) produced model specification errors where the Fluid Reasoning factor produced negative variance and were thus judged inadequate. Of the 16 models tested, the bifactor model containing 4 group factors (VC, PR, WM, and PS) produced the best fit. Results from analyses of the 10 primary WISC-V subtests also found the bifactor model with 4 group factors (VC, PR, WM, and PS) produced the best fit. Variance estimates from both 16 and 10 subtest based bifactor models found dominance of general intelligence (g) in accounting for subtest variance (except for PS subtests) and large ω-hierarchical coefficients supporting general intelligence interpretation. The small portions of variance uniquely captured by the 4 group factors and low ω-hierarchical subscale coefficients likely render the group factors of questionable interpretive value independent of g (except perhaps for PS). Present CFA results confirm the EFA results reported by Canivez, Watkins, and Dombrowski (2015); Dombrowski, Canivez, Watkins, and Beaujean (2015); and Canivez, Dombrowski, and Watkins (2015). (PsycINFO Database Record (c) 2017 APA, all rights reserved).
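Because the interpretation above turns on ω-hierarchical coefficients from a bifactor solution, a small sketch of how such a coefficient is computed from standardized loadings may help; the loadings below are invented placeholders for eight subtests and four group factors, not the WISC-V estimates.

import numpy as np

lam_g = np.array([0.70, 0.72, 0.65, 0.60, 0.62, 0.58, 0.35, 0.30])   # general-factor loadings (hypothetical)
lam_grp = np.array([
    [0.40, 0.38, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],   # VC group factor
    [0.00, 0.00, 0.35, 0.33, 0.00, 0.00, 0.00, 0.00],   # PR group factor
    [0.00, 0.00, 0.00, 0.00, 0.30, 0.28, 0.00, 0.00],   # WM group factor
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.55, 0.50],   # PS group factor
])
theta = 1.0 - lam_g**2 - (lam_grp**2).sum(axis=0)                     # unique variances (standardized)

total_var = lam_g.sum()**2 + (lam_grp.sum(axis=1)**2).sum() + theta.sum()
omega_h = lam_g.sum()**2 / total_var        # share of total-score variance attributable to g
print(f"omega-hierarchical = {omega_h:.2f}")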
Steingroever, Helen; Pachur, Thorsten; Šmíra, Martin; Lee, Michael D
2018-06-01
The Iowa Gambling Task (IGT) is one of the most popular experimental paradigms for comparing complex decision-making across groups. Most commonly, IGT behavior is analyzed using frequentist tests to compare performance across groups, and to compare inferred parameters of cognitive models developed for the IGT. Here, we present a Bayesian alternative based on Bayesian repeated-measures ANOVA for comparing performance, and a suite of three complementary model-based methods for assessing the cognitive processes underlying IGT performance. The three model-based methods involve Bayesian hierarchical parameter estimation, Bayes factor model comparison, and Bayesian latent-mixture modeling. We illustrate these Bayesian methods by applying them to test the extent to which differences in intuitive versus deliberate decision style are associated with differences in IGT performance. The results show that intuitive and deliberate decision-makers behave similarly on the IGT, and the modeling analyses consistently suggest that both groups of decision-makers rely on similar cognitive processes. Our results challenge the notion that individual differences in intuitive and deliberate decision styles have a broad impact on decision-making. They also highlight the advantages of Bayesian methods, especially their ability to quantify evidence in favor of the null hypothesis, and that they allow model-based analyses to incorporate hierarchical and latent-mixture structures.
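The quantification of evidence for the null emphasized above can be illustrated, in a much cruder form than the Bayesian repeated-measures ANOVA the authors use, with the BIC approximation to the Bayes factor, BF01 ≈ exp((BIC_H1 - BIC_H0)/2); the simulated net scores and group labels below are hypothetical.

import numpy as np

rng = np.random.default_rng(2)
intuitive = rng.normal(5.0, 12.0, size=70)    # hypothetical IGT net scores, intuitive decision style
deliberate = rng.normal(6.0, 12.0, size=70)   # hypothetical IGT net scores, deliberate decision style
y = np.concatenate([intuitive, deliberate])
n = y.size

rss_h0 = np.sum((y - y.mean())**2)                                        # H0: one common mean
rss_h1 = (np.sum((intuitive - intuitive.mean())**2)
          + np.sum((deliberate - deliberate.mean())**2))                  # H1: group-specific means

bic_h0 = n * np.log(rss_h0 / n) + 1 * np.log(n)
bic_h1 = n * np.log(rss_h1 / n) + 2 * np.log(n)
bf01 = np.exp((bic_h1 - bic_h0) / 2.0)        # evidence for "no group difference"
print(f"approximate BF01 = {bf01:.2f}")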
Not Funding the Evidence-Based Model in Ohio
ERIC Educational Resources Information Center
Edlefson, Carla
2010-01-01
The purpose of this descriptive case study was to describe the implementation of Ohio's version of the Evidence-Based Model (OEBM) state school finance system in 2009. Data sources included state budget documents and analyses as well as interviews with local school officials. The new system was responsive to three policy objectives ordered by the…
Spatial scaling and multi-model inference in landscape genetics: Martes americana in northern Idaho
Tzeidle N. Wasserman; Samuel A. Cushman; Michael K. Schwartz; David O. Wallin
2010-01-01
Individual-based analyses relating landscape structure to genetic distances across complex landscapes enable rigorous evaluation of multiple alternative hypotheses linking landscape structure to gene flow. We utilize two extensions to increase the rigor of the individual-based causal modeling approach to inferring relationships between landscape patterns and gene flow...
An Example of Competence-Based Learning: Use of Maxima in Linear Algebra for Engineers
ERIC Educational Resources Information Center
Diaz, Ana; Garcia, Alfonsa; de la Villa, Agustin
2011-01-01
This paper analyses the role of Computer Algebra Systems (CAS) in a model of learning based on competences. The proposal is an e-learning model Linear Algebra course for Engineering, which includes the use of a CAS (Maxima) and focuses on problem solving. A reference model has been taken from the Spanish Open University. The proper use of CAS is…
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, Tristram O.; Le Page, Yannick LB; Huang, Maoyi
2014-06-05
Projections of land cover change generated from Integrated Assessment Models (IAM) and other economic-based models can be applied for analyses of environmental impacts at subregional and landscape scales. For those IAM and economic models that project land use at the sub-continental or regional scale, these projections must be downscaled and spatially distributed prior to use in climate or ecosystem models. Downscaling efforts to date have been conducted at the national extent with relatively high spatial resolution (30m) and at the global extent with relatively coarse spatial resolution (0.5 degree).
Numerical modelling of distributed vibration sensor based on phase-sensitive OTDR
NASA Astrophysics Data System (ADS)
Masoudi, A.; Newson, T. P.
2017-04-01
A distributed vibration sensor based on phase-sensitive OTDR is numerically modeled. The advantage of modeling the building blocks of the sensor individually and combining the blocks to analyse the behavior of the sensing system is discussed. It is shown that the numerical model can accurately imitate the response of the experimental setup to dynamic perturbations using a signal processing procedure similar to that used to extract the phase information from the sensing setup.
The Establishment of LTO Emission Inventory of Civil Aviation Airports Based on Big Data
NASA Astrophysics Data System (ADS)
Lu, Chengwei; Liu, Hefan; Song, Danlin; Yang, Xinyue; Tan, Qinwen; Hu, Xiang; Kang, Xue
2018-03-01
An estimation model for the LTO emissions of civil aviation airports was developed in this paper. LTO big data were acquired from the internet using Python, the LTO emissions were calculated dynamically from daily LTO data, and an uncertainty analysis was conducted with the Monte Carlo method. Using the model, the LTO emissions of Shuangliu International Airport were calculated, and the characteristics and temporal distribution of LTO activity in 2015 were analysed. Results indicate that, compared with traditional methods, the established model can calculate the LTO emissions of different types of airplanes more accurately. Based on the hourly LTO information of 302 valid days, the total number of LTO cycles in Chengdu Shuangliu International Airport was found to be 274,645; the annual emissions of SO2, NOx, VOCs, CO, PM10 and PM2.5 were estimated, and the uncertainty of the model was around 7% to 10%, varying by pollutant.
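A minimal sketch of the Monte Carlo step described above, assuming made-up LTO counts and NOx emission factors for three aircraft groups; the real inventory is built from scraped daily LTO data and type-specific emission factors.

import numpy as np

rng = np.random.default_rng(3)
lto_counts = np.array([120_000, 95_000, 60_000])   # hypothetical annual LTO cycles per aircraft group
ef_mean = np.array([10.5, 8.2, 12.9])              # hypothetical NOx emission factors (kg per LTO)
ef_sd = 0.08 * ef_mean                             # assumed 8% relative uncertainty on the factors

draws = rng.normal(ef_mean, ef_sd, size=(10_000, 3))
totals_kt = (draws * lto_counts).sum(axis=1) / 1e6  # total NOx per draw, kilotonnes

low, mid, high = np.percentile(totals_kt, [2.5, 50, 97.5])
print(f"NOx: {mid:.2f} kt (95% interval {low:.2f}-{high:.2f} kt)")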
Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie
2017-11-10
A pilot-scale reverse osmosis (RO) unit downstream of a membrane bioreactor (MBR) was developed for desalination to reuse wastewater at a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and film theory, was proposed to describe rejections of electrolyte mixtures in the MBR effluent, which consists of dominant ions (Na+ and Cl-) and several trace ions (Ca2+, Mg2+, K+ and SO42-). The universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in the SDFM. The membrane performance was then evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model that considers the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
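To make the solution-diffusion-film model concrete, the sketch below computes observed and real rejection for a single ion from an assumed solute permeability B, mass transfer coefficient K and water flux; the numerical values are illustrative, not the parameters estimated for the pilot plant.

import numpy as np

def sdfm_rejection(c_f, jv, B, K):
    """Observed and real rejection from the solution-diffusion-film model.
    c_f: feed concentration; jv: water flux (m/s); B: solute permeability (m/s); K: mass transfer coefficient (m/s)."""
    E = np.exp(jv / K)                     # film-theory concentration-polarization factor
    c_p = c_f * B * E / (jv + B * E)       # permeate concentration from Js = B*(c_m - c_p) and c_p = Js/jv
    c_m = c_p + (c_f - c_p) * E            # concentration at the membrane wall
    return 1.0 - c_p / c_f, 1.0 - c_p / c_m

r_obs, r_real = sdfm_rejection(c_f=100.0, jv=1.5e-5, B=2.0e-7, K=4.0e-5)   # assumed values
print(f"observed rejection = {r_obs:.3f}, real rejection = {r_real:.3f}")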
SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit
Chu, Annie; Cui, Jenny; Dinov, Ivo D.
2011-01-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most updated information and newly added models. PMID:21546994
NASA Astrophysics Data System (ADS)
Lee, Jae Young; Park, Younggeun; Pun, San; Lee, Sung Sik; Lo, Joe F.; Lee, Luke P.
2015-06-01
Intracellular Cyt c release profiles in living human neuroblastoma undergoing amyloid β oligomer (AβO)-induced apoptosis, as a model Alzheimer's disease-associated pathogenic molecule, were analysed in a real-time manner using plasmon resonance energy transfer (PRET)-based spectroscopy. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr02390d
ERIC Educational Resources Information Center
Putnik, Goran; Costa, Eric; Alves, Cátia; Castro, Hélio; Varela, Leonilde; Shah, Vaibhav
2016-01-01
Social network-based engineering education (SNEE) is designed and implemented as a model of Education 3.0 paradigm. SNEE represents a new learning methodology, which is based on the concept of social networks and represents an extended model of project-led education. The concept of social networks was applied in the real-life experiment,…
Integrated freight network model : a GIS-based platform for transportation analyses.
DOT National Transportation Integrated Search
2015-01-01
The models currently used to examine the behavior of transportation systems are usually mode-specific. That is, they focus on a single mode (i.e. railways, highways, or waterways). The lack of integration limits the usefulness of models to analyze the...
Human Spaceflight Architecture Model (HSFAM) Data Dictionary
NASA Technical Reports Server (NTRS)
Shishko, Robert
2016-01-01
HSFAM is a data model based on the DoDAF 2.02 data model with some for purpose extensions. These extensions are designed to permit quantitative analyses regarding stakeholder concerns about technical feasibility, configuration and interface issues, and budgetary and/or economic viability.
An approach to checking case-crossover analyses based on equivalence with time-series methods.
Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L
2008-03-01
The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model, be it time-series or case-crossover, fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.
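The equivalence the authors exploit (a time-stratified case-crossover with a shared exposure is numerically a log-linear Poisson model with stratum indicators) can be sketched as follows; the simulated mortality series, the PM10 values and the year-month-weekday strata are all hypothetical, and a real analysis would add diagnostics for residual temporal confounding.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
days = pd.date_range("1995-01-01", "1996-12-31", freq="D")
pm10 = rng.gamma(4.0, 9.0, size=days.size)                 # hypothetical daily PM10 (ug/m3)
deaths = rng.poisson(np.exp(4.0 + 0.0005 * pm10))          # simulated daily mortality counts

df = pd.DataFrame({"deaths": deaths, "pm10": pm10})
df["stratum"] = [f"{d:%Y-%m}-{d.day_name()}" for d in days]  # time-stratified design: year-month-weekday

fit = smf.glm("deaths ~ pm10 + C(stratum)", data=df, family=sm.families.Poisson()).fit()
print(f"log relative risk per 1 ug/m3 PM10 = {fit.params['pm10']:.5f}")
resid = fit.resid_deviance                                  # model checking: inspect these residuals over time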
Human-arm-and-hand-dynamic model with variability analyses for a stylus-based haptic interface.
Fu, Michael J; Cavuşoğlu, M Cenk
2012-12-01
Haptic interface research benefits from accurate human arm models for control and system design. The literature contains many human arm dynamic models but lacks detailed variability analyses. Without accurate measurements, variability is modeled in a very conservative manner, leading to less than optimal controller and system designs. This paper not only presents models for human arm dynamics but also develops inter- and intrasubject variability models for a stylus-based haptic device. Data from 15 human subjects (nine male, six female, ages 20-32) were collected using a Phantom Premium 1.5a haptic device for system identification. In this paper, grip-force-dependent models were identified for 1-3-N grip forces in the three spatial axes. Also, variability due to human subjects and grip-force variation were modeled as both structured and unstructured uncertainties. For both forms of variability, the maximum variation, 95 %, and 67 % confidence interval limits were examined. All models were in the frequency domain with force as input and position as output. The identified models enable precise controllers targeted to a subset of possible human operator dynamics.
Studies of MGS TES and MPF MET Data
NASA Technical Reports Server (NTRS)
Barnes, Jeff R.
2003-01-01
The work supported by this grant was divided into two broad areas: (1) mesoscale modeling of atmospheric circulations and analyses of Pathfinder, Viking, and other Mars data, and (2) analyses of MGS TES temperature data. The mesoscale modeling began with the development of a suitable Mars mesoscale model based upon the terrestrial MM5 model, which was then applied to the simulation of the meteorological observations at the Pathfinder and Viking Lander 1 sites during northern summer. This extended study served a dual purpose: to validate the new mesoscale model with the best of the available in-situ data, and to use the model to aid in the interpretation of the surface meteorological data.
Ng, Edmond S-W; Diaz-Ordaz, Karla; Grieve, Richard; Nixon, Richard M; Thompson, Simon G; Carpenter, James R
2016-10-01
Multilevel models provide a flexible modelling framework for cost-effectiveness analyses that use cluster randomised trial data. However, there is a lack of guidance on how to choose the most appropriate multilevel models. This paper illustrates an approach for deciding what level of model complexity is warranted; in particular how best to accommodate complex variance-covariance structures, right-skewed costs and missing data. Our proposed models differ according to whether or not they allow individual-level variances and correlations to differ across treatment arms or clusters and by the assumed cost distribution (Normal, Gamma, Inverse Gaussian). The models are fitted by Markov chain Monte Carlo methods. Our approach to model choice is based on four main criteria: the characteristics of the data, model pre-specification informed by the previous literature, diagnostic plots and assessment of model appropriateness. This is illustrated by re-analysing a previous cost-effectiveness analysis that uses data from a cluster randomised trial. We find that the most useful criterion for model choice was the deviance information criterion, which distinguishes amongst models with alternative variance-covariance structures, as well as between those with different cost distributions. This strategy for model choice can help cost-effectiveness analyses provide reliable inferences for policy-making when using cluster trials, including those with missing data. © The Author(s) 2013.
NASA Astrophysics Data System (ADS)
Yondo, Raul; Andrés, Esther; Valero, Eusebio
2018-01-01
Full-scale aerodynamic wind tunnel testing, numerical simulation of high-dimensional (full-order) aerodynamic models and flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems and restrained computational budgets. With the aim of reducing the computational burden and generating low-cost but accurate models that mimic those full-order models at different values of the design variables, recent progress has seen the introduction, in real-time and many-query analyses, of surrogate-based approaches as rapid and cheaper-to-simulate models. In this paper, a comprehensive and state-of-the-art survey on common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraint handling, and infill and stopping criteria. Benefits, drawbacks and comparative discussions in applying those methods are described. Furthermore, the paper familiarizes the readers with surrogate models that have been successfully applied to the general field of fluid dynamics, but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field and discussions on advanced sampling methodologies are presented, to give a glance at the various efficient possibilities for a priori sampling of the parameter space. Closing remarks focus on future perspectives, challenges and shortcomings associated with the use of surrogate models by aircraft industrial aerodynamicists, despite their increased interest among the research communities.
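As a small illustration of the surrogate-based workflow surveyed above, the sketch below draws a Latin Hypercube design, evaluates a cheap stand-in for a full-order aerodynamic model, and fits a Gaussian-process (Kriging) surrogate; the toy objective, bounds and kernel choice are assumptions for demonstration only.

import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_model(x):
    # Stand-in for a full-order solve, e.g. a drag-like response vs. Mach number and angle of attack.
    mach, alpha = x[:, 0], x[:, 1]
    return 0.02 + 0.05 * (mach - 0.8)**2 + 0.002 * alpha**2

sampler = qmc.LatinHypercube(d=2, seed=5)
x_train = qmc.scale(sampler.random(40), l_bounds=[0.6, 0.0], u_bounds=[0.9, 6.0])  # design of experiments
y_train = expensive_model(x_train)

surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(x_train, y_train)

mean, std = surrogate.predict(np.array([[0.82, 2.5]]), return_std=True)
print(f"surrogate prediction = {mean[0]:.4f} +/- {std[0]:.4f}")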
Arranging ISO 13606 archetypes into a knowledge base using UML connectors.
Kopanitsa, Georgy
2014-01-01
To enable the efficient reuse of standards-based medical data, we propose to develop a higher-level information model that will complement the archetype model of ISO 13606. This model will make use of the relationships that are specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analysed for their ability to be applied in the implementation of a higher-level model that will establish relationships between archetypes. An information model was developed using XML Schema notation. The model allows linking different archetypes of one repository into a knowledge base. Presently it supports several relationships and will be advanced in the future.
Lee, Inhan; Williams, Christopher R.; Athey, Brian D.; Baker, James R.
2010-01-01
Molecular dynamics simulations of nano-therapeutics as a final product and of all intermediates in the process of generating a multi-functional nano-therapeutic based on a poly(amidoamine) (PAMAM) dendrimer were performed along with chemical analyses of each of them. The actual structures of the dendrimers were predicted, based on potentiometric titration, gel permeation chromatography, and NMR. The chemical analyses determined the numbers of functional molecules, based on the actual structure of the dendrimer. Molecular dynamics simulations calculated the configurations of the intermediates and the radial distributions of functional molecules, based on their numbers. This interactive process between the simulation results and the chemical analyses provided a further strategy to design the next reaction steps and to gain insight into the products at each chemical reaction step. PMID:20700476
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL
2016-09-15
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences, reservoir engineering and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
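The core of the inverse problem described above (minimizing a weighted misfit between simulated and observed quantities) can be illustrated generically; the toy forward model below is a placeholder for a simulator run, and the parameter names, noise level and weights are assumptions, not part of iTOUGH2 itself.

import numpy as np
from scipy.optimize import least_squares

t_obs = np.linspace(1.0, 10.0, 12)                  # observation times (arbitrary units)

def forward(params, t):
    log_k, phi = params                             # e.g. log-permeability and porosity
    return 5.0 + log_k * np.log(t) - 2.0 * phi * t  # toy response standing in for a simulator output

rng = np.random.default_rng(6)
p_obs = forward(np.array([-1.2, 0.15]), t_obs) + rng.normal(0.0, 0.05, t_obs.size)
sigma = 0.05                                        # assumed observation standard deviation

def weighted_residuals(params):
    return (forward(params, t_obs) - p_obs) / sigma  # weighted model-data misfit

fit = least_squares(weighted_residuals, x0=np.array([-0.5, 0.3]))
print("estimated parameters:", np.round(fit.x, 3))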
Huynh, Jasmine-Yan; Winefield, Anthony H; Xanthopoulou, Despoina; Metzer, Jacques C
2012-09-01
This study examined the role of burnout and connectedness in the job demands-resources (JD-R) model among palliative care volunteers. It was hypothesized that (a) exhaustion mediates the relationship between demands and depression, and between demands and retention; (b) cynicism mediates the relationship between resources and retention; and (c) connectedness mediates the relationship between resources and retention. Hypotheses were tested in 2 separate analyses: structural equation modeling (SEM) and path analyses. The first was based on volunteer self-reports (N = 204), while the second analysis concerned matched data from volunteers and their family members (N = 99). While strong support was found for cynicism and connectedness as mediators in both types of analyses, this was not altogether the case for exhaustion. Implications of these findings for the JD-R model and volunteer organizations are discussed.
Computer-Based Model Calibration and Uncertainty Analysis: Terms and Concepts
2015-07-01
uncertainty analyses throughout the lifecycle of planning, designing, and operating of Civil Works flood risk management projects as described in...value 95% of the time. In the frequentist approach to PE, model parameters are regarded as having true values, and their estimate is based on the...in catchment models. 1. Evaluating parameter uncertainty. Water Resources Research 19(5):1151–1172. Lee, P. M. 2012. Bayesian statistics: An
What is needed to eliminate new pediatric HIV infections: The contribution of model-based analyses
Doherty, Katie; Ciaranello, Andrea
2013-01-01
Purpose of Review Computer simulation models can identify key clinical, operational, and economic interventions that will be needed to achieve the elimination of new pediatric HIV infections. In this review, we summarize recent findings from model-based analyses of strategies for prevention of mother-to-child HIV transmission (MTCT). Recent Findings In order to achieve elimination of MTCT (eMTCT), model-based studies suggest that scale-up of services will be needed in several domains: uptake of services and retention in care (the PMTCT “cascade”), interventions to prevent HIV infections in women and reduce unintended pregnancies (the “four-pronged approach”), efforts to support medication adherence through long periods of pregnancy and breastfeeding, and strategies to make breastfeeding safer and/or shorter. Models also project the economic resources that will be needed to achieve these goals in the most efficient ways to allocate limited resources for eMTCT. Results suggest that currently recommended PMTCT regimens (WHO Option A, Option B, and Option B+) will be cost-effective in most settings. Summary Model-based results can guide future implementation science, by highlighting areas in which additional data are needed to make informed decisions and by outlining critical interventions that will be necessary in order to eliminate new pediatric HIV infections. PMID:23743788
NASA Astrophysics Data System (ADS)
Appel, Marius; Lahn, Florian; Buytaert, Wouter; Pebesma, Edzer
2018-04-01
Earth observation (EO) datasets are commonly provided as collections of scenes, where individual scenes represent a temporal snapshot and cover a particular region on the Earth's surface. Using these data in complex spatiotemporal modeling becomes difficult as soon as data volumes exceed a certain capacity or analyses include many scenes, which may spatially overlap and may have been recorded at different dates. In order to facilitate analytics on large EO datasets, we combine and extend the geospatial data abstraction library (GDAL) and the array-based data management and analytics system SciDB. We present an approach to automatically convert collections of scenes to multidimensional arrays and use SciDB to scale computationally intensive analytics. We evaluate the approach in three case studies: national-scale land use change monitoring with Landsat imagery, global empirical orthogonal function analysis of daily precipitation, and combining historical climate model projections with satellite-based observations. Results indicate that the approach can be used to represent various EO datasets and that analyses in SciDB scale well with available computational resources. To simplify analyses of higher-dimensional datasets such as climate model output, however, a generalization of the GDAL data model might be needed. All parts of this work have been implemented as open-source software and we discuss how this may facilitate open and reproducible EO analyses.
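A minimal sketch of the scene-to-array conversion described above, using the GDAL Python bindings; the file names, band order and the NDVI band indices are hypothetical, and the actual pipeline additionally loads the resulting array into SciDB.

import numpy as np
from osgeo import gdal

# Hypothetical co-registered scenes for one tile, ordered by acquisition date.
scene_paths = ["scene_2013-05-01.tif", "scene_2013-06-02.tif", "scene_2013-07-04.tif"]

slices = []
for path in scene_paths:
    ds = gdal.Open(path)              # open one temporal snapshot
    slices.append(ds.ReadAsArray())   # (bands, rows, cols) array for this date
    ds = None                         # close the dataset

cube = np.stack(slices, axis=0)       # (time, bands, rows, cols), ready for array analytics
ndvi = (cube[:, 4] - cube[:, 3]) / (cube[:, 4] + cube[:, 3] + 1e-9)   # per-date NDVI (NIR and red band indices assumed)
print(cube.shape, ndvi.shape)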
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. Keating; W.Statham
2004-02-12
The purpose of this model report is to provide documentation of the conceptual and mathematical model (ASHPLUME) for atmospheric dispersal and subsequent deposition of ash on the land surface from a potential volcanic eruption at Yucca Mountain, Nevada. This report also documents the ash (tephra) redistribution conceptual model. The ASHPLUME conceptual model accounts for incorporation and entrainment of waste fuel particles associated with a hypothetical volcanic eruption through the Yucca Mountain repository and downwind transport of contaminated tephra. The ASHPLUME mathematical model describes the conceptual model in mathematical terms to allow for prediction of radioactive waste/ash deposition on the ground surface given that the hypothetical eruptive event occurs. This model report also describes the conceptual model for tephra redistribution from a basaltic cinder cone. Sensitivity analyses and model validation activities for the ash dispersal and redistribution models are also presented. Analyses documented in this model report will improve and clarify the previous documentation of the ASHPLUME mathematical model and its application to the Total System Performance Assessment (TSPA) for the License Application (TSPA-LA) igneous scenarios. This model report also documents the redistribution model product outputs based on analyses to support the conceptual model.
ITOUGH2(UNIX). Inverse Modeling for TOUGH2 Family of Multiphase Flow Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.
1999-03-01
ITOUGH2 provides inverse modeling capabilities for the TOUGH2 family of numerical simulators for non-isothermal multiphase flows in fractured-porous media. ITOUGH2 can be used for estimating parameters by automatic model calibration, for sensitivity analyses, and for uncertainty propagation analyses (linear and Monte Carlo simulations). Any input parameter to the TOUGH2 simulator can be estimated based on any type of observation for which a corresponding TOUGH2 output is calculated. ITOUGH2 solves a non-linear least-squares problem using direct or gradient-based minimization algorithms. A detailed residual and error analysis is performed, which includes the evaluation of model identification criteria. ITOUGH2 can also be run in forward mode, solving subsurface flow problems related to nuclear waste isolation, oil, gas, and geothermal reservoir engineering, and vadose zone hydrology.
A Systematic Planning for Science Laboratory Instruction: Research-Based Evidence
ERIC Educational Resources Information Center
Balta, Nuri
2015-01-01
The aim of this study is to develop an instructional design model for science laboratory instruction. Well-known ID models were analysed and the Dick and Carey model was adapted to produce a science laboratory instructional design (SLID) model. In order to validate the usability of the designed model, the views of 34 high school teachers related to…
A conflict model for the international hazardous waste disposal dispute.
Hu, Kaixian; Hipel, Keith W; Fang, Liping
2009-12-15
A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.
Further Results of Soft-Inplane Tiltrotor Aeromechanics Investigation Using Two Multibody Analyses
NASA Technical Reports Server (NTRS)
Masarati, Pierangelo; Quaranta, Giuseppe; Piatak, David J.; Singleton, Jeffrey D.
2004-01-01
This investigation focuses on the development of multibody analytical models to predict the dynamic response, aeroelastic stability, and blade loading of a soft-inplane tiltrotor wind-tunnel model. Comprehensive rotorcraft-based multibody analyses enable modeling of the rotor system to a high level of detail such that complex mechanics and nonlinear effects associated with control system geometry and joint deadband may be considered. The influence of these and other nonlinear effects on the aeromechanical behavior of the tiltrotor model are examined. A parametric study of the design parameters which may have influence on the aeromechanics of the soft-inplane rotor system are also included in this investigation.
Modeling and Simulations for the High Flux Isotope Reactor Cycle 400
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilas, Germina; Chandler, David; Ade, Brian J
2015-03-01
A concerted effort over the past few years has been focused on enhancing the core model for the High Flux Isotope Reactor (HFIR), as part of a comprehensive study for HFIR conversion from high-enriched uranium (HEU) to low-enriched uranium (LEU) fuel. At this time, the core model used to perform analyses in support of HFIR operation is an MCNP model for the beginning of Cycle 400, which was documented in detail in a 2005 technical report. A HFIR core depletion model based on current state-of-the-art methods and nuclear data was needed to serve as a reference for the design of an LEU fuel for HFIR. The recent enhancements in modeling and simulations for HFIR that are discussed in the present report include: (1) revision of the 2005 MCNP model for the beginning of Cycle 400 to improve the modeling data and assumptions as necessary based on appropriate primary reference sources (HFIR drawings and reports); (2) improvement of the fuel region model, including an explicit representation of the involute fuel plate geometry that is characteristic of HFIR fuel; and (3) revision of the Monte Carlo-based depletion model for HFIR, in use since 2009 but never documented in detail, with the development of a new depletion model for the HFIR explicit fuel plate representation. The new HFIR models for Cycle 400 are used to determine various metrics of relevance to reactor performance and safety assessments. The calculated metrics are compared, where possible, with measurement data from preconstruction critical experiments at HFIR, data included in the current HFIR safety analysis report, and/or data from previous calculations performed with different methods or codes. The results of the analyses show that the models presented in this report provide a robust and reliable basis for HFIR analyses.
Preventing Road Rage by Modelling the Car-following and the Safety Distance Model
NASA Astrophysics Data System (ADS)
Lan, Si; Fang, Ni; Zhao, Huanming; Ye, Shiqi
2017-11-01
Starting from the different behaviours of drivers during lane changes, a car-following model based on distance and speed and a safety distance model are established in this paper to analyse the impact on traffic flow and safety and to help address the phenomenon of road rage.
NASA Astrophysics Data System (ADS)
Rutishauser, This; Stöckli, Reto; Jeanneret, François; Peñuelas, Josep
2010-05-01
Changes in the seasonality of life cycles of plants as recorded in phenological observations have been widely analysed at the species level with data available for many decades back in time. At the same time, seasonality changes in satellite-based observations and prognostic phenology models comprise information at the pixel-size or landscape scale. Change analysis of satellite-based records is restricted due to relatively short satellite records that further include gaps, while model-based analyses are biased due to current model deficiencies. At 30 selected sites across Europe, we analysed three different sources of plant seasonality during the 1971-2000 period. Data consisted of (1) species-specific development stages of flowering and leaf-out, with different species observed at each site; (2) a synthetic phenological metric that integrates the common interannual phenological signal across all species at one site; and (3) daily Leaf Area Index estimated with a prognostic phenology model. The prior uncertainties of the model's empirical parameter space are constrained by assimilating the Fraction of Photosynthetically Active Radiation absorbed by vegetation (FPAR) and Leaf Area Index (LAI) from the MODerate Resolution Imaging Spectroradiometer (MODIS). We extracted the day of year when the 25%, 50% and 75% thresholds were passed each spring. The question arises how the three phenological signals compare and correlate across climate zones in Europe. Is there a match between single species observations, species-based ground-observed metrics and the landscape-scale prognostic model? Are there single key species across Europe that best represent a landscape-scale measure from the prognostic model? Can one source substitute another and serve as proxy data? What can we learn from potential mismatches? Focusing on changes in spring, this contribution presents first results of an ongoing comparison study from a number of European test sites that will be extended to the pan-European phenological database Cost725 and PEP725.
Entrance and exit region friction factor models for annular seal analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Elrod, David Alan
1988-01-01
The Mach number definition and boundary conditions in Nelson's nominally-centered, annular gas seal analysis are revised. A method is described for determining the wall shear stress characteristics of an annular gas seal experimentally. Two friction factor models are developed for annular seal analysis; one model is based on flat-plate flow theory; the other uses empirical entrance and exit region friction factors. The friction factor predictions of the models are compared to experimental results. Each friction model is used in an annular gas seal analysis. The seal characteristics predicted by the two seal analyses are compared to experimental results and to the predictions of Nelson's analysis. The comparisons are for smooth-rotor seals with smooth and honeycomb stators. The comparisons show that the analysis which uses empirical entrance and exit region shear stress models predicts the static and stability characteristics of annular gas seals better than the other analyses. The analyses predict direct stiffness poorly.
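As a point of reference for the flat-plate-based friction model mentioned above, a Blasius-type relation f = n * Re**m is often used for smooth surfaces; the coefficients below are common smooth-surface values, not the seal-specific fits obtained in this work.

def friction_factor(re, n=0.079, m=-0.25):
    # Blasius-type flat-plate friction factor, f = n * Re**m (illustrative coefficients).
    return n * re**m

print(f"f at Re = 2e4: {friction_factor(2.0e4):.5f}")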
Bridging the divide: a model-data approach to Polar and Alpine microbiology.
Bradley, James A; Anesio, Alexandre M; Arndt, Sandra
2016-03-01
Advances in microbial ecology in the cryosphere continue to be driven by empirical approaches including field sampling and laboratory-based analyses. Although mathematical models are commonly used to investigate the physical dynamics of Polar and Alpine regions, they are rarely applied in microbial studies. Yet integrating modelling approaches with ongoing observational and laboratory-based work is ideally suited to Polar and Alpine microbial ecosystems given their harsh environmental and biogeochemical characteristics, simple trophic structures, distinct seasonality, often difficult accessibility, geographical expansiveness and susceptibility to accelerated climate changes. In this opinion paper, we explain how mathematical modelling ideally complements field and laboratory-based analyses. We thus argue that mathematical modelling is a powerful tool for the investigation of these extreme environments and that fully integrated, interdisciplinary model-data approaches could help the Polar and Alpine microbiology community address some of the great research challenges of the 21st century (e.g. assessing global significance and response to climate change). However, a better integration of field and laboratory work with model design and calibration/validation, as well as a stronger focus on quantitative information is required to advance models that can be used to make predictions and upscale processes and fluxes beyond what can be captured by observations alone. © FEMS 2016.
Displacement-based back-analysis of the model parameters of the Nuozhadu high earth-rockfill dam.
Wu, Yongkang; Yuan, Huina; Zhang, Bingyin; Zhang, Zongliang; Yu, Yuzhen
2014-01-01
The parameters of the constitutive model, the creep model, and the wetting model of materials of the Nuozhadu high earth-rockfill dam were back-analyzed together based on field monitoring displacement data by employing an intelligent back-analysis method. In this method, an artificial neural network is used as a substitute for time-consuming finite element analysis, and an evolutionary algorithm is applied for both network training and parameter optimization. To avoid simultaneous back-analysis of many parameters, the model parameters of the three main dam materials are decoupled and back-analyzed separately in a particular order. Displacement back-analyses were performed at different stages of the construction period, with and without considering the creep and wetting deformations. Good agreement between the numerical results and the monitoring data was obtained for most observation points, which implies that the back-analysis method and decoupling method are effective for solving complex problems with multiple models and parameters. The comparison of calculation results based on different sets of back-analyzed model parameters indicates the necessity of taking the effects of creep and wetting into consideration in the numerical analyses of high earth-rockfill dams. With the resulting model parameters, the stress and deformation distributions at completion are predicted and analyzed.
Bridging the divide: a model-data approach to Polar and Alpine microbiology
Bradley, James A.; Anesio, Alexandre M.; Arndt, Sandra
2016-01-01
Advances in microbial ecology in the cryosphere continue to be driven by empirical approaches including field sampling and laboratory-based analyses. Although mathematical models are commonly used to investigate the physical dynamics of Polar and Alpine regions, they are rarely applied in microbial studies. Yet integrating modelling approaches with ongoing observational and laboratory-based work is ideally suited to Polar and Alpine microbial ecosystems given their harsh environmental and biogeochemical characteristics, simple trophic structures, distinct seasonality, often difficult accessibility, geographical expansiveness and susceptibility to accelerated climate changes. In this opinion paper, we explain how mathematical modelling ideally complements field and laboratory-based analyses. We thus argue that mathematical modelling is a powerful tool for the investigation of these extreme environments and that fully integrated, interdisciplinary model-data approaches could help the Polar and Alpine microbiology community address some of the great research challenges of the 21st century (e.g. assessing global significance and response to climate change). However, a better integration of field and laboratory work with model design and calibration/validation, as well as a stronger focus on quantitative information is required to advance models that can be used to make predictions and upscale processes and fluxes beyond what can be captured by observations alone. PMID:26832206
Zero, Victoria H.; Barocas, Adi; Jochimsen, Denim M.; Pelletier, Agnès; Giroux-Bougard, Xavier; Trumbo, Daryl R.; Castillo, Jessica A.; Evans Mack, Diane; Linnell, Mark A.; Pigg, Rachel M.; Hoisington-Lopez, Jessica; Spear, Stephen F.; Murphy, Melanie A.; Waits, Lisette P.
2017-01-01
The persistence of small populations is influenced by genetic structure and functional connectivity. We used two network-based approaches to understand the persistence of the northern Idaho ground squirrel (Urocitellus brunneus) and the southern Idaho ground squirrel (U. endemicus), two congeners of conservation concern. These graph theoretic approaches are conventionally applied to social or transportation networks, but here are used to study population persistence and connectivity. Population graph analyses revealed that local extinction rapidly reduced connectivity for the southern species, while connectivity for the northern species could be maintained following local extinction. Results from gravity models complemented those of population graph analyses, and indicated that potential vegetation productivity and topography drove connectivity in the northern species. For the southern species, development (roads) and small-scale topography reduced connectivity, while greater potential vegetation productivity increased connectivity. Taken together, the results of the two network-based methods (population graph analyses and gravity models) suggest the need for increased conservation action for the southern species, and that management efforts have been effective at maintaining habitat quality throughout the current range of the northern species. To prevent further declines, we encourage the continuation of management efforts for the northern species, whereas conservation of the southern species requires active management and additional measures to curtail habitat fragmentation. Our combination of population graph analyses and gravity models can inform conservation strategies of other species exhibiting patchy distributions. PMID:28659969
Zero, Victoria H; Barocas, Adi; Jochimsen, Denim M; Pelletier, Agnès; Giroux-Bougard, Xavier; Trumbo, Daryl R; Castillo, Jessica A; Evans Mack, Diane; Linnell, Mark A; Pigg, Rachel M; Hoisington-Lopez, Jessica; Spear, Stephen F; Murphy, Melanie A; Waits, Lisette P
2017-01-01
The persistence of small populations is influenced by genetic structure and functional connectivity. We used two network-based approaches to understand the persistence of the northern Idaho ground squirrel ( Urocitellus brunneus) and the southern Idaho ground squirrel ( U. endemicus ), two congeners of conservation concern. These graph theoretic approaches are conventionally applied to social or transportation networks, but here are used to study population persistence and connectivity. Population graph analyses revealed that local extinction rapidly reduced connectivity for the southern species, while connectivity for the northern species could be maintained following local extinction. Results from gravity models complemented those of population graph analyses, and indicated that potential vegetation productivity and topography drove connectivity in the northern species. For the southern species, development (roads) and small-scale topography reduced connectivity, while greater potential vegetation productivity increased connectivity. Taken together, the results of the two network-based methods (population graph analyses and gravity models) suggest the need for increased conservation action for the southern species, and that management efforts have been effective at maintaining habitat quality throughout the current range of the northern species. To prevent further declines, we encourage the continuation of management efforts for the northern species, whereas conservation of the southern species requires active management and additional measures to curtail habitat fragmentation. Our combination of population graph analyses and gravity models can inform conservation strategies of other species exhibiting patchy distributions.
Cellular-based modeling of oscillatory dynamics in brain networks.
Skinner, Frances K
2012-08-01
Oscillatory, population activities have long been known to occur in our brains during different behavioral states. We know that many different cell types exist and that they contribute in distinct ways to the generation of these activities. I review recent papers that involve cellular-based models of brain networks, most of which include theta, gamma and sharp wave-ripple activities. To help organize the modeling work, I present it from a perspective of three different types of cellular-based modeling: 'Generic', 'Biophysical' and 'Linking'. Cellular-based modeling is taken to encompass the four features of experiment, model development, theory/analyses, and model usage/computation. The three modeling types are shown to include these features and interactions in different ways. Copyright © 2012 Elsevier Ltd. All rights reserved.
Frederix, Gerardus W J; van Hasselt, Johan G C; Schellens, Jan H M; Hövels, Anke M; Raaijmakers, Jan A M; Huitema, Alwin D R; Severens, Johan L
2014-01-01
Structural uncertainty relates to differences in model structure and parameterization. For many published health economic analyses in oncology, substantial differences in model structure exist, leading to differences in analysis outcomes and potentially impacting decision-making processes. The objectives of this analysis were (1) to identify differences in model structure and parameterization for cost-effectiveness analyses (CEAs) comparing tamoxifen and anastrazole for adjuvant breast cancer (ABC) treatment; and (2) to quantify the impact of these differences on analysis outcome metrics. The analysis consisted of four steps: (1) review of the literature for identification of eligible CEAs; (2) definition and implementation of a base model structure, which included the core structural components for all identified CEAs; (3) definition and implementation of changes or additions in the base model structure or parameterization; and (4) quantification of the impact of changes in model structure or parameterizations on the analysis outcome metrics life-years gained (LYG), incremental costs (IC) and the incremental cost-effectiveness ratio (ICER). Eleven CEA analyses comparing anastrazole and tamoxifen as ABC treatment were identified. The base model consisted of the following health states: (1) on treatment; (2) off treatment; (3) local recurrence; (4) metastatic disease; (5) death due to breast cancer; and (6) death due to other causes. The base model estimates of anastrazole versus tamoxifen for the LYG, IC and ICER were 0.263 years, €3,647 and €13,868/LYG, respectively. In the published models that were evaluated, differences in model structure included the addition of different recurrence health states, and associated transition rates were identified. Differences in parameterization were related to the incidences of recurrence, local recurrence to metastatic disease, and metastatic disease to death. The separate impact of these model components on the LYG ranged from 0.207 to 0.356 years, while incremental costs ranged from €3,490 to €3,714 and ICERs ranged from €9,804/LYG to €17,966/LYG. When we re-analyzed the published CEAs in our framework by including their respective model properties, the LYG ranged from 0.207 to 0.383 years, IC ranged from €3,556 to €3,731 and ICERs ranged from €9,683/LYG to €17,570/LYG. Differences in model structure and parameterization lead to substantial differences in analysis outcome metrics. This analysis supports the need for more guidance regarding structural uncertainty and the use of standardized disease-specific models for health economic analyses of adjuvant endocrine breast cancer therapies. The developed approach in the current analysis could potentially serve as a template for further evaluations of structural uncertainty and development of disease-specific models.
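As a quick arithmetic check of the base-model figures quoted above, the ICER is simply the incremental costs divided by the incremental life-years gained; the snippet reproduces the reported value up to rounding.

lyg = 0.263          # incremental life-years gained, anastrozole vs. tamoxifen (from the abstract)
ic = 3647.0          # incremental costs in euros (from the abstract)
print(f"ICER = {ic / lyg:,.0f} EUR/LYG")   # about 13,867 EUR/LYG, matching the reported 13,868/LYG up to rounding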
Three-dimensional earthquake analysis of roller-compacted concrete dams
NASA Astrophysics Data System (ADS)
Kartal, M. E.
2012-07-01
Ground motion effects on roller-compacted concrete (RCC) dams in earthquake zones should be taken into account for the most critical conditions. This study presents the three-dimensional earthquake response of an RCC dam considering geometrical non-linearity. Material and connection non-linearity are also taken into consideration in the time-history analyses. Bilinear and multilinear kinematic hardening material models are utilized in the materially non-linear analyses for concrete and foundation rock, respectively. The contraction joints inside the dam blocks and the dam-foundation-reservoir interaction are modeled with contact elements. The hydrostatic and hydrodynamic pressures of the reservoir water are modeled with fluid finite elements based on the Lagrangian approach. The gravity and hydrostatic pressure effects are employed as initial conditions before the strong ground motion. In the earthquake analyses, viscous dampers are defined in the finite element model to represent infinite boundary conditions. According to the numerical solutions, horizontal displacements increase under hydrodynamic pressure; they also increase in the materially non-linear analyses of the dam. In addition, while the principal stress components increase under the hydrodynamic pressure of the reservoir water, they decrease in the materially non-linear time-history analyses.
A sediment graph model based on SCS-CN method
NASA Astrophysics Data System (ADS)
Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.
2008-01-01
This paper proposes new conceptual sediment graph models based on the coupling of popular and extensively used methods, viz., the Nash model based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and the power law. These models vary in their complexity and this paper tests their performance using data of the Nagwan watershed (area = 92.46 km²) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distribution) as well as total sediment yield.
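At the core of such models is the SCS-CN rainfall-runoff relation, Q = (P − Ia)² / (P − Ia + S) with S = 25400/CN − 254 (in mm) and Ia = λS. The short Python sketch below evaluates this relation for illustrative rainfall depths and an assumed curve number; the values are not taken from the Nagwan watershed study, and the Nash IUSG and power-law components are not reproduced.

```python
# Minimal sketch of the SCS-CN rainfall-runoff relation underlying the coupled
# sediment graph models described above; the curve number and rainfall depths
# are illustrative, not values from the Nagwan watershed study.

def scs_cn_runoff(p_mm: float, cn: float, lam: float = 0.2) -> float:
    """Direct runoff depth Q (mm) from rainfall P (mm) for curve number CN."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = lam * s               # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

if __name__ == "__main__":
    for p in (10.0, 40.0, 80.0):
        print(f"P = {p:5.1f} mm -> Q = {scs_cn_runoff(p, cn=75):5.1f} mm")
```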
Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Porth, Ilga; Chen, Charles; El-Kassaby, Yousry A.
2016-01-01
The open-pollinated (OP) family testing combines the simplest known progeny evaluation and quantitative genetics analyses as candidates’ offspring are assumed to represent independent half-sib families. The accuracy of genetic parameter estimates is often questioned as the assumption of “half-sibling” in OP families may often be violated. We compared the pedigree- vs. marker-based genetic models by analysing 22-yr height and 30-yr wood density for 214 white spruce [Picea glauca (Moench) Voss] OP families represented by 1694 individuals growing on one site in Quebec, Canada. Assuming half-sibling, the pedigree-based model was limited to estimating the additive genetic variances which, in turn, were grossly overestimated as they were confounded by very minor dominance and major additive-by-additive epistatic genetic variances. In contrast, the implemented genomic pairwise realized relationship models allowed the disentanglement of additive from all nonadditive factors through genetic variance decomposition. The marker-based models produced more realistic narrow-sense heritability estimates and, for the first time, allowed estimating the dominance and epistatic genetic variances from OP testing. In addition, the genomic models showed better prediction accuracies compared to pedigree models and were able to predict individual breeding values for new individuals from untested families, which was not possible using the pedigree-based model. Clearly, the use of marker-based relationship approach is effective in estimating the quantitative genetic parameters of complex traits even under simple and shallow pedigree structure. PMID:26801647
Owens, Douglas K; Whitlock, Evelyn P; Henderson, Jillian; Pignone, Michael P; Krist, Alex H; Bibbins-Domingo, Kirsten; Curry, Susan J; Davidson, Karina W; Ebell, Mark; Gillman, Matthew W; Grossman, David C; Kemper, Alex R; Kurth, Ann E; Maciosek, Michael; Siu, Albert L; LeFevre, Michael L
2016-10-04
The U.S. Preventive Services Task Force (USPSTF) develops evidence-based recommendations about preventive care based on comprehensive systematic reviews of the best available evidence. Decision models provide a complementary, quantitative approach to support the USPSTF as it deliberates about the evidence and develops recommendations for clinical and policy use. This article describes the rationale for using modeling, an approach to selecting topics for modeling, and how modeling may inform recommendations about clinical preventive services. Decision modeling is useful when clinical questions remain about how to target an empirically established clinical preventive service at the individual or program level or when complex determinations of magnitude of net benefit, overall or among important subpopulations, are required. Before deciding whether to use decision modeling, the USPSTF assesses whether the benefits and harms of the preventive service have been established empirically, assesses whether there are key issues about applicability or implementation that modeling could address, and then defines the decision problem and key questions to address through modeling. Decision analyses conducted for the USPSTF are expected to follow best practices for modeling. For chosen topics, the USPSTF assesses the strengths and limitations of the systematically reviewed evidence and the modeling analyses and integrates the results of each to make preventive service recommendations.
NASA Astrophysics Data System (ADS)
Furió-Más, Carlos; Calatayud, María Luisa; Guisasola, Jenaro; Furió-Gómez, Cristina
2005-09-01
This paper investigates the views of science and scientific activity that can be found in chemistry textbooks and heard from teachers when acid-base reactions are introduced to grade 12 and university chemistry students. First, the main macroscopic and microscopic conceptual models are developed. Second, we attempt to show how the existence of these views of science in textbooks and among chemistry teachers contributes to an impoverished image of chemistry. A varied design has been elaborated to analyse some epistemological deficiencies in the teaching of acid-base reactions. Textbooks have been analysed and teachers have been interviewed. The results obtained show that the teaching process does not emphasize the macroscopic presentation of acids and bases. Macroscopic and microscopic conceptual models involved in the explanation of acid-base processes are mixed in textbooks and by teachers. Furthermore, the non-problematic introduction of concepts, such as the hydrolysis concept, and the linear, cumulative view of acid-base theories (Arrhenius and Brønsted) were detected.
Cost-effectiveness of prucalopride in the treatment of chronic constipation in the Netherlands
Nuijten, Mark J. C.; Dubois, Dominique J.; Joseph, Alain; Annemans, Lieven
2015-01-01
Objective: To assess the cost-effectiveness of prucalopride vs. continued laxative treatment for chronic constipation in patients in the Netherlands in whom laxatives have failed to provide adequate relief. Methods: A Markov model was developed to estimate the cost-effectiveness of prucalopride in patients with chronic constipation receiving standard laxative treatment from the perspective of Dutch payers in 2011. Data sources included published prucalopride clinical trials, published Dutch price/tariff lists, and national population statistics. The model simulated the clinical and economic outcomes associated with prucalopride vs. standard treatment and had a cycle length of 1 month and a follow-up time of 1 year. Response to treatment was defined as the proportion of patients who achieved “normal bowel function”. One-way and probabilistic sensitivity analyses were conducted to test the robustness of the base case. Results: In the base case analysis, the cost of prucalopride relative to continued laxative treatment was € 9015 per quality-adjusted life-year (QALY). Extensive sensitivity analyses and scenario analyses confirmed that the base case cost-effectiveness estimate was robust. One-way sensitivity analyses showed that the model was most sensitive to the response to prucalopride; incremental cost-effectiveness ratios ranged from € 6475 to 15,380 per QALY. Probabilistic sensitivity analyses indicated that there is a greater than 80% probability that prucalopride would be cost-effective compared with continued standard treatment, assuming a willingness-to-pay threshold of € 20,000 per QALY from a Dutch societal perspective. A scenario analysis was performed for women only, which resulted in a cost-effectiveness ratio of € 7773 per QALY. Conclusion: Prucalopride was cost-effective in a Dutch patient population, as well as in a women-only subgroup, who had chronic constipation and who obtained inadequate relief from laxatives. PMID:25926794
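For readers unfamiliar with this model type, the sketch below outlines a generic monthly-cycle Markov cohort model in the spirit of the analysis described above. The states, transition probabilities, costs and utilities are placeholders invented for illustration, not values from the prucalopride model.

```python
import numpy as np

# Generic monthly-cycle Markov cohort sketch; all transition probabilities,
# costs and utilities below are placeholders, not study values.
states = ["response", "no_response", "discontinued"]
P = np.array([            # row = from-state, column = to-state, per monthly cycle
    [0.90, 0.05, 0.05],
    [0.10, 0.80, 0.10],
    [0.00, 0.00, 1.00],
])
cost = np.array([60.0, 25.0, 5.0])      # cost per state per cycle (EUR)
utility = np.array([0.85, 0.70, 0.70])  # utility weight per state

cohort = np.array([0.0, 1.0, 0.0])      # everyone starts without response
total_cost, total_qaly = 0.0, 0.0
for cycle in range(12):                  # 1-year horizon, monthly cycles
    cohort = cohort @ P
    total_cost += float(cohort @ cost)
    total_qaly += float(cohort @ utility) / 12.0

print(dict(zip(states, np.round(cohort, 3))))
print(f"cost = {total_cost:.0f} EUR, QALYs = {total_qaly:.3f}")
```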
The Air Quality Model Evaluation International Initiative ...
This presentation provides an overview of the Air Quality Model Evaluation International Initiative (AQMEII). It contains a synopsis of the three phases of AQMEII, including objectives, logistics, and timelines. It also provides a number of examples of analyses conducted through AQMEII with a particular focus on past and future analyses of deposition. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
A Review of Recent Aeroelastic Analysis Methods for Propulsion at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Srivastava, R.; Mehmed, Oral; Stefko, George L.
1993-01-01
This report reviews aeroelastic analyses for propulsion components (propfans, compressors and turbines) being developed and used at NASA LeRC. These aeroelastic analyses include both structural and aerodynamic models. The structural models include a typical section, a beam (with and without disk flexibility), and a finite-element blade model (with plate bending elements). The aerodynamic models are based on the solution of equations ranging from the two-dimensional linear potential equation to the three-dimensional Euler equations for multibladed configurations. Typical calculated results are presented for each aeroelastic model. Suggestions for further research are made. Many of the currently available aeroelastic models and analysis methods are being incorporated in a unified computer program, APPLE (Aeroelasticity Program for Propulsion at LEwis).
NASA Technical Reports Server (NTRS)
Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John
2011-01-01
A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitude of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.
Analysis and visualization of disease courses in a semantically-enabled cancer registry.
Esteban-Gil, Angel; Fernández-Breis, Jesualdo Tomás; Boeker, Martin
2017-09-29
Regional and epidemiological cancer registries are important for cancer research and the quality management of cancer treatment. Many technological solutions are available to collect and analyse data for cancer registries nowadays. However, the lack of a well-defined common semantic model is a problem when user-defined analyses and data linking to external resources are required. The objectives of this study are: (1) design of a semantic model for local cancer registries; (2) development of a semantically-enabled cancer registry based on this model; and (3) semantic exploitation of the cancer registry for analysing and visualising disease courses. Our proposal is based on our previous results and experience working with semantic technologies. Data stored in a cancer registry database were transformed into RDF employing a process driven by OWL ontologies. The semantic representation of the data was then processed to extract semantic patient profiles, which were exploited by means of SPARQL queries to identify groups of similar patients and to analyse the disease timelines of patients. Based on the requirements analysis, we have produced a draft of an ontology that models the semantics of a local cancer registry in a pragmatic extensible way. We have implemented a Semantic Web platform that allows transforming and storing data from cancer registries in RDF. This platform also permits users to formulate incremental user-defined queries through a graphical user interface. The query results can be displayed in several customisable ways. The complex disease timelines of individual patients can be clearly represented. Different events, e.g. different therapies and disease courses, are presented according to their temporal and causal relations. The presented platform is an example of the parallel development of ontologies and applications that take advantage of semantic web technologies in the medical field. The semantic structure of the representation renders it easy to analyse key figures of the patients and their evolution at different granularity levels.
Cowell, Robert G
2018-05-04
Current models for single source and mixture samples, and probabilistic genotyping software based on them used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model the allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
Comparative Analyses of Zebrafish Anxiety-Like Behavior Using Conflict-Based Novelty Tests.
Kysil, Elana V; Meshalkina, Darya A; Frick, Erin E; Echevarria, David J; Rosemberg, Denis B; Maximino, Caio; Lima, Monica Gomes; Abreu, Murilo S; Giacomini, Ana C; Barcellos, Leonardo J G; Song, Cai; Kalueff, Allan V
2017-06-01
Modeling of stress and anxiety in adult zebrafish (Danio rerio) is increasingly utilized in neuroscience research and central nervous system (CNS) drug discovery. Representing the most commonly used zebrafish anxiety models, the novel tank test (NTT) focuses on zebrafish diving in response to potentially threatening stimuli, whereas the light-dark test (LDT) is based on fish scototaxis (innate preference for dark vs. bright areas). Here, we systematically evaluate the utility of these two tests, combining meta-analyses of published literature with comparative in vivo behavioral and whole-body endocrine (cortisol) testing. Overall, the NTT and LDT behaviors demonstrate a generally good cross-test correlation in vivo, whereas meta-analyses of published literature show that both tests have similar sensitivity to zebrafish anxiety-like states. Finally, NTT evokes higher levels of cortisol, likely representing a more stressful procedure than LDT. Collectively, our study reappraises NTT and LDT for studying anxiety-like states in zebrafish, and emphasizes their developing utility for neurobehavioral research. These findings can help optimize drug screening procedures by choosing more appropriate models for testing anxiolytic or anxiogenic drugs.
Xuan, Ziming; Chaloupka, Frank J; Blanchette, Jason G; Nguyen, Thien H; Heeren, Timothy C; Nelson, Toben F; Naimi, Timothy S
2015-03-01
U.S. studies contribute heavily to the literature about the tax elasticity of demand for alcohol, and most U.S. studies have relied upon specific excise (volume-based) taxes for beer as a proxy for alcohol taxes. The purpose of this paper was to compare this conventional alcohol tax measure with more comprehensive tax measures (incorporating multiple tax and beverage types) in analyses of the relationship between alcohol taxes and adult binge drinking prevalence in U.S. states. Data on U.S. state excise, ad valorem and sales taxes from 2001 to 2010 were obtained from the Alcohol Policy Information System and other sources. For 510 state-year strata, we developed a series of weighted tax-per-drink measures that incorporated various combinations of tax and beverage types, and related these measures to state-level adult binge drinking prevalence data from the Behavioral Risk Factor Surveillance System surveys. In analyses pooled across all years, models using the combined tax measure explained approximately 20% of state binge drinking prevalence, and documented more negative tax elasticity (-0.09, P = 0.02 versus -0.005, P = 0.63) and price elasticity (-1.40, P < 0.01 versus -0.76, P = 0.15) compared with models using only the volume-based tax. In analyses stratified by year, the R-squares for models using the beer combined tax measure were stable across the study period (P = 0.11), while the R-squares for models relying only on the volume-based tax declined (P < 0.01). Compared with volume-based tax measures, combined tax measures (i.e. those incorporating volume-based and value-based taxes) yield substantial improvement in model fit and find more negative tax elasticity and price elasticity predicting adult binge drinking prevalence in U.S. states. © 2014 Society for the Study of Addiction.
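The kind of pooled state-year regression implied above can be sketched as an OLS model of binge-drinking prevalence on a tax-per-drink measure with year fixed effects and state-clustered standard errors. The data file and column names below are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hedged sketch of a pooled state-year regression: binge-drinking prevalence on
# a combined tax-per-drink measure with year fixed effects. The CSV file and
# column names are hypothetical, not the study's data.
df = pd.read_csv("state_year_alcohol.csv")   # columns: state, year, binge_prev, tax_per_drink

model = smf.ols("binge_prev ~ tax_per_drink + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]}  # cluster SEs by state
)
print(model.summary())
```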
Xuan, Ziming; Chaloupka, Frank J.; Blanchette, Jason G.; Nguyen, Thien H.; Heeren, Timothy C.; Nelson, Toben F.; Naimi, Timothy S.
2015-01-01
Aims U.S. studies contribute heavily to the literature about the tax elasticity of demand for alcohol, and most U.S. studies have relied upon specific excise (volume-based) taxes for beer as a proxy for alcohol taxes. The purpose of this paper was to compare this conventional alcohol tax measure with more comprehensive tax measures (incorporating multiple tax and beverage types) in analyses of the relationship between alcohol taxes and adult binge drinking prevalence in U.S. states. Design Data on U.S. state excise, ad valorem and sales taxes from 2001 to 2010 were obtained from the Alcohol Policy Information System and other sources. For 510 state-year strata, we developed a series of weighted tax-per-drink measures that incorporated various combinations of tax and beverage types, and related these measures to state-level adult binge drinking prevalence data from the Behavioral Risk Factor Surveillance System surveys. Findings In analyses pooled across all years, models using the combined tax measure explained approximately 20% of state binge drinking prevalence, and documented more negative tax elasticity (−0.09, P=0.02 versus −0.005, P=0.63) and price elasticity (−1.40, P<0.01 versus −0.76, P=0.15) compared with models using only the volume-based tax. In analyses stratified by year, the R-squares for models using the beer combined tax measure were stable across the study period (P=0.11), while the R-squares for models relying only on the volume-based tax declined (P<0.01). Conclusions Compared with volume-based tax measures, combined tax measures (i.e. those incorporating volume-based and value-based taxes) yield substantial improvement in model fit and find more negative tax elasticity and price elasticity predicting adult binge drinking prevalence in U.S. states. PMID:25428795
Case Study of a Gifted and Talented Catholic Dominican Nun
ERIC Educational Resources Information Center
Lavin, Angela
2017-01-01
The case of a gifted and talented Catholic Dominican nun is described and analysed in the context of Renzulli's Three-Ring Conception of Giftedness and Gagne's Differentiated Model of Giftedness and Talent. Using qualitative methods, semi-structured interviews of relevant individuals were conducted and analysed. Based on the conclusions of this…
Akhlaghi, Tohid
2014-01-01
Evaluation of the accuracy of the pseudostatic approach is governed by the accuracy with which the simple pseudostatic inertial forces represent the complex dynamic inertial forces that actually exist in an earthquake. In this study, the Upper San Fernando and Kitayama earth dams, which were designed using the pseudostatic approach and damaged during the 1971 San Fernando and 1995 Kobe earthquakes, were investigated and analyzed. The finite element models of the dams were prepared based on the detailed available data and the results of in situ and laboratory material tests. Dynamic analyses were conducted to simulate the earthquake-induced deformations of the dams using the computer program Plaxis. The pseudostatic seismic coefficients used in the design and analyses of the dams were then compared with the seismic coefficients obtained from the dynamic analyses of the simulated models as well as with other available pseudostatic correlations. Based on the comparisons made, the accuracy and reliability of the pseudostatic seismic coefficients are evaluated and discussed. PMID:24616636
Analyses and forecasts with LAWS winds
NASA Technical Reports Server (NTRS)
Wang, Muyin; Paegle, Jan
1994-01-01
Horizontal fluxes of atmospheric water vapor are studied for summer months during 1989 and 1992 over North and South America based on analyses from European Center for Medium Range Weather Forecasts, US National Meteorological Center, and United Kingdom Meteorological Office. The calculations are performed over 20 deg by 20 deg box-shaped midlatitude domains located to the east of the Rocky Mountains in North America, and to the east of the Andes Mountains in South America. The fluxes are determined from operational center gridded analyses of wind and moisture. Differences in the monthly mean moisture flux divergence determined from these analyses are as large as 7 cm/month precipitable water equivalent over South America, and 3 cm/month over North America. Gridded analyses at higher spatial and temporal resolution exhibit better agreement in the moisture budget study. However, significant discrepancies of the moisture flux divergence computed from different gridded analyses still exist. The conclusion is more pessimistic than Rasmusson's estimate based on station data. Further analysis reveals that the most significant sources of error result from model surface elevation fields, gaps in the data archive, and uncertainties in the wind and specific humidity analyses. Uncertainties in the wind analyses are the most important problem. The low-level jets, in particular, are substantially different in the different data archives. Part of the reason for this may be due to the way the different analysis models parameterized physical processes affecting low-level jets. The results support the inference that the noise/signal ratio of the moisture budget may be improved more rapidly by providing better wind observations and analyses than by providing better moisture data.
DOT National Transportation Integrated Search
2014-01-01
Central to effective roadway design is the ability to understand how drivers behave as they traverse a segment of : roadway. While simple and complex microscopic models have been used over the years to analyse driver behaviour, : most models: 1.) inc...
Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.
Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg
2009-11-01
G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling. A simple correlation-test power calculation is sketched below.
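As a rough illustration of the kind of calculation involved, the sketch below computes the power of a two-sided test of a bivariate correlation via the Fisher z approximation. This is not G*Power's own algorithm (the program uses exact distributions for several tests); it is only meant to show the inputs and outputs of such a power analysis.

```python
from math import atanh, sqrt
from scipy.stats import norm

# Power of a two-sided test of H0: rho = 0 for a bivariate correlation, using
# the Fisher z approximation -- a sketch of this kind of power analysis, not a
# re-implementation of G*Power.

def correlation_power(rho: float, n: int, alpha: float = 0.05) -> float:
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    delta = atanh(rho) * sqrt(n - 3)          # noncentrality under H1
    return norm.cdf(delta - z_crit) + norm.cdf(-delta - z_crit)

print(f"rho=0.3, n=84  -> power ~ {correlation_power(0.3, 84):.2f}")
print(f"rho=0.3, n=200 -> power ~ {correlation_power(0.3, 200):.2f}")
```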
An industrial information integration approach to in-orbit spacecraft
NASA Astrophysics Data System (ADS)
Du, Xiaoning; Wang, Hong; Du, Yuhao; Xu, Li Da; Chaudhry, Sohail; Bi, Zhuming; Guo, Rong; Huang, Yongxuan; Li, Jisheng
2017-01-01
To operate an in-orbit spacecraft, the spacecraft status has to be monitored autonomously by collecting and analysing real-time data and then detecting abnormalities and malfunctions of system components. To develop an information system for spacecraft status detection, we investigate the feasibility of using ontology-based artificial intelligence in the system development. We propose a new modelling technique based on a semantic web, agent, scenario and ontology model. In the modelling, the subjects of the astronautics field are classified, corresponding agents and scenarios are defined, and they are connected by the semantic web to analyse data and detect failures. We introduce the modelling methodologies and the resulting framework of the status detection information system in this paper. We discuss the system components as well as their interactions in detail. The system has been prototyped and tested to illustrate its feasibility and effectiveness. The proposed modelling technique is generic and can be extended and applied to the development of other large-scale and complex information systems.
Prediction models for Arabica coffee beverage quality based on aroma analyses and chemometrics.
Ribeiro, J S; Augusto, F; Salva, T J G; Ferreira, M M C
2012-11-15
In this work, soft modeling based on chemometric analyses of coffee beverage sensory data and the chromatographic profiles of volatile roasted coffee compounds is proposed to predict the scores of acidity, bitterness, flavor, cleanliness, body, and overall quality of the coffee beverage. A partial least squares (PLS) regression method was used to construct the models. The ordered predictor selection (OPS) algorithm was applied to select the compounds for the regression model of each sensory attribute in order to take only significant chromatographic peaks into account. The prediction errors of these models, using 4 or 5 latent variables, were equal to 0.28, 0.33, 0.35, 0.33, 0.34 and 0.41, for each of the attributes and compatible with the errors of the mean scores of the experts. Thus, the results proved the feasibility of using a similar methodology in on-line or routine applications to predict the sensory quality of Brazilian Arabica coffee. Copyright © 2012 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Areepattamannil, Shaljan
2012-01-01
The author sought to investigate the effects of inquiry-based science instruction on science achievement and interest in science of 5,120 adolescents from 85 schools in Qatar. Results of hierarchical linear modeling analyses revealed the substantial positive effects of science teaching and learning with a focus on model or applications and…
An analytical model with flexible accuracy for deep submicron DCVSL cells
NASA Astrophysics Data System (ADS)
Valiollahi, Sepideh; Ardeshir, Gholamreza
2018-07-01
Differential cascoded voltage switch logic (DCVSL) cells are among the best candidates of circuit designers for a wide range of applications due to advantages such as low input capacitance, high switching speed, small area and noise immunity; nevertheless, a proper model has not yet been developed to analyse them. This paper analyses deep submicron DCVSL cells based on a flexible accuracy-simplicity trade-off including the following key features: (1) the model is capable of producing closed-form expressions with an acceptable accuracy; (2) the model equations can be solved numerically to offer higher accuracy; (3) the short-circuit currents occurring in high-low/low-high transitions are accounted for in the analysis; and (4) the changes in the operating modes of transistors during transitions, together with an efficient submicron I-V model which incorporates the most important non-ideal short-channel effects, are considered. The accuracy of the proposed model is validated in IBM 0.13 µm CMOS technology through comparisons with the accurate physically based BSIM3 model. The maximum error caused by the analytical solutions is below 10%, while this amount is below 7% for the numerical solutions.
Climate Model Diagnostic Analyzer
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Pan, Lei; Zhai, Chengxing; Tang, Benyang; Kubar, Terry; Zhang, Zia; Wang, Wei
2015-01-01
The comprehensive and innovative evaluation of climate models with newly available global observations is critically needed for the improvement of climate model current-state representation and future-state predictability. A climate model diagnostic evaluation process requires physics-based multi-variable analyses that typically involve large-volume and heterogeneous datasets, making them both computation- and data-intensive. With an exploratory nature of climate data analyses and an explosive growth of datasets and service tools, scientists are struggling to keep track of their datasets, tools, and execution/study history, let alone sharing them with others. In response, we have developed a cloud-enabled, provenance-supported, web-service system called Climate Model Diagnostic Analyzer (CMDA). CMDA enables the physics-based, multivariable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. At the same time, CMDA provides a crowd-sourcing space where scientists can organize their work efficiently and share their work with others. CMDA is empowered by many current state-of-the-art software packages in web service, provenance, and semantic search.
Residual Strength Analyses of Riveted Lap-Splice Joints
NASA Technical Reports Server (NTRS)
Seshadri, B. R.; Newman, J. C., Jr.
2000-01-01
The objective of this paper was to analyze the crack-linkup behavior in riveted-stiffened lap-splice joint panels with small multiple-site damage (MSD) cracks at several adjacent rivet holes. Analyses are based on the STAGS (STructural Analysis of General Shells) code with the critical crack-tip-opening angle (CTOA) fracture criterion. To account for high constraint around a crack front, the "plane strain core" option in STAGS was used. The importance of modeling rivet flexibility with fastener elements that accurately model load transfer across the joint is discussed. Fastener holes are not modeled but rivet connectivity is accounted for by attaching rivets to the sheet on one side of the cracks that simulated both the rivet diameter and MSD cracks. Residual strength analyses made on 2024-T3 alloy (1.6-mm thick) riveted-lap-splice joints with a lead crack and various size MSD cracks were compared with test data from Boeing Airplane Company. Analyses were conducted for both restrained and unrestrained buckling conditions. Comparison of results from these analyses and results from lap-splice-joint test panels, which were partially restrained against buckling indicate that the test results were bounded by the failure loads predicted by the analyses with restrained and unrestrained conditions.
Wang, Yi-Shan; Potts, Jonathan R
2017-03-07
Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
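A toy example of the simplest such approximation: a one-dimensional biased random walk with a smooth (Gaussian) step kernel, whose empirical mean and variance can be checked against the advection-diffusion (Patlak-type) limit u_t = −v u_x + D u_xx with v = E[step]/Δt and D = Var[step]/(2Δt). The kernel parameters below are illustrative.

```python
import numpy as np

# Toy check of a PDE approximation of a biased random walk: simulate many
# walkers with a smooth step kernel and compare empirical mean and variance
# with the advection-diffusion limit. Kernel parameters are illustrative.
rng = np.random.default_rng(0)
n_walkers, n_steps, dt = 20000, 200, 1.0
steps = rng.normal(loc=0.05, scale=0.5, size=(n_walkers, n_steps))  # biased kernel
positions = steps.cumsum(axis=1)

v = 0.05 / dt            # drift coefficient of the PDE limit
D = 0.5**2 / (2 * dt)    # diffusion coefficient of the PDE limit
t = n_steps * dt
print(f"mean: simulated {positions[:, -1].mean():.2f}  vs  PDE v*t   = {v * t:.2f}")
print(f"var : simulated {positions[:, -1].var():.2f}  vs  PDE 2*D*t = {2 * D * t:.2f}")
```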
The effect of noise-induced variance on parameter recovery from reaction times.
Vadillo, Miguel A; Garaizar, Pablo
2016-03-31
Technical noise can compromise the precision and accuracy of the reaction times collected in psychological experiments, especially in the case of Internet-based studies. Although this noise seems to have only a small impact on traditional statistical analyses, its effects on model fit to reaction-time distributions remain unexplored. Across four simulations we study the impact of technical noise on parameter recovery from data generated from an ex-Gaussian distribution and from a Ratcliff Diffusion Model. Our results suggest that the impact of noise-induced variance tends to be limited to specific parameters and conditions. Although we encourage researchers to adopt all measures to reduce the impact of noise on reaction-time experiments, we conclude that the typical amount of noise-induced variance found in these experiments does not pose substantial problems for statistical analyses based on model fitting.
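The basic simulation idea can be sketched as follows: draw ex-Gaussian reaction times, contaminate them with a small amount of uniform "technical" noise, and refit the distribution. The parameter values are illustrative, and scipy's exponnorm parameterization (K = τ/σ) is used rather than the authors' code.

```python
from scipy.stats import exponnorm, uniform

# Sketch of a parameter-recovery check: simulate ex-Gaussian reaction times,
# add uniform technical noise, and refit. Parameter values are illustrative.
mu, sigma, tau = 0.40, 0.05, 0.15            # seconds
rt = exponnorm.rvs(tau / sigma, loc=mu, scale=sigma, size=2000, random_state=1)
noisy_rt = rt + uniform.rvs(loc=0.0, scale=0.03, size=rt.size, random_state=2)

for label, data in [("clean", rt), ("noisy", noisy_rt)]:
    k_hat, loc_hat, scale_hat = exponnorm.fit(data)
    print(f"{label}: mu={loc_hat:.3f}  sigma={scale_hat:.3f}  tau={k_hat * scale_hat:.3f}")
```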
Bossier, Han; Seurinck, Ruth; Kühn, Simone; Banaschewski, Tobias; Barker, Gareth J.; Bokde, Arun L. W.; Martinot, Jean-Luc; Lemaitre, Herve; Paus, Tomáš; Millenet, Sabina; Moerkerke, Beatrijs
2018-01-01
Given the increasing amount of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima with possibly the associated effect sizes to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More particularly, we consider the influence of the chosen group level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE) that only uses peak locations, fixed effects, and random effects meta-analysis that take into account both peak location and height] and the amount of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combine these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results. PMID:29403344
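For readers unfamiliar with the random-effects branch of this comparison, the sketch below implements a standard DerSimonian-Laird random-effects pooling of study-level effect sizes and variances. It is a generic illustration, not the authors' coordinate-based (voxel-wise) pipeline, and the effect sizes are invented.

```python
import numpy as np

# Minimal DerSimonian-Laird random-effects meta-analysis of study-level effect
# sizes and variances; the input values are invented for illustration.

def random_effects_meta(effects: np.ndarray, variances: np.ndarray):
    w = 1.0 / variances                              # fixed-effect weights
    mu_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - mu_fe) ** 2)           # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_re = 1.0 / (variances + tau2)
    mu_re = np.sum(w_re * effects) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, se_re, tau2

eff = np.array([0.30, 0.10, 0.45, 0.22, 0.05])       # illustrative study effects
var = np.array([0.02, 0.03, 0.05, 0.01, 0.04])
print(random_effects_meta(eff, var))
```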
Challenges and opportunities in analysing students modelling
NASA Astrophysics Data System (ADS)
Blanco-Anaya, Paloma; Justi, Rosária; Díaz de Bustamante, Joaquín
2017-02-01
Modelling-based teaching activities have been designed and analysed from distinct theoretical perspectives. In this paper, we use one of them - the model of modelling diagram (MMD) - as an analytical tool in a regular classroom context. This paper examines the challenges that arise when the MMD is used as an analytical tool to characterise the modelling process experienced by students working in small groups aiming at creating and testing a model of a sedimentary basin from the information provided. The study was conducted in a regular Biology and Geology classroom (16-17 years old students). Data was collected through video recording of the classes, along with written reports and the material models made by each group. The results show the complexity of adapting MMD at two levels: the group modelling and the actual requirements for the activity. Our main challenges were to gather the modelling process of each individual and the group, as well as to identify, from students' speech, which stage of modelling they were performing at a given time. When facing such challenges, we propose some changes in the MMD so that it can be properly used to analyse students performing modelling activities in groups.
Roze, S; Liens, D; Palmer, A; Berger, W; Tucker, D; Renaudin, C
2006-12-01
The aim of this study was to describe a health economic model developed to project lifetime clinical and cost outcomes of lipid-modifying interventions in patients not reaching target lipid levels and to assess the validity of the model. The internet-based, computer simulation model is made up of two decision analytic sub-models, the first utilizing Monte Carlo simulation, and the second applying Markov modeling techniques. Monte Carlo simulation generates a baseline cohort for long-term simulation by assigning an individual lipid profile to each patient, and applying the treatment effects of interventions under investigation. The Markov model then estimates the long-term clinical (coronary heart disease events, life expectancy, and quality-adjusted life expectancy) and cost outcomes up to a lifetime horizon, based on risk equations from the Framingham study. Internal and external validation analyses were performed. The results of the model validation analyses, plotted against corresponding real-life values from Framingham, 4S, AFCAPS/TexCAPS, and a meta-analysis by Gordon et al., showed that the majority of values were close to the y = x line, which indicates a perfect fit. The R2 value was 0.9575 and the gradient of the regression line was 0.9329, both very close to the perfect fit (= 1). Validation analyses of the computer simulation model suggest the model is able to recreate the outcomes from published clinical studies and would be a valuable tool for the evaluation of new and existing therapy options for patients with persistent dyslipidemia.
The Research of Tax Text Categorization based on Rough Set
NASA Astrophysics Data System (ADS)
Liu, Bin; Xu, Guang; Xu, Qian; Zhang, Nan
To address the problem of effectively categorizing text data in the taxation system, this paper first analyses the text data and the key issues of size calculation, and then designs a text categorization method based on a rough set model.
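A toy illustration of the rough-set notions invoked above (assuming the usual definitions; the abstract gives no implementation detail): documents described by discrete attributes are grouped into indiscernibility classes, and a target category is bracketed by its lower and upper approximations. The documents and attributes below are invented.

```python
# Rough-set lower/upper approximations on invented document data: equivalence
# classes are formed by identical attribute vectors, and a target category is
# approximated from below (certain members) and above (possible members).
docs = {
    "d1": ("vat", "short"), "d2": ("vat", "short"),
    "d3": ("income", "long"), "d4": ("income", "long"),
    "d5": ("income", "long"),
}
target = {"d1", "d2", "d5"}                  # documents labelled with the category

classes = {}                                 # indiscernibility classes
for doc, attrs in docs.items():
    classes.setdefault(attrs, set()).add(doc)

lower = {d for c in classes.values() if c <= target for d in c}
upper = {d for c in classes.values() if c & target for d in c}
print("lower approximation:", sorted(lower))   # certainly in the category: d1, d2
print("upper approximation:", sorted(upper))   # possibly in the category: all five
```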
This report provides detailed comparisons and sensitivity analyses of three candidate models, MESOPLUME, MESOPUFF, and MESOGRID. This was not a validation study; there was no suitable regional air quality data base for the Four Corners area. Rather, the models have been evaluated...
Implications of random variation in the Stand Prognosis Model
David A. Hamilton
1991-01-01
Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...
Impact of lateral boundary conditions on regional analyses
NASA Astrophysics Data System (ADS)
Chikhar, Kamel; Gauthier, Pierre
2017-04-01
Regional and global climate models are usually validated by comparison to derived observations or reanalyses. Using a model in data assimilation results in a direct comparison to observations to produce its own analyses that may reveal systematic errors. In this study, regional analyses over North America are produced based on the fifth-generation Canadian Regional Climate Model (CRCM5) combined with the variational data assimilation system of the Meteorological Service of Canada (MSC). CRCM5 is driven at its boundaries by global analyses from ERA-Interim or produced with the global configuration of the CRCM5. Assimilation cycles for the months of January and July 2011 revealed systematic errors in winter through large values in the mean analysis increments. This bias is attributed to the coupling of the lateral boundary conditions of the regional model with the driving data, particularly over the northern boundary, where a rapidly changing large-scale circulation created significant cross-boundary flows. Increasing the time frequency of the lateral driving and applying a large-scale spectral nudging significantly improved the circulation through the lateral boundaries, which translated into a much better agreement with observations.
The November 1, 2017 issue of Cancer Research is dedicated to a collection of computational resource papers in genomics, proteomics, animal models, imaging, and clinical subjects for non-bioinformaticists looking to incorporate computing tools into their work. Scientists at Pacific Northwest National Laboratory have developed P-MartCancer, an open, web-based interactive software tool that enables statistical analyses of peptide or protein data generated from mass-spectrometry (MS)-based global proteomics experiments.
Maru, Shoko; Byrnes, Joshua; Whitty, Jennifer A; Carrington, Melinda J; Stewart, Simon; Scuffham, Paul A
2015-02-01
The reported cost effectiveness of cardiovascular disease management programs (CVD-MPs) is highly variable, potentially leading to different funding decisions. This systematic review evaluates published modeled analyses to compare study methods and quality. Articles were included if an incremental cost-effectiveness ratio (ICER) or cost-utility ratio (ICUR) was reported, the intervention was a multi-component intervention designed to manage or prevent a cardiovascular disease condition, and it addressed all domains specified in the American Heart Association Taxonomy for Disease Management. Nine articles (reporting 10 clinical outcomes) were included. Eight cost-utility and two cost-effectiveness analyses targeted hypertension (n=4), coronary heart disease (n=2), coronary heart disease plus stroke (n=1), heart failure (n=2) and hyperlipidemia (n=1). Study perspectives included the healthcare system (n=5), societal and fund holders (n=1), a third party payer (n=3), or were not explicitly stated (n=1). All analyses were modeled based on interventions of one to two years' duration. Time horizons ranged from two years (n=1) and 10 years (n=1) to lifetime (n=8). Model structures included Markov models (n=8), 'decision analytic models' (n=1), or were not explicitly stated (n=1). Considerable variation was observed in clinical and economic assumptions and reporting practices. Of all ICERs/ICURs reported, including those of subgroups (n=16), four were above a US$50,000 acceptability threshold, six were below and six were dominant. The majority of CVD-MPs were reported to have favorable economic outcomes, but 25% were at unacceptably high cost for the outcomes. Use of standardized reporting tools should increase transparency and inform what drives the cost-effectiveness of CVD-MPs. © The European Society of Cardiology 2014.
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable, software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, SOBOL's method, and weighted average of local sensitivity analyses in addition to its ability to handle systems with discontinuous events and intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-08-15
It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable, software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, SOBOL's method, and weighted average of local sensitivity analyses in addition to its ability to handle systems with discontinuous events and intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.
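A variance-based (Sobol') global sensitivity analysis of the kind SBML-SAT performs can be illustrated with the SALib Python package on a toy three-parameter model; SALib and the model function below are stand-ins chosen for this sketch, not part of SBML-SAT or its API.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hedged illustration of Sobol' global sensitivity analysis on a toy
# three-parameter model readout, using SALib rather than SBML-SAT itself.
problem = {
    "num_vars": 3,
    "names": ["k1", "k2", "k3"],
    "bounds": [[0.1, 1.0], [0.1, 1.0], [0.1, 1.0]],
}

def model_output(params: np.ndarray) -> float:
    k1, k2, k3 = params                      # stand-in for a model readout,
    return k1 / (k2 + k3)                    # e.g. a steady-state concentration

X = saltelli.sample(problem, 1024)           # Saltelli sampling scheme
Y = np.apply_along_axis(model_output, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))   # first-order indices
```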
An Overview of the Semi-Span Super-Sonic Transport (S4T) Wind-Tunnel Model Program
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Perry, Boyd, III; Florance, James R.; Sanetrik, Mark D.; Wieseman, Carol D.; Stevens, William L.; Funk, Christie J.; Christhilf, David M.; Coulson, David A.
2012-01-01
A summary of computational and experimental aeroelastic (AE) and aeroservoelastic (ASE) results for the Semi-Span Super-Sonic Transport (S4T) wind-tunnel model is presented. A broad range of analyses and multiple AE and ASE wind-tunnel tests of the S4T wind-tunnel model have been performed in support of the ASE element in the Supersonics Program, part of the NASA Fundamental Aeronautics Program. This paper is intended to be an overview of multiple papers that comprise a special S4T technical session. Along those lines, a brief description of the design and hardware of the S4T wind-tunnel model will be presented. Computational results presented include linear and nonlinear aeroelastic analyses, and rapid aeroelastic analyses using CFD-based reduced-order models (ROMs). A brief survey of some of the experimental results from two open-loop and two closed-loop wind-tunnel tests performed at the NASA Langley Transonic Dynamics Tunnel (TDT) will be presented as well.
Unified constitutive models for high-temperature structural applications
NASA Technical Reports Server (NTRS)
Lindholm, U. S.; Chan, K. S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.
1988-01-01
Unified constitutive models are characterized by the use of a single inelastic strain rate term for treating all aspects of inelastic deformation, including plasticity, creep, and stress relaxation under monotonic or cyclic loading. The structure of this class of constitutive theory pertinent for high temperature structural applications is first outlined and discussed. The effectiveness of the unified approach for representing high temperature deformation of Ni-base alloys is then evaluated by extensive comparison of experimental data and predictions of the Bodner-Partom and the Walker models. The use of the unified approach for hot section structural component analyses is demonstrated by applying the Walker model in finite element analyses of a benchmark notch problem and a turbine blade problem.
Hierarchical Coloured Petrinet Based Healthcare Infrastructure Interdependency Model
NASA Astrophysics Data System (ADS)
Nivedita, N.; Durbha, S.
2014-11-01
To ensure a resilient Healthcare Critical Infrastructure, understanding the vulnerabilities and analysing the interdependency on other critical infrastructures is important. To model this critical infrastructure and its dependencies, a Hierarchical Coloured Petri net modelling approach for simulating the vulnerability of the Healthcare Critical Infrastructure in a disaster situation is studied. The model enables analysing and understanding the various state changes which occur when there is a disruption or damage to any of the Critical Infrastructures, and their cascading nature. It also enables exploring optimal paths for evacuation during the disaster. The simulation environment can be used to understand and highlight various vulnerabilities of the Healthcare Critical Infrastructure during a flood disaster scenario; minimize consequences; and enable a timely, efficient response.
NASA Technical Reports Server (NTRS)
Eder, D.
1992-01-01
Parametric models were constructed for Earth-based laser powered electric orbit transfer from low Earth orbit to geosynchronous orbit. These models were used to carry out performance, cost/benefit, and sensitivity analyses of laser-powered transfer systems, including end-to-end life cycle cost analyses for complete systems. Comparisons with conventional orbit transfer systems were made, indicating large potential cost savings for laser-powered transfer. Approximate optimization was done to determine the best parameter values for the systems. Orbit transfer flight simulations were conducted to explore the effects of parameters not practical to model with a spreadsheet. The simulations considered view factors that determine when power can be transferred from ground stations to an orbit transfer vehicle and conducted sensitivity analyses for numbers of ground stations, Isp (including dual-Isp transfers), and plane change profiles. Optimal steering laws were used for simultaneous altitude and plane change. Viewing geometry and low-thrust orbit raising were simultaneously simulated. A very preliminary investigation of relay mirrors was made.
The Problem of Auto-Correlation in Parasitology
Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick
2012-01-01
Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics and so, the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
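In practice, the mixed-effects approach advocated here amounts to fitting host-level random effects to repeated measures, as in the statsmodels sketch below; the data file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of a mixed-effects model for repeated-measures infection data:
# parasitaemia measured daily within hosts, with a random intercept and slope
# per host so residuals are not treated as independent. The CSV file and
# column names are hypothetical.
df = pd.read_csv("infection_timecourse.csv")   # columns: host_id, day, log_parasitaemia

model = smf.mixedlm(
    "log_parasitaemia ~ day",
    data=df,
    groups=df["host_id"],
    re_formula="~day",                         # random intercept and slope per host
).fit()
print(model.summary())
```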
Characterizing Uncertainty and Variability in PBPK Models ...
Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological impro
Analysing and controlling the tax evasion dynamics via majority-vote model
NASA Astrophysics Data System (ADS)
Lima, F. W. S.
2010-09-01
Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabási-Albert networks, and Erdős-Rényi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise qc to evolve the Zaklan model. The Zaklan model had been studied recently using the equilibrium Ising model. Here we show that the Zaklan model is robust because it can be studied using the equilibrium dynamics of the Ising model as well as through the nonequilibrium MVM on the various topologies cited above, giving the same behavior regardless of the dynamics or topology used.
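The MVM dynamics themselves can be sketched in a few lines: each spin adopts the sign of its neighbourhood majority with probability 1 − q and the opposite sign with probability q (ties broken at random). The sketch below runs the plain MVM on a periodic square lattice; the tax-evasion (Zaklan) layer and the other topologies are omitted, and the lattice size and noise value are illustrative.

```python
import numpy as np

# Minimal Monte-Carlo sketch of the majority-vote model with noise q on a
# periodic square lattice; the tax-evasion (Zaklan) layer is omitted and the
# lattice size and q value are illustrative (q below the square-lattice qc).
L, q, sweeps = 32, 0.05, 200
rng = np.random.default_rng(42)
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(sweeps * L * L):                       # random sequential updates
    i, j = rng.integers(L, size=2)
    neigh = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
             + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    majority = np.sign(neigh) if neigh != 0 else rng.choice([-1, 1])
    spins[i, j] = -majority if rng.random() < q else majority

print("magnetisation per site:", abs(spins.sum()) / L**2)
```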
Varadhan, Ravi; Wang, Sue-Jane
2016-01-01
Treatment effect heterogeneity is a well-recognized phenomenon in randomized controlled clinical trials. In this paper, we discuss subgroup analyses with prespecified subgroups of clinical or biological importance. We explore various alternatives to the naive (the traditional univariate) subgroup analyses to address the issues of multiplicity and confounding. Specifically, we consider a model-based Bayesian shrinkage (Bayes-DS) and a nonparametric, empirical Bayes shrinkage approach (Emp-Bayes) to temper the optimism of traditional univariate subgroup analyses; a standardization approach (standardization) that accounts for correlation between baseline covariates; and a model-based maximum likelihood estimation (MLE) approach. The Bayes-DS and Emp-Bayes methods model the variation in subgroup-specific treatment effect rather than testing the null hypothesis of no difference between subgroups. The standardization approach addresses the issue of confounding in subgroup analyses. The MLE approach is considered only for comparison in simulation studies as the “truth” since the data were generated from the same model. Using the characteristics of a hypothetical large outcome trial, we perform simulation studies and articulate the utilities and potential limitations of these estimators. Simulation results indicate that Bayes-DS and Emp-Bayes can protect against optimism present in the naïve approach. Due to its simplicity, the naïve approach should be the reference for reporting univariate subgroup-specific treatment effect estimates from exploratory subgroup analyses. Standardization, although it tends to have a larger variance, is suggested when it is important to address the confounding of univariate subgroup effects due to correlation between baseline covariates. The Bayes-DS approach is available as an R package (DSBayes). PMID:26485117
Read, Emma K; Vallevand, Andrea; Farrell, Robin M
2016-01-01
This paper describes the development and evaluation of training intended to enhance students' performance on their first live-animal ovariohysterectomy (OVH). Cognitive task analysis informed a seven-page lab manual, 30-minute video, and 46-item OVH checklist (categorized into nine surgery components and three phases of surgery). We compared two spay simulator models (higher-fidelity silicone versus lower-fidelity cloth and foam). Third-year veterinary students were randomly assigned to a training intervention: lab manual and video only; lab manual, video, and $675 silicone-based model; lab manual, video, and $64 cloth and foam model. We then assessed transfer of training to a live-animal OVH. Chi-square analyses identified statistically significant differences between the interventions on four of nine surgery components, all three phases of surgery, and the overall score. Odds ratio analyses indicated that training with a spay model improved the odds of attaining an excellent or good rating on 25 of 46 checklist items, six of nine surgery components, all three phases of surgery, and the overall score. Odds ratio analyses comparing the spay models indicated an advantage for the $675 silicone-based model on only 6 of 46 checklist items, three of nine surgery components, and one phase of surgery. Training with a spay model improved performance when compared to training with a manual and video only. Results suggested that training with a lower-fidelity/cost model might be as effective as training with a higher-fidelity/cost model. Further research is required to investigate the effects of simulator fidelity and cost on transfer of training to the operational environment.
NASA Technical Reports Server (NTRS)
Sharp, John R.; Kittredge, Ken; Schunk, Richard G.
2003-01-01
As part of the aero-thermodynamics team supporting the Columbia Accident Investigation Board (CAB), the Marshall Space Flight Center was asked to perform engineering analyses of internal flows in the port wing. The aero-thermodynamics team was split into internal flow and external flow teams, with the support being divided between shorter-timeframe engineering methods and more complex computational fluid dynamics. In order to gain a rough order-of-magnitude knowledge of the internal flow in the port wing for various breach locations and sizes (as theorized by the CAB to have caused the Columbia re-entry failure), a bulk venting model was required to input boundary flow rates and pressures to the computational fluid dynamics (CFD) analyses. This paper summarizes the modeling that was done by MSFC in Thermal Desktop. A venting model of the entire Orbiter was constructed in FloCAD based on Rockwell International's flight substantiation analyses and the STS-107 re-entry trajectory. Chemical equilibrium air thermodynamic properties were generated for SINDA/FLUINT's fluid property routines from a code provided by Langley Research Center. In parallel, a simplified thermal mathematical model of the port wing, including the Thermal Protection System (TPS), was based on more detailed Shuttle re-entry modeling previously done by the Dryden Flight Research Center. Once the venting model was coupled with the thermal model of the wing structure with chemical equilibrium air properties, various breach scenarios were assessed in support of the aero-thermodynamics team. The construction of the coupled model and results are presented herein.
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Rastaetter, L.; Kuznetsova, M.; Singer, H.; Balch, C.; Weimer, D.; Toth, G.; Ridley, A.; Gombosi, T.; Wiltberger, M.;
2013-01-01
In this paper we continue the community-wide rigorous modern space weather model validation efforts carried out within the GEM, CEDAR and SHINE programs. In this particular effort, in coordination among the Community Coordinated Modeling Center (CCMC), NOAA Space Weather Prediction Center (SWPC), modelers, and the science community, we focus on studying the models' capability to reproduce observed ground magnetic field fluctuations, which are closely related to the geomagnetically induced current phenomenon. One of the primary motivations of the work is to support NOAA SWPC in their selection of the next numerical model that will be transitioned into operations. Six geomagnetic events and 12 geomagnetic observatories were selected for validation. While modeled and observed magnetic field time series are available for all 12 stations, the primary metrics analysis is based on six stations that were selected to represent high-latitude and mid-latitude locations. Events-based analysis and the corresponding contingency tables were built for each event and each station. The elements in the contingency table were then used to calculate the Probability of Detection (POD), Probability of False Detection (POFD) and Heidke Skill Score (HSS) for rigorous quantification of the models' performance. In this paper the summary results of the metrics analyses are reported in terms of POD, POFD and HSS. More detailed analyses can be carried out using the event-by-event contingency tables provided as an online appendix. An online interface built at CCMC and described in the supporting information is also available for more detailed time series analyses.
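The skill metrics named above are simple functions of the four entries of a 2 x 2 contingency table. As a reference, a minimal computation is sketched below; the event counts are invented for illustration and are not taken from the validation study.

```python
def skill_scores(hits, false_alarms, misses, correct_negatives):
    """POD, POFD and Heidke Skill Score from a 2x2 contingency table."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    pod = a / (a + c)                      # probability of detection
    pofd = b / (b + d)                     # probability of false detection
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, pofd, hss

# Hypothetical event counts for one station and one geomagnetic event.
print(skill_scores(hits=18, false_alarms=7, misses=5, correct_negatives=40))
```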
An efficient temporal database design method based on EER
NASA Astrophysics Data System (ADS)
Liu, Zhi; Huang, Jiping; Miao, Hua
2007-12-01
Many existing methods of modeling temporal information are based on the logical model, which makes relational schema optimization more difficult and more complicated. In this paper, based on the conventional EER model, the authors attempt to analyse and abstract temporal information in the conceptual modelling phase according to the concrete requirements for historical information. A temporal data model named BTEER is then presented. BTEER not only retains all the design ideas and methods of EER, which gives BTEER good upward compatibility, but also effectively supports the modelling of valid time and transaction time. In addition, BTEER can be transformed to EER easily and automatically. Practice has shown that this method can model temporal information well.
Using Toulmin Analysis to Analyse an Instructor's Proof Presentation in Abstract Algebra
ERIC Educational Resources Information Center
Fukawa-Connelly, Timothy
2014-01-01
This paper provides a method for analysing undergraduate teaching of proof-based courses using Toulmin's model (1969) of argumentation. It presents a case study of one instructor's presentation of proofs. The analysis shows that the instructor presents different levels of detail in different proofs; thus, the students have an inconsistent set of…
A Bullet Fired in Dry Water: An Investigative Activity to Learn Hydrodynamics Concepts
ERIC Educational Resources Information Center
Leitão, Ulisses Azevedo; dos Anjos Pinheiro da Silva, Antonio; Trindade do Nascimento, Natália Cristina; da Cruz Gervásio, Lilian Mara Benedita
2017-01-01
In this paper we report an investigative activity on hydrodynamics, in the context of an inquiry-based learning project. The aim is to analyse the experiment of a bullet shot underwater. Using "Tracker," a video analysing and modelling software, the displacement of the bullet was measured as function of time, processing a slow motion…
This presentation reviews the status and progress in forecasting particulate matter distributions. The shortcomings in representation of particulate matter formation in current atmospheric chemistry/transport models are presented based on analyses and detailed comparisons with me...
An evaluation of the Regional Mercury Cycling Model (R-MCM, a steady-state fate and transport model used to simulate mercury concentrations in lakes) is presented based on its application to a series of 91 lakes in Vermont and New Hampshire. Visual and statistical analyses are pr...
Force Project Technology Presentation to the NRCC
2014-02-04
Topics listed include functional bridge components, a smart odometer, advanced pretreatment, a smart bridge, multi-functional gap crossing, and a fuel automated tracking system; a comprehensive matrix of candidate composite material systems and textile reinforcement architectures assessed via modeling/analyses and testing; and, as a product, a validated dynamic modeling tool, based on a parametric study using material models, to reliably predict the textile mechanics of the hose.
Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses
ERIC Educational Resources Information Center
Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu
2011-01-01
Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…
[Factor structure of the German version of the BIS/BAS Scales in a population-based sample].
Müller, A; Smits, D; Claes, L; de Zwaan, M
2013-02-01
The Behavioural Inhibition System/Behavioural Activation System Scales (BIS/BAS Scales) developed by Carver and White [1] are a self-rating instrument to assess the dispositional sensitivity to punishment and reward. The present work aims to examine the factor structure of the German version of the BIS/BAS Scales. In a large German population-based sample (n = 1881) the model fit of several factor models was tested using confirmatory factor analyses. The best model fit was found for the 5-factor model with two BIS (anxiety, fear) and three BAS (drive, reward responsiveness, fun seeking) scales, whereas the BIS-fear, BAS-reward responsiveness, and BAS-fun seeking subscales showed low internal consistency. The BIS/BAS scales were negatively correlated with age, and women reported higher BIS subscale scores than men. Confirmatory factor analyses suggest a 5-factor model. However, due to the low internal reliability of some of the subscales the use of this model is questionable. © Georg Thieme Verlag KG Stuttgart · New York.
Kopelman, Naama M; Mayzel, Jonathan; Jakobsson, Mattias; Rosenberg, Noah A; Mayrose, Itay
2015-09-01
The identification of the genetic structure of populations from multilocus genotype data has become a central component of modern population-genetic data analysis. Application of model-based clustering programs often entails a number of steps, in which the user considers different modelling assumptions, compares results across different predetermined values of the number of assumed clusters (a parameter typically denoted K), examines multiple independent runs for each fixed value of K, and distinguishes among runs belonging to substantially distinct clustering solutions. Here, we present Clumpak (Cluster Markov Packager Across K), a method that automates the postprocessing of results of model-based population structure analyses. For analysing multiple independent runs at a single K value, Clumpak identifies sets of highly similar runs, separating distinct groups of runs that represent distinct modes in the space of possible solutions. This procedure, which generates a consensus solution for each distinct mode, is performed by the use of a Markov clustering algorithm that relies on a similarity matrix between replicate runs, as computed by the software Clumpp. Next, Clumpak identifies an optimal alignment of inferred clusters across different values of K, extending a similar approach implemented for a fixed K in Clumpp and simplifying the comparison of clustering results across different K values. Clumpak incorporates additional features, such as implementations of methods for choosing K and comparing solutions obtained by different programs, models, or data subsets. Clumpak, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology. © 2015 John Wiley & Sons Ltd.
Fernández-Mazuecos, Mario; Vargas, Pablo
2013-06-01
· The role of Quaternary climatic shifts in shaping the distribution of Linaria elegans, an Iberian annual plant, was investigated using species distribution modelling and molecular phylogeographical analyses. Three hypotheses are proposed to explain the Quaternary history of its mountain ring range. · The distribution of L. elegans was modelled using the maximum entropy method and projected to the last interglacial and to the last glacial maximum (LGM) using two different paleoclimatic models: the Community Climate System Model (CCSM) and the Model for Interdisciplinary Research on Climate (MIROC). Two nuclear and three plastid DNA regions were sequenced for 24 populations (119 individuals sampled). Bayesian phylogenetic, phylogeographical, dating and coalescent-based population genetic analyses were conducted. · Molecular analyses indicated the existence of northern and southern glacial refugia and supported two routes of post-glacial recolonization. These results were consistent with the LGM distribution as inferred under the CCSM paleoclimatic model (but not under the MIROC model). Isolation between two major refugia was dated back to the Riss or Mindel glaciations, > 100 kyr before present (bp). · The Atlantic distribution of inferred refugia suggests that the oceanic (buffered)-continental (harsh) gradient may have played a key and previously unrecognized role in determining Quaternary distribution shifts of Mediterranean plants. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Gunzburger, Max
2017-06-01
Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
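The conditional value-at-risk measure used to pose the stochastic optimization has a simple sample-based form: the average of the worst (1 - alpha) fraction of outcomes beyond the value-at-risk threshold. The sketch below is illustrative only and is not tied to the acoustic liner model; the loss distribution and confidence level are assumptions.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Sample conditional value-at-risk: mean of the worst (1 - alpha) share of losses."""
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)        # value-at-risk threshold
    return losses[losses >= var].mean()

# Hypothetical noise-attenuation losses from an ensemble of uncertain liner designs.
rng = np.random.default_rng(1)
samples = rng.normal(loc=2.0, scale=0.5, size=10_000)
print("VaR_0.95 :", np.quantile(samples, 0.95))
print("CVaR_0.95:", cvar(samples, alpha=0.95))
```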
Masiak, Marek; Loza, Bartosz
2004-01-01
Many inconsistencies across dimensional studies of schizophrenia(s) are being unveiled. These problems are strongly related to the methodological aspects of collecting data and to the specific statistical analyses. Psychiatrists have developed numerous psychopathological models derived from analytic studies based on SAPS/SANS (the Scale for the Assessment of Positive Symptoms/the Scale for the Assessment of Negative Symptoms) and PANSS (the Positive and Negative Syndrome Scale). A unique validation of two parallel, independent factor models, ascribed to the same illness and based on different diagnostic scales, was performed to investigate indirect methodological causes of clinical discrepancies. 100 newly admitted patients (mean age 33.5, range 18-45; 64 males, 36 females; hospitalised on average 5.15 times) with paranoid schizophrenia (according to ICD-10) were scored and analysed using PANSS and SAPS/SANS during psychotic exacerbation. All patients were treated with neuroleptics of various kinds, at 410 mg chlorpromazine equivalents (atypicals:typicals = 41:59). Factor analyses (principal component analysis with normalised varimax rotation) were applied to the basic results. To investigate cross-model validity, canonical analysis was applied. Models of schizophrenia varied from 3 to 5 factors. The PANSS model included positive, negative, disorganisation, cognitive and depressive components, while the SAPS/SANS model was dominated by positive, negative and disorganisation factors. The SAPS/SANS accounted for merely 48% of the PANSS common variance. The SAPS/SANS combined measurement preferentially (67% of canonical variance) targeted the positive-negative dichotomy. Respectively, PANSS shared positive-negative phenomenology in 35% of its own variance. The general concept of five-dimensionality in paranoid schizophrenia looks clinically more heuristic and statistically more stable.
Rautenberg, Tamlyn Anne; Zerwes, Ute; Lee, Way Seah
2018-01-01
Objective To perform cost utility (CU) and budget impact (BI) analyses augmented by scenario analyses of critical model structure components to evaluate racecadotril as adjuvant to oral rehydration solution (ORS) for children under 5 years with acute diarrhea in Malaysia. Methods A CU model was adapted to evaluate racecadotril plus ORS vs ORS alone for acute diarrhea in children younger than 5 years from a Malaysian public payer’s perspective. A bespoke BI analysis was undertaken in addition to detailed scenario analyses with respect to critical model structure components. Results According to the CU model, the intervention is less costly and more effective than comparator for the base case with a dominant incremental cost-effectiveness ratio of −RM 1,272,833/quality-adjusted life year (USD −312,726/quality-adjusted life year) in favor of the intervention. According to the BI analysis (assuming an increase of 5% market share per year for racecadotril+ORS for 5 years), the total cumulative incremental percentage reduction in health care expenditure for diarrhea in children is 0.136578%, resulting in a total potential cumulative cost savings of −RM 73,193,603 (USD −17,983,595) over a 5-year period. Results hold true across a range of plausible scenarios focused on critical model components. Conclusion Adjuvant racecadotril vs ORS alone is potentially cost-effective from a Malaysian public payer perspective subject to the assumptions and limitations of the model. BI analysis shows that this translates into potential cost savings for the Malaysian public health care system. Results hold true at evidence-based base case values and over a range of alternate scenarios. PMID:29588606
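The headline quantities of the cost-utility and budget-impact analyses reduce to simple arithmetic once incremental costs and effects are available. The sketch below uses hypothetical placeholder numbers, not the Malaysian model inputs.

```python
# Cost-utility: incremental cost-effectiveness ratio (ICER).
delta_cost = -1500.0      # hypothetical incremental cost (intervention minus comparator), RM
delta_qaly = 0.0012       # hypothetical incremental quality-adjusted life years
icer = delta_cost / delta_qaly
print(f"ICER: RM {icer:,.0f} per QALY (negative cost with positive QALYs => dominant)")

# Budget impact: cumulative savings as the intervention's market share grows by 5% per year.
eligible_cases_per_year = 100_000        # hypothetical treated population
saving_per_case = 15.0                   # hypothetical saving per case, RM
cumulative = 0.0
for year in range(1, 6):
    market_share = 0.05 * year           # 5% uptake added each year
    cumulative += eligible_cases_per_year * market_share * saving_per_case
print(f"Cumulative 5-year budget impact (savings): RM {-cumulative:,.0f}")
```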
A simulation-based approach for estimating premining water quality: Red Mountain Creek, Colorado
Runkel, Robert L.; Kimball, Briant A; Walton-Day, Katherine; Verplanck, Philip L.
2007-01-01
Regulatory agencies are often charged with the task of setting site-specific numeric water quality standards for impaired streams. This task is particularly difficult for streams draining highly mineralized watersheds with past mining activity. Baseline water quality data obtained prior to mining are often non-existent and application of generic water quality standards developed for unmineralized watersheds is suspect given the geology of most watersheds affected by mining. Various approaches have been used to estimate premining conditions, but none of the existing approaches rigorously consider the physical and geochemical processes that ultimately determine instream water quality. An approach based on simulation modeling is therefore proposed herein. The approach utilizes synoptic data that provide spatially-detailed profiles of concentration, streamflow, and constituent load along the study reach. This field data set is used to calibrate a reactive stream transport model that considers the suite of physical and geochemical processes that affect constituent concentrations during instream transport. A key input to the model is the quality and quantity of waters entering the study reach. This input is based on chemical analyses available from synoptic sampling and observed increases in streamflow along the study reach. Given the calibrated model, additional simulations are conducted to estimate premining conditions. In these simulations, the chemistry of mining-affected sources is replaced with the chemistry of waters that are thought to be unaffected by mining (proximal, premining analogues). The resultant simulations provide estimates of premining water quality that reflect both the reduced loads that were present prior to mining and the processes that affect these loads as they are transported downstream. This simulation-based approach is demonstrated using data from Red Mountain Creek, Colorado, a small stream draining a heavily-mined watershed. Model application to the premining problem for Red Mountain Creek is based on limited field reconnaissance and chemical analyses; additional field work and analyses may be needed to develop definitive, quantitative estimates of premining water quality.
De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia
2017-11-13
Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicated that the pitch of complex real life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those indicated in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of height and the salience of the pitch percept. Copyright © 2017 Elsevier Inc. All rights reserved.
Akrami, Mohammad; Qian, Zhihui; Zou, Zhemin; Howard, David; Nester, Chris J; Ren, Lei
2018-04-01
The objective of this study was to develop and validate a subject-specific framework for modelling the human foot. This was achieved by integrating medical image-based finite element modelling, individualised multi-body musculoskeletal modelling and 3D gait measurements. A 3D ankle-foot finite element model comprising all major foot structures was constructed based on MRI of one individual. A multi-body musculoskeletal model and 3D gait measurements for the same subject were used to define loading and boundary conditions. Sensitivity analyses were used to investigate the effects of key modelling parameters on model predictions. Prediction errors of average and peak plantar pressures were below 10% in all ten plantar regions at five key gait events with only one exception (lateral heel, in early stance, error of 14.44%). The sensitivity analyses results suggest that predictions of peak plantar pressures are moderately sensitive to material properties, ground reaction forces and muscle forces, and significantly sensitive to foot orientation. The maximum region-specific percentage change ratios (peak stress percentage change over parameter percentage change) were 1.935-2.258 for ground reaction forces, 1.528-2.727 for plantar flexor muscles and 4.84-11.37 for foot orientations. This strongly suggests that loading and boundary conditions need to be very carefully defined based on personalised measurement data.
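The region-specific percentage change ratio reported above is a standard normalised sensitivity index: the relative change in the predicted peak stress divided by the relative change imposed on the input parameter. A minimal computation with invented numbers:

```python
def percentage_change_ratio(output_base, output_perturbed, param_base, param_perturbed):
    """Normalised sensitivity: relative output change over relative parameter change."""
    rel_output = (output_perturbed - output_base) / output_base
    rel_param = (param_perturbed - param_base) / param_base
    return rel_output / rel_param

# Hypothetical example: peak plantar pressure rises from 220 kPa to 245 kPa
# when the plantar-flexor muscle force is increased by 5%.
print(percentage_change_ratio(220.0, 245.0, 1.00, 1.05))  # ~2.27, within the reported range
```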
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobromir Panayotov; Andrew Grief; Brad J. Merrill
'Fusion for Energy' (F4E) develops, designs and implements the European Test Blanket Systems (TBS) in ITER - Helium-Cooled Lithium-Lead (HCLL) and Helium-Cooled Pebble-Bed (HCPB). Safety demonstration is an essential element for the integration of TBS in ITER, and accident analyses are one of its critical segments. A systematic approach to the accident analyses had been acquired under the F4E contract on TBS safety analyses. F4E technical requirements and AMEC and INL efforts resulted in the development of a comprehensive methodology for fusion breeding blanket accident analyses. It addresses the specificity of the breeding blanket designs, materials and phenomena and at the same time is consistent with the one already applied to ITER accident analyses. The methodology consists of several phases. First, the reference scenarios are selected on the basis of FMEA studies. Second, in elaborating the accident analysis specifications, phenomena identification and ranking tables are used to identify the requirements to be met by the code(s) and TBS models. Thus the limitations of the codes are identified and possible solutions to be built into the models are proposed. These include, among others, the loose coupling of different codes or code versions in order to simulate multi-fluid flows and phenomena. The code selection and the issue of the accident analysis specifications conclude this second step. The breeding blanket and ancillary system models are then built. In this work, challenges met and solutions used in the development of both MELCOR and RELAP5 code models of the HCLL and HCPB TBSs are shared. The developed models are qualified by comparison with finite element analyses, by code-to-code comparison and by sensitivity studies. Finally, the qualified models are used for the execution of the accident analyses of specific scenarios. Where possible, the methodology phases are illustrated in the paper by a limited number of tables and figures. Each phase and its results, as well as the methodology applications to the EU HCLL and HCPB TBSs, will be described in detail in separate papers. The developed methodology is applicable to accident analyses of other TBSs to be tested in ITER as well as to DEMO breeding blankets.
Show me the data: advances in multi-model benchmarking, assimilation, and forecasting
NASA Astrophysics Data System (ADS)
Dietze, M.; Raiho, A.; Fer, I.; Cowdery, E.; Kooper, R.; Kelly, R.; Shiklomanov, A. N.; Desai, A. R.; Simkins, J.; Gardella, A.; Serbin, S.
2016-12-01
Researchers want their data to inform carbon cycle predictions, but there are considerable bottlenecks between data collection and the use of data to calibrate and validate earth system models and inform predictions. This talk highlights recent advancements in the PEcAn project aimed at making it easier for individual researchers to confront models with their own data: (1) the development of an easily extensible site-scale benchmarking system aimed at ensuring that models capture process rather than just reproducing pattern; (2) efficient emulator-based Bayesian parameter data assimilation to constrain model parameters; (3) a novel, generalized approach to ensemble data assimilation to estimate carbon pools and fluxes and quantify process error; (4) automated processing and downscaling of CMIP climate scenarios to support forecasts that include driver uncertainty; (5) a large expansion in the number of models supported, with new tools for conducting multi-model and multi-site analyses; and (6) a network-based architecture that allows analyses to be shared with model developers and other collaborators. Application of these methods is illustrated with data across a wide range of time scales, from eddy-covariance to forest inventories to tree rings to paleoecological pollen proxies.
Chen, G; Wu, F Y; Liu, Z C; Yang, K; Cui, F
2015-08-01
Subject-specific finite element (FE) models can be generated from computed tomography (CT) datasets of a bone. A key step is assigning material properties automatically onto finite element models, which remains a great challenge. This paper proposes a node-based assignment approach and also compares it with the element-based approach in the literature. Both approaches were implemented using ABAQUS. The assignment procedure is divided into two steps: generating the data file of the image intensity of a bone in a MATLAB program and reading the data file into ABAQUS via user subroutines. The node-based approach assigns the material properties to each node of the finite element mesh, while the element-based approach assigns the material properties directly to each integration point of an element. Both approaches are independent from the type of elements. A number of FE meshes are tested and both give accurate solutions; comparatively the node-based approach involves less programming effort. The node-based approach is also independent from the type of analyses; it has been tested on the nonlinear analysis of a Sawbone femur. The node-based approach substantially improves the level of automation of the assignment procedure of bone material properties. It is the simplest and most powerful approach that is applicable to many types of analyses and elements. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
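To make the node-based idea concrete, the sketch below assigns a density value to each mesh node by nearest-neighbour lookup from a CT grid and converts it to an elastic modulus with a generic power law E = a * rho^b; the coefficients, densities and coordinates are illustrative assumptions, not the authors' values or an ABAQUS interface.

```python
import numpy as np

def modulus_from_density(rho, a=6950.0, b=1.49):
    """Generic density-to-modulus power law E = a * rho**b (MPa, g/cm^3); coefficients assumed."""
    return a * np.power(rho, b)

def sample_density_at_nodes(ct_values, ct_coords, node_coords):
    """Assign each FE node the density of its nearest CT sample point (nearest-neighbour lookup)."""
    rho_nodes = np.empty(len(node_coords))
    for k, p in enumerate(node_coords):
        idx = np.argmin(np.sum((ct_coords - p)**2, axis=1))
        rho_nodes[k] = ct_values[idx]
    return rho_nodes

# Hypothetical CT grid (density in g/cm^3) and FE node positions (mm).
rng = np.random.default_rng(2)
ct_coords = rng.uniform(0, 50, size=(5000, 3))
ct_density = rng.uniform(0.2, 1.8, size=5000)
node_coords = rng.uniform(0, 50, size=(200, 3))

rho_nodes = sample_density_at_nodes(ct_density, ct_coords, node_coords)
E_nodes = modulus_from_density(rho_nodes)           # one modulus value per node
print(E_nodes[:5])
```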
NASA Astrophysics Data System (ADS)
Krell, Moritz; Walzer, Christine; Hergert, Susann; Krüger, Dirk
2017-09-01
As part of their professional competencies, science teachers need an elaborate meta-modelling knowledge as well as modelling skills in order to guide and monitor modelling practices of their students. However, qualitative studies about (pre-service) science teachers' modelling practices are rare. This study provides a category system which is suitable to analyse and to describe pre-service science teachers' modelling activities and to infer modelling strategies. The category system was developed based on theoretical considerations and was inductively refined within the methodological frame of qualitative content analysis. For the inductive refinement, modelling practices of pre-service teachers (n = 4) have been video-taped and analysed. In this study, one case was selected to demonstrate the application of the category system to infer modelling strategies. The contribution of this study for science education research and science teacher education is discussed.
Lyon, Aaron R; Pullmann, Michael D; Dorsey, Shannon; Martin, Prerna; Grigore, Alexandra A; Becker, Emily M; Jensen-Doss, Amanda
2018-05-11
Measurement-based care (MBC) is an increasingly popular, evidence-based practice, but there are no tools with established psychometrics to evaluate clinician use of MBC practices in mental health service delivery. The current study evaluated the reliability, validity, and factor structure of scores generated from a brief, standardized tool to measure MBC practices, the Current Assessment Practice Evaluation-Revised (CAPER). Survey data from a national sample of 479 mental health clinicians were used to conduct exploratory and confirmatory factor analyses, as well as reliability and validity analyses (e.g., relationships between CAPER subscales and clinician MBC attitudes). Analyses revealed competing two- and three-factor models. Regardless of the model used, scores from CAPER subscales demonstrated good reliability and convergent and divergent validity with MBC attitudes in the expected directions. The CAPER appears to be a psychometrically sound tool for assessing clinician MBC practices. Future directions for development and application of the tool are discussed.
Multiple linear regression models are often used to predict levels of fecal indicator bacteria (FIB) in recreational swimming waters based on independent variables (IVs) such as meteorologic, hydrodynamic, and water-quality measures. The IVs used for these analyses are traditiona...
A General Accelerated Degradation Model Based on the Wiener Process.
Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning
2016-12-06
Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
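As a concrete illustration of a Wiener-process degradation model with a nonlinear time scale, the sketch below simulates paths of X(t) = mu * Lambda(t) + sigma * B(Lambda(t)) with a power-law time transformation Lambda(t) = t^gamma; the drift, diffusion and time-scale parameters are invented and are not estimates from the paper.

```python
import numpy as np

def simulate_wiener_degradation(mu, sigma, gamma, t_max=100.0, n_steps=200, n_units=5, seed=3):
    """Simulate X(t) = mu*Lambda(t) + sigma*B(Lambda(t)) with Lambda(t) = t**gamma."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, t_max, n_steps + 1)
    lam = t**gamma                                  # transformed (possibly nonlinear) time scale
    d_lam = np.diff(lam)
    # Brownian increments on the transformed time scale, one row per test unit.
    d_b = rng.normal(0.0, np.sqrt(d_lam), size=(n_units, n_steps))
    x = np.concatenate([np.zeros((n_units, 1)),
                        np.cumsum(mu * d_lam + sigma * d_b, axis=1)], axis=1)
    return t, x

t, paths = simulate_wiener_degradation(mu=0.05, sigma=0.2, gamma=1.3)
print("final degradation levels:", np.round(paths[:, -1], 2))
```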
A General Accelerated Degradation Model Based on the Wiener Process
Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning
2016-01-01
Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses. PMID:28774107
Plants and pixels: Comparing phenologies from the ground and from space (Invited)
NASA Astrophysics Data System (ADS)
Rutishauser, T.; Stoekli, R.; Jeanneret, F.; Peñuelas, J.
2010-12-01
Changes in the seasonality of life cycles of plants as recorded in phenological observations have been widely analysed at the species level with data available for many decades back in time. At the same time, seasonality changes in satellite-based observations and prognostic phenology models comprise information at the pixel-size or landscape scale. Change analysis of satellite-based records is restricted due to relatively short satellite records that further include gaps, while model-based analyses are biased due to current model deficiencies. At 30 selected sites across Europe, we analysed three different sources of plant seasonality during the 1971-2000 period. Data consisted of (1) species-specific development stages of flowering and leaf-out, with different species observed at each site; (2) a synthetic phenological metric that integrates the common interannual phenological signal across all species at one site; and (3) daily Leaf Area Index estimated with a prognostic phenology model. The prior uncertainties of the model's empirical parameter space are constrained by assimilating the Fraction of Photosynthetically Active Radiation absorbed by vegetation (FPAR) and Leaf Area Index (LAI) from the MODerate Resolution Imaging Spectroradiometer (MODIS). We extracted the day of year when the 25%, 50% and 75% thresholds were passed each spring. The question arises of how the three phenological signals compare and correlate across climate zones in Europe. Is there a match between single species observations, species-based ground-observed metrics and the landscape-scale prognostic model? Are there single key species across Europe that best represent a landscape-scale measure from the prognostic model? Can one source substitute another and serve as proxy data? What can we learn from potential mismatches? Focusing on changes in spring, this contribution presents first results of an ongoing comparison study from a number of European test sites that will be extended to the pan-European phenological database Cost725 and PEP725.
NASA Technical Reports Server (NTRS)
Xue, Yan; Balmaseda, Magdalena A.; Boyer, Tim; Ferry, Nicolas; Good, Simon; Ishikawa, Ichiro; Rienecker, Michele; Rosati, Tony; Yin, Yonghong; Kumar, Arun
2012-01-01
Upper ocean heat content (HC) is one of the key indicators of climate variability on many time scales extending from seasonal to interannual to long-term climate trends. For example, HC in the tropical Pacific provides information on thermocline anomalies that is critical for the long-lead forecast skill of ENSO. Since HC variability is also associated with SST variability, a better understanding and monitoring of HC variability can help us understand and forecast SST variability associated with ENSO and other modes such as the Indian Ocean Dipole (IOD), Pacific Decadal Oscillation (PDO), Tropical Atlantic Variability (TAV) and Atlantic Multidecadal Oscillation (AMO). An accurate ocean initialization of HC anomalies in coupled climate models could also contribute to skill in decadal climate prediction. Errors, and/or uncertainties, in the estimation of HC variability can be affected by many factors including uncertainties in surface forcings, ocean model biases, and deficiencies in data assimilation schemes. Changes in observing systems can also leave an imprint on the estimated variability. The availability of multiple operational ocean analyses (ORAs) that are routinely produced by operational and research centers around the world provides an opportunity to assess uncertainties in HC analyses and to help identify gaps in observing systems as they impact the quality of ORAs and therefore climate model forecasts. A comparison of ORAs also gives an opportunity to identify deficiencies in data assimilation schemes, and can be used as a basis for the development of real-time multi-model ensemble HC monitoring products. The OceanObs09 Conference called for an intercomparison of ORAs and the use of ORAs for global ocean monitoring. As a follow-up, we intercompared HC variations from ten ORAs: two objective analyses based on in-situ data only and eight model analyses based on ocean data assimilation systems. The mean, annual cycle, interannual variability and long-term trend of HC have been analyzed.
A Cyber-Attack Detection Model Based on Multivariate Analyses
NASA Astrophysics Data System (ADS)
Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi
In the present paper, we propose a novel cyber-attack detection model based on two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and cluster analysis method. We quantify the observed qualitative audit event sequence via the quantification method IV, and collect similar audit event sequence in the same groups based on the cluster analysis. It is shown in simulation experiments that our model can improve the cyber-attack detection accuracy in some realistic cases where both normal and attack activities are intermingled.
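The pipeline described (quantify qualitative audit events, then group similar sequences) can be approximated in a few lines. The sketch below substitutes a simple bag-of-events encoding and k-means for Hayashi's quantification method IV and the authors' cluster analysis, so it illustrates only the workflow, not the actual model; the event names and sequences are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical audit event sequences observed on a host machine.
sequences = [
    ["login", "read", "read", "logout"],
    ["login", "read", "write", "logout"],
    ["login", "sudo", "write", "write", "network"],
    ["login", "sudo", "network", "network", "network"],
]
vocab = sorted({e for seq in sequences for e in seq})

# Quantify each qualitative sequence as a vector of event counts (stand-in for the quantification step).
X = np.array([[seq.count(e) for e in vocab] for seq in sequences], dtype=float)

# Group similar sequences; clusters dominated by unusual events can be flagged for inspection.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(range(len(sequences)), labels)))
```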
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.; Reale, Oreste
2003-01-01
We describe a variational continuous assimilation (VCA) algorithm for assimilating tropical rainfall data using moisture and temperature tendency corrections as the control variable to offset model deficiencies. For rainfall assimilation, model errors are of special concern since model-predicted precipitation is based on parameterized moist physics, which can have substantial systematic errors. This study examines whether a VCA scheme using the forecast model as a weak constraint offers an effective pathway to precipitation assimilation. The particular scheme we examine employs a '1+1' dimension precipitation observation operator based on a 6-h integration of a column model of moist physics from the Goddard Earth Observing System (GEOS) global data assimilation system (DAS). In earlier studies, we tested a simplified version of this scheme and obtained improved monthly-mean analyses and better short-range forecast skills. This paper describes the full implementation of the 1+1D VCA scheme using background and observation error statistics, and examines how it may improve GEOS analyses and forecasts of prominent tropical weather systems such as hurricanes. Parallel assimilation experiments with and without rainfall data for Hurricanes Bonnie and Floyd show that assimilating 6-h TMI and SSM/I surface rain rates leads to more realistic storm features in the analysis, which, in turn, provide better initial conditions for 5-day storm track prediction and precipitation forecast. These results provide evidence that addressing model deficiencies in moisture tendency may be crucial to making effective use of precipitation information in data assimilation.
The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.
Olivier, Brett G; Bergmann, Frank T
2015-09-04
Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e. g. by extending it to cover dynamic FBC models).
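Flux Balance Analysis itself is a linear program: maximise an objective c^T v subject to the steady-state constraint S v = 0 and the flux bounds. The toy network below is invented and uses scipy directly rather than an SBML/FBC-aware tool, so it only sketches the optimisation that an FBC-encoded model describes.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions): uptake -> conversion -> export.
S = np.array([
    [1, -1,  0],   # metabolite A: produced by R1, consumed by R2
    [0,  1, -1],   # metabolite B: produced by R2, consumed by R3
])
bounds = [(0, 10), (0, 10), (0, 10)]        # lower/upper flux bounds per reaction
c = np.array([0, 0, 1])                     # objective: maximise the export flux R3

# linprog minimises, so negate the objective; steady state imposes S v = 0.
res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", res.x, "objective value:", -res.fun)
```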
The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.
Olivier, Brett G; Bergmann, Frank T
2015-06-01
Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e. g. by extending it to cover dynamic FBC models).
Empirical analyses of plant-climate relationships for the western United States
Gerald E. Rehfeldt; Nicholas L. Crookston; Marcus V. Warwell; Jeffrey S. Evans
2006-01-01
The Random Forests multiple-regression tree was used to model climate profiles of 25 biotic communities of the western United States and nine of their constituent species. Analyses of the communities were based on a gridded sample of ca. 140,000 points, while those for the species used presence-absence data from ca. 120,000 locations. Independent variables included 35...
Finite element analyses of two dimensional, anisotropic heat transfer in wood
John F. Hunt; Hongmei Gu
2004-01-01
The anisotropy of wood creates a complex problem for solving heat and mass transfer problems that require analyses to be based on fundamental material properties of the wood structure. Inputting basic orthogonal properties of the wood material alone is not sufficient for accurate modeling because wood is a combination of porous fiber cells that are aligned and mis-...
2012-01-01
Background A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. Methods We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). Results The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. Conclusions The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint. PMID:22962944
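For readers unfamiliar with the scoring rule mentioned, the mean logarithmic score is the average negative log predictive density at the observed counts (lower is better). The sketch below scores two competing predictive distributions, Poisson and negative binomial, on invented count data; it is not tied to the INLA fits or the vertigo trial data, and the dispersion parameter is an assumption.

```python
import numpy as np
from scipy.stats import poisson, nbinom

# Hypothetical observed attack counts and leave-one-out predictive means.
y = np.array([0, 2, 5, 1, 9, 3, 0, 7])
mu = np.array([1.1, 2.4, 4.2, 1.5, 6.8, 2.9, 0.8, 5.5])

# Mean logarithmic score under a Poisson predictive distribution.
log_score_pois = -poisson.logpmf(y, mu).mean()

# Same under a negative binomial predictive (extra dispersion); size parameter assumed.
size = 3.0
p = size / (size + mu)                 # parameterised so the predictive mean equals mu
log_score_nb = -nbinom.logpmf(y, size, p).mean()

print(f"mean log score  Poisson: {log_score_pois:.3f}   NegBin: {log_score_nb:.3f}")
```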
Adrion, Christine; Mansmann, Ulrich
2012-09-10
A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint.
ERIC Educational Resources Information Center
González-Cutre, David; Ferriz, Roberto; Beltrán-Carrillo, Vicente J.; Andrés-Fabra, José A.; Montero-Carretero, Carlos; Cervelló, Eduardo; Moreno-Murcia, Juan Antonio
2014-01-01
The aim of this study was to analyse the effects of a school-based intervention to promote physical activity, utilising the postulates of the trans-contextual model of motivation. The study examined two separate classes of elementary school students (mean age 11.28 years), one of which served as the control group (n = 26) and the other as the…
Kinikoglu, Beste
2017-12-01
Tissue engineered full-thickness human skin substitutes have various applications in the clinic and in the laboratory, such as in the treatment of burns or deep skin defects, and as reconstructed human skin models in the safety testing of drugs and cosmetics and in the fundamental study of skin biology and pathology. So far, different approaches have been proposed for the generation of reconstructed skin, each with its own advantages and disadvantages. Here, the classic tissue engineering approach, based on cell-seeded polymeric scaffolds, is compared with the less-studied cell self-assembly approach, where the cells are coaxed to synthesise their own extracellular matrix (ECM). The resulting full-thickness human skin substitutes were analysed by means of histological and immunohistochemical analyses. It was found that both the scaffold-free and the scaffold-based skin equivalents successfully mimicked the functionality and morphology of native skin, with complete epidermal differentiation (as determined by the expression of filaggrin), the presence of a continuous basement membrane expressing collagen VII, and new ECM deposition by dermal fibroblasts. On the other hand, the scaffold-free model had a thicker epidermis and a significantly higher number of Ki67-positive proliferative cells, indicating a higher capacity for self-renewal, as compared to the scaffold-based model. 2017 FRAME.
Unnikrishnan, Ginu U.; Morgan, Elise F.
2011-01-01
Inaccuracies in the estimation of material properties and errors in the assignment of these properties into finite element models limit the reliability, accuracy, and precision of quantitative computed tomography (QCT)-based finite element analyses of the vertebra. In this work, a new mesh-independent, material mapping procedure was developed to improve the quality of predictions of vertebral mechanical behavior from QCT-based finite element models. In this procedure, an intermediate step, called the material block model, was introduced to determine the distribution of material properties based on bone mineral density, and these properties were then mapped onto the finite element mesh. A sensitivity study was first conducted on a calibration phantom to understand the influence of the size of the material blocks on the computed bone mineral density. It was observed that varying the material block size produced only marginal changes in the predictions of mineral density. Finite element (FE) analyses were then conducted on a square column-shaped region of the vertebra and also on the entire vertebra in order to study the effect of material block size on the FE-derived outcomes. The predicted values of stiffness for the column and the vertebra decreased with decreasing block size. When these results were compared to those of a mesh convergence analysis, it was found that the influence of element size on vertebral stiffness was less than that of the material block size. This mapping procedure allows the material properties in a finite element study to be determined based on the block size required for an accurate representation of the material field, while the size of the finite elements can be selected independently and based on the required numerical accuracy of the finite element solution. The mesh-independent, material mapping procedure developed in this study could be particularly helpful in improving the accuracy of finite element analyses of vertebroplasty and spine metastases, as these analyses typically require mesh refinement at the interfaces between distinct materials. Moreover, the mapping procedure is not specific to the vertebra and could thus be applied to many other anatomic sites. PMID:21823740
Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave
2014-01-01
We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...
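For context, the sketch below runs standard LDA machinery on a small invented site-by-species count matrix using scikit-learn; it illustrates how assemblages are decomposed into component communities, but it is not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical site-by-species abundance counts (rows: sites, columns: species).
rng = np.random.default_rng(4)
counts = rng.poisson(lam=[[8, 6, 1, 0, 0], [7, 5, 2, 1, 0],
                          [1, 0, 6, 9, 4], [0, 1, 5, 8, 6]], size=(4, 5))

lda = LatentDirichletAllocation(n_components=2, random_state=0)
site_mixtures = lda.fit_transform(counts)           # per-site proportions of each component community
community_profiles = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

print("site-level community proportions:\n", np.round(site_mixtures, 2))
print("species composition of each component community:\n", np.round(community_profiles, 2))
```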
Garnier, Cédric; Mounier, Stéphane; Benaïm, Jean Yves
2004-10-01
Natural organic matter (NOM) behaviour towards protons is an important parameter for understanding NOM fate in the environment. Moreover, it is necessary to determine NOM acid-base properties before investigating trace metal complexation by natural organic matter. This work focuses on the possibility of determining these acid-base properties by accurate and simple titrations, even at low organic matter concentrations. The experiments were conducted on concentrated and diluted solutions of humic and fulvic acids extracted from the Laurentian River, on concentrated and diluted model solutions of well-known simple molecules (acetic and phenolic acids), and on natural samples from the Seine River (France) that were not pre-concentrated. Titration experiments were modelled by a 6-acidic-site discrete model, except for the model solutions. The modelling software used, called PROSECE (Programme d'Optimisation et de SpEciation Chimique dans l'Environnement), was developed in our laboratory and is based on the resolution of mass balance equilibria. The results obtained on extracted organic matter and model solutions indicate a threshold value for a confident determination of the studied organic matter acid-base properties. They also show an aberrant decrease of the carboxylic/phenolic ratio with increasing sample dilution. This shift is neither due to any conformational effect, since it is also observed in the model solutions, nor to ionic strength variations, which were controlled during all experiments. On the other hand, it could result from an electrode malfunction occurring at basic pH values, whose effect is amplified at low total concentrations of acidic sites. Thus, under our conditions, the limit for a correct modelling of NOM acid-base properties is defined as 0.04 meq of total analysed acidic site concentration. As for the analysed natural samples, their high acidic site content makes it possible to model their behaviour despite the low organic carbon concentration.
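As a minimal illustration of a discrete multi-site acid model of the kind fitted above (this is not the PROSECE software), the sketch below computes the deprotonated site concentration of a hypothetical six-site mixture as a function of pH, assuming simple monoprotic equilibria; all pKa values and site contents are invented.

```python
import numpy as np

# Illustrative 6-site discrete acid model: each site i has a total content L_i (meq/L)
# and an acidity constant pKa_i; the deprotonated fraction follows a monoprotic equilibrium.
pKa = np.array([3.0, 4.5, 6.0, 7.5, 9.0, 10.5])           # hypothetical site pKa values
L   = np.array([0.02, 0.015, 0.01, 0.008, 0.01, 0.012])   # hypothetical site contents, meq/L

def deprotonated(pH, pKa, L):
    """Total concentration of deprotonated sites at a given pH (meq/L)."""
    alpha = 1.0 / (1.0 + 10.0 ** (pKa - pH))   # deprotonated fraction for each site
    return (L * alpha).sum()

for pH in np.arange(3.0, 11.1, 1.0):
    print(f"pH {pH:4.1f}: {deprotonated(pH, pKa, L):.4f} meq/L deprotonated")
```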
NASA Astrophysics Data System (ADS)
Bamberger, Yael M.; Davis, Elizabeth A.
2013-01-01
This paper focuses on students' ability to transfer modelling performances across content areas, taking into consideration their improvement of content knowledge as a result of a model-based instruction. Sixty-five sixth grade students of one science teacher in an urban public school in the Midwestern USA engaged in scientific modelling practices that were incorporated into a curriculum focused on the nature of matter. Concept-process models were embedded in the curriculum, as well as emphasis on meta-modelling knowledge and modelling practices. Pre-post test items that required drawing scientific models of smell, evaporation, and friction were analysed. The level of content understanding was coded and scored, as were the following elements of modelling performance: explanation, comparativeness, abstraction, and labelling. Paired t-tests were conducted to analyse differences in students' pre-post tests scores on content knowledge and on each element of the modelling performances. These are described in terms of the amount of transfer. Students significantly improved in their content knowledge for the smell and the evaporation models, but not for the friction model, which was expected as that topic was not taught during the instruction. However, students significantly improved in some of their modelling performances for all the three models. This improvement serves as evidence that the model-based instruction can help students acquire modelling practices that they can apply in a new content area.
Supersonic unstalled flutter. [aerodynamic loading of thin airfoils induced by cascade motion
NASA Technical Reports Server (NTRS)
Adamczyk, J. J.; Goldstein, M. E.; Hartmann, M. J.
1978-01-01
Flutter analyses were developed to predict the onset of supersonic unstalled flutter of a cascade of two-dimensional airfoils. The first of these analyzes the onset of supersonic flutter at low levels of aerodynamic loading (i.e., backpressure), while the second examines the occurrence of supersonic flutter at moderate levels of aerodynamic loading. Both of these analyses are based on the linearized unsteady inviscid equations of gas dynamics to model the flow field surrounding the cascade. These analyses are utilized in a parametric study to show the effects of cascade geometry, inlet Mach number, and backpressure on the onset of single and multi degree of freedom unstalled supersonic flutter. Several of the results are correlated against experimental qualitative observation to validate the models.
[Modeling in value-based medicine].
Neubauer, A S; Hirneiss, C; Kampik, A
2010-03-01
Modeling plays an important role in value-based medicine (VBM). It supports decision making by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology, the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes, besides verifying internal validity, comparison with other models (external validity) and, ideally, validation of a model's predictive properties. The uncertainty inherent in any modeling should be clearly stated. This is true for economic modeling in VBM as well as for disease risk models used to support clinical decisions. In economic modeling, uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better informed decisions than would be possible without this additional information.
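To show what a Markov model of this kind amounts to computationally, the sketch below runs a three-state Markov cohort over a 40-year horizon and accumulates discounted costs and QALYs; all transition probabilities, costs and utilities are made up for illustration and are not taken from the paper.

```python
import numpy as np

# Minimal three-state Markov cohort model (well -> sick -> dead), one-year cycles.
P = np.array([[0.90, 0.07, 0.03],    # transitions from "well"
              [0.00, 0.80, 0.20],    # transitions from "sick"
              [0.00, 0.00, 1.00]])   # "dead" is absorbing
cost    = np.array([500.0, 5000.0, 0.0])   # cost per cycle in each state (hypothetical)
utility = np.array([0.95, 0.60, 0.0])      # QALY weight per cycle (hypothetical)
disc = 0.03                                # annual discount rate

state = np.array([1.0, 0.0, 0.0])          # whole cohort starts in "well"
total_cost = total_qaly = 0.0
for t in range(40):                        # 40-year horizon
    df = 1.0 / (1.0 + disc) ** t
    total_cost += df * state @ cost
    total_qaly += df * state @ utility
    state = state @ P                      # advance the cohort by one cycle

print(f"discounted cost  : {total_cost:,.0f}")
print(f"discounted QALYs : {total_qaly:.2f}")
```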
ERIC Educational Resources Information Center
Zhang, Jiabei
2011-01-01
The purpose of this study was to analyze quantitative needs for more adapted physical education (APE) teachers based on both market- and prevalence-based models. The market-based need for more APE teachers was examined based on APE teacher positions funded, while the prevalence-based need for additional APE teachers was analyzed based on students…
Dieckmann, P; Rall, M; Ostergaard, D
2009-01-01
We describe how simulation and incident reporting can be used in combination to make the interaction between people, (medical) technology and organisation safer for patients and users. We provide the background rationale for our conceptual ideas and apply the concepts to the analysis of an actual incident report. Simulation can serve as a laboratory to analyse such cases and to create relevant and effective training scenarios based on such analyses. We will describe a methodological framework for analysing simulation scenarios in a way that allows discovering and discussing mismatches between conceptual models of the device design and mental models users hold about the device and its use. We further describe how incident reporting systems can be used as one source of data to conduct the necessary needs analyses - both for training and further needs for closer analysis of specific devices or some of their special features or modes during usability analyses.
DOT National Transportation Integrated Search
1997-06-01
This report describes analysis tools to predict shift under high-speed vehicle- : track interaction. The analysis approach is based on two fundamental models : developed (as part of this research); the first model computes the track lateral : residua...
Exploring product supply across age classes and forest types
Robert C. Abt; Karen J. Lee; Gerardo Pacheco
1995-01-01
Timber supply modeling has evolved from examining inventory sustainability based on growth/drain relationships to sophisticated inventory and supply models. These analyses have consistently recognized regional, ownership (public/private), and species group (hardwood/softwood) differences. Recognition of product differences is fundamental to market analysis which...
POLYNOMIAL-BASED DISAGGREGATION OF HOURLY RAINFALL FOR CONTINUOUS HYDROLOGIC SIMULATION
Hydrologic modeling of urban watersheds for designs and analyses of stormwater conveyance facilities can be performed in either an event-based or continuous fashion. Continuous simulation requires, among other things, the use of a time series of rainfall amounts. However, for urb...
A brain-region-based meta-analysis method utilizing the Apriori algorithm.
Niu, Zhendong; Nie, Yaoxin; Zhou, Qian; Zhu, Linlin; Wei, Jieyao
2016-05-18
Brain network connectivity modeling is a crucial method for studying the brain's cognitive functions. Meta-analyses can unearth reliable results from individual studies. Meta-analytic connectivity modeling is a connectivity analysis method based on regions of interest (ROIs) which showed that meta-analyses could be used to discover brain network connectivity. In this paper, we propose a new meta-analysis method that can be used to find network connectivity models based on the Apriori algorithm, which has the potential to derive brain network connectivity models from activation information in the literature, without requiring ROIs. This method first extracts activation information from experimental studies that use cognitive tasks of the same category, and then maps the activation information to corresponding brain areas by using the automatic anatomical label atlas, after which the activation rate of these brain areas is calculated. Finally, using these brain areas, a potential brain network connectivity model is calculated based on the Apriori algorithm. The present study used this method to conduct a mining analysis on the citations in a language review article by Price (Neuroimage 62(2):816-847, 2012). The results showed that the obtained network connectivity model was consistent with that reported by Price. The proposed method is helpful to find brain network connectivity by mining the co-activation relationships among brain regions. Furthermore, results of the co-activation relationship analysis can be used as a priori knowledge for the corresponding dynamic causal modeling analysis, possibly achieving a significant dimension-reducing effect, thus increasing the efficiency of the dynamic causal modeling analysis.
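The following sketch illustrates the Apriori-style idea of mining frequently co-activated region pairs from a study-by-region activation table; the table, thresholds and region labels are invented, and this is not the authors' pipeline (which also maps coordinates to the AAL atlas).

```python
from itertools import combinations

# Toy "study x brain region" activation table (each study lists its active regions).
studies = [
    {"IFG_L", "STG_L", "MTG_L"},
    {"IFG_L", "STG_L"},
    {"IFG_L", "MTG_L", "SMA"},
    {"STG_L", "MTG_L"},
    {"IFG_L", "STG_L", "MTG_L"},
]
regions = sorted(set().union(*studies))
n = len(studies)

def support(itemset):
    """Fraction of studies in which all regions of the itemset are active."""
    return sum(itemset <= s for s in studies) / n

# Apriori-style pass: keep frequent single regions, then frequent region pairs.
min_support = 0.4
frequent_regions = {r for r in regions if support({r}) >= min_support}
for a, b in combinations(sorted(frequent_regions), 2):
    sup = support({a, b})
    if sup >= min_support:
        conf = sup / support({a})            # confidence of the rule a -> b
        print(f"{a} & {b}: support={sup:.2f}, confidence({a}->{b})={conf:.2f}")
```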
Quantitative Agent Based Model of User Behavior in an Internet Discussion Forum
Sobkowicz, Pawel
2013-01-01
The paper presents an agent-based simulation of opinion evolution, based on a nonlinear emotion/information/opinion (E/I/O) individual dynamics, applied to an actual Internet discussion forum. The goal is to reproduce the results of two-year-long observations and analyses of the users' communication behavior and of the expressed opinions and emotions, via simulations using an agent-based model. The model allowed various characteristics of the forum to be derived, including the distribution of user activity and popularity (outdegree and indegree), the distribution of the length of dialogs between participants, their political sympathies, and the emotional content and purpose of the comments. The parameters used in the model have intuitive meanings and can be translated into psychological observables. PMID:24324606
NASA Astrophysics Data System (ADS)
Stauffer, David R.
1990-01-01
The application of dynamic relationships to the analysis problem for the atmosphere is extended to use a full-physics limited-area mesoscale model as the dynamic constraint. A four-dimensional data assimilation (FDDA) scheme based on Newtonian relaxation or "nudging" is developed and evaluated in the Penn State/National Center for Atmospheric Research (PSU/NCAR) mesoscale model, which is used here as a dynamic-analysis tool. The thesis is to determine what assimilation strategies and what meteorological fields (mass, wind or both) have the greatest positive impact on the 72-h numerical simulations (dynamic analyses) of two mid-latitude, real-data cases. The basic FDDA methodology is tested in a 10-layer version of the model with a bulk-aerodynamic (single-layer) representation of the planetary boundary layer (PBL), and refined in a 15-layer version of the model by considering the effects of data assimilation within a multi-layer PBL scheme. As designed, the model solution can be relaxed toward either gridded analyses ("analysis nudging"), or toward the actual observations ("obs nudging"). The data used for assimilation include standard 12-hourly rawinsonde data, and also 3-hourly mesoalpha-scale surface data which are applied within the model's multi-layer PBL. Continuous assimilation of standard-resolution rawinsonde data into the 10-layer model successfully reduced large-scale amplitude and phase errors while the model realistically simulated mesoscale structures poorly defined or absent in the rawinsonde analyses and in the model simulations without FDDA. Nudging the model fields directly toward the rawinsonde observations generally produced results comparable to nudging toward gridded analyses. This obs-nudging technique is especially attractive for the assimilation of high-frequency, asynoptic data. Assimilation of 3-hourly surface wind and moisture data into the 15-layer FDDA system was most effective for improving the simulated precipitation fields because a significant portion of the vertically integrated moisture convergence often occurs in the PBL. Overall, the best dynamic analyses for the PBL, mass, wind and precipitation fields were obtained by nudging toward analyses of rawinsonde wind, temperature and moisture (the latter uses a weaker nudging coefficient) above the model PBL and toward analyses of surface-layer wind and moisture within the model PBL.
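Newtonian relaxation augments the model tendency with a term proportional to the difference between the analysed (or observed) value and the model state. The toy sketch below shows the idea on a scalar state; the stand-in "model physics", nudging coefficient and target value are invented and have nothing to do with the PSU/NCAR model configuration above.

```python
# Minimal sketch of Newtonian relaxation ("nudging"): the model tendency is
# augmented with G*(x_obs - x), relaxing the state toward an analysis/observation.
def model_tendency(x):
    # stand-in for the full model: relaxes toward its own preferred state (5.0)
    return -(x - 5.0) / 7200.0

G = 1.0 / 3600.0                   # nudging coefficient (1/s), hypothetical
dt = 60.0                          # time step (s)
x = 0.0                            # model state
x_obs = 12.0                       # analysed/observed value to nudge toward

for step in range(240):            # 4 hours of forward-Euler integration
    dxdt = model_tendency(x) + G * (x_obs - x)
    x = x + dt * dxdt
print(round(x, 2))                 # the state ends up between the model's and the observed value
```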
NASA Astrophysics Data System (ADS)
Nicolotti, Orazio; Giangreco, Ilenia; Miscioscia, Teresa Fabiola; Convertino, Marino; Leonetti, Francesco; Pisani, Leonardo; Carotti, Angelo
2010-02-01
A series of 27 benzamidine inhibitors covering a wide range of biological activity and chemical diversity was analysed to derive a Linear Interaction Energy in Continuum Electrostatics (LIECE) model for analysing thrombin inhibitory activity. The main interactions occurring at the thrombin binding site and the preferred binding conformations of inhibitors were explicitly biased by including into the LIECE model 10 compounds extracted from X-ray-solved thrombin-inhibitor complexes available from the Protein Data Bank (PDB). Supported by robust statistics (r² = 0.698; q² = 0.662), the LIECE model was successful in predicting the inhibitory activity for about 76% of compounds (r²ext ≥ 0.600) from a larger external test set encompassing 88 known thrombin inhibitors and, more importantly, in retrieving, at high sensitivity and with better performance than docking and shape-based methods, active compounds from a thrombin combinatorial library of 10,240 mimetic chemical products. The herein proposed LIECE model has the potential for successfully driving the design of novel thrombin inhibitors with benzamidine and/or benzamidine-like chemical structures.
Risley, John C.; Granato, Gregory E.
2014-01-01
An analysis of the use of grab sampling and nonstochastic upstream modeling methods was performed to evaluate the potential effects on modeling outcomes. Additional analyses using surrogate water-quality datasets for the upstream basin and highway catchment were provided for six Oregon study sites to illustrate the risk-based information that SELDM will produce. These analyses show that the potential effects of highway runoff on receiving-water quality downstream of the outfall depend on the ratio of drainage areas (dilution), the quality of the receiving water upstream of the highway, and the criterion concentration of the constituent of interest. These analyses also show that the probability of exceeding a water-quality criterion may depend on the input statistics used; thus, careful selection of representative values is important.
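The dependence on dilution and upstream quality can be illustrated with a simple stochastic mass-balance mixing calculation; the sketch below is not SELDM, and all distributions, units and the criterion value are hypothetical.

```python
import numpy as np

# Illustrative Monte Carlo mixing of highway runoff with upstream flow: the
# downstream concentration is the flow-weighted mean of the two inputs.
rng = np.random.default_rng(42)
n = 100_000

q_up = rng.lognormal(mean=2.0, sigma=0.6, size=n)    # upstream flow (arbitrary units)
q_hw = rng.lognormal(mean=0.0, sigma=0.8, size=n)    # highway runoff flow
c_up = rng.lognormal(mean=1.0, sigma=0.5, size=n)    # upstream concentration (ug/L)
c_hw = rng.lognormal(mean=3.0, sigma=0.7, size=n)    # runoff concentration (ug/L)

c_down = (c_up * q_up + c_hw * q_hw) / (q_up + q_hw)  # flow-weighted mixing

criterion = 30.0                                      # hypothetical water-quality criterion (ug/L)
print("P(exceed criterion downstream) =", (c_down > criterion).mean())
```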
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne
2007-01-01
A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.
ERIC Educational Resources Information Center
Machado, Juliana; Braga, Marco Antônio Barbosa
2016-01-01
A characterization of the modelling process in science is proposed for science education, based on Mario Bunge's ideas about the construction of models in science. Galileo's "Dialogues" are analysed as a potentially fruitful starting point to implement strategies aimed at modelling in the classroom in the light of that proposal. It is…
Cognitive and Neural Bases of Skilled Performance.
1987-10-04
advantage is that this method is not computationally demanding, and model-specific analyses such as high-precision source localization with realistic... and a two-high-threshold model satisfy theoretical and pragmatic independence. Discrimination and bias measures from these two models comparing... recognition memory of patients with dementing diseases, amnesics, and normal controls. We found the two-high-threshold model to be more sensitive...
The Competence of Modelling in Learning Chemical Change: A Study with Secondary School Students
ERIC Educational Resources Information Center
Oliva, José Mª; del Mar Aragón, María; Cuesta, Josefa
2015-01-01
The competence of modelling as part of learning about chemical change is analysed in a sample of 35 secondary students, ages 14-15 years, during their study of a curricular unit on this topic. The teaching approach followed is model based, with frequent use of analogies and mechanical models (fruits and bowls, Lego pieces, balls of plasticine,…
Yi, Zhenzhen; Strüder-Kypke, Michaela; Hu, Xiaozhong; Lin, Xiaofeng; Song, Weibo
2014-02-01
In order to assess how dataset-selection for multi-gene analyses affects the accuracy of inferred phylogenetic trees in ciliates, we chose five genes and the genus Paramecium, one of the most widely used model protist genera, and compared tree topologies of the single- and multi-gene analyses. Our empirical study shows that: (1) Using multiple genes improves phylogenetic accuracy, even when their one-gene topologies are in conflict with each other. (2) The impact of missing data on phylogenetic accuracy is ambiguous: resolution power and topological similarity, but not number of represented taxa, are the most important criteria of a dataset for inclusion in concatenated analyses. (3) As an example, we tested the three classification models of the genus Paramecium with a multi-gene based approach, and only the monophyly of the subgenus Paramecium is supported. Copyright © 2013 Elsevier Inc. All rights reserved.
Medium- and long-term electric power demand forecasting based on the big data of smart city
NASA Astrophysics Data System (ADS)
Wei, Zhanmeng; Li, Xiyuan; Li, Xizhong; Hu, Qinghe; Zhang, Haiyang; Cui, Pengjie
2017-08-01
Building on the smart city, this paper proposes a new electric power demand forecasting model, which integrates external data such as meteorological, geographic, population, enterprise and economic information into a big database and uses an improved algorithm to analyse electric power demand and provide decision support for decision makers. Data mining technology is used to synthesize the various kinds of information, and the information on electric power customers is analysed in an optimized manner. Forecasts are made based on the trend of electricity demand, and a smart city in north-eastern China is taken as a case study.
Liang, Li-Jung; Weiss, Robert E; Redelings, Benjamin; Suchard, Marc A
2009-10-01
Statistical analyses of phylogenetic data culminate in uncertain estimates of underlying model parameters. Lack of additional data hinders the ability to reduce this uncertainty, as the original phylogenetic dataset is often complete, containing the entire gene or genome information available for the given set of taxa. Informative priors in a Bayesian analysis can reduce posterior uncertainty; however, publicly available phylogenetic software specifies vague priors for model parameters by default. We build objective and informative priors using hierarchical random effect models that combine additional datasets whose parameters are not of direct interest but are similar to the analysis of interest. We propose principled statistical methods that permit more precise parameter estimates in phylogenetic analyses by creating informative priors for parameters of interest. Using additional sequence datasets from our lab or public databases, we construct a fully Bayesian semiparametric hierarchical model to combine datasets. A dynamic iteratively reweighted Markov chain Monte Carlo algorithm conveniently recycles posterior samples from the individual analyses. We demonstrate the value of our approach by examining the insertion-deletion (indel) process in the enolase gene across the Tree of Life using the phylogenetic software BALI-PHY; we incorporate prior information about indels from 82 curated alignments downloaded from the BAliBASE database.
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
Akam, Thomas; Costa, Rui; Dayan, Peter
2015-12-01
The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.
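A minimal simulation makes the diagnostic logic of the task concrete: a purely model-free agent's probability of repeating a stage-1 choice should depend mainly on whether the previous trial was rewarded, not on whether its transition was common or rare. The sketch below is an illustrative toy simulation (simplified task, invented parameters), not the authors' analysis code.

```python
import numpy as np

# Toy model-free (TD) agent on a simplified two-step task: stage-1 actions lead to
# one of two stage-2 states with common (0.7) or rare (0.3) transitions; stage-2
# reward probabilities drift slowly across trials.
rng = np.random.default_rng(0)
n_trials, alpha, beta = 10_000, 0.4, 4.0
q1 = np.zeros(2)                       # model-free values of the two stage-1 actions
p_reward = np.array([0.6, 0.4])        # reward probability of the two stage-2 states

stay = {("common", True): [], ("common", False): [],
        ("rare", True): [], ("rare", False): []}
prev = None
for t in range(n_trials):
    p = np.exp(beta * q1) / np.exp(beta * q1).sum()       # softmax choice
    a = rng.choice(2, p=p)
    common = rng.random() < 0.7
    s2 = a if common else 1 - a                           # common transition maps action -> state
    r = float(rng.random() < p_reward[s2])
    q1[a] += alpha * (r - q1[a])                          # model-free value update
    if prev is not None:
        pa, ptype, prew = prev
        stay[(ptype, prew)].append(a == pa)               # did the agent repeat its choice?
    prev = (a, "common" if common else "rare", bool(r))
    p_reward = np.clip(p_reward + rng.normal(0, 0.02, 2), 0.2, 0.8)

for key, vals in stay.items():
    print(key, round(np.mean(vals), 3))   # model-free signature: reward matters, transition type barely
```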
Akam, Thomas; Costa, Rui; Dayan, Peter
2015-01-01
The recently developed ‘two-step’ behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects’ investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues. PMID:26657806
Numerical Analyses for Low Reynolds Flow in a Ventricular Assist Device.
Lopes, Guilherme; Bock, Eduardo; Gómez, Luben
2017-06-01
Scientific and technological advances in blood pump development have been driven by their importance in the treatment of cardiac patients and in improving the quality of life of assisted people. To improve and optimize design and development, numerical tools have been incorporated into the analyses of these devices and have become indispensable to their advancement. This study analyzes flow behavior at low impeller Reynolds numbers, for which there is no consensus on the full development of turbulence in ventricular assist devices (VAD). To support the analyses, computational simulations were carried out in different scenarios with the same rotation speed. Two modeling approaches were applied: laminar flow, and turbulent flow with the standard, RNG and realizable κ-ε models, the standard and SST κ-ω models, and the Spalart-Allmaras model. The results agree with the literature for VADs and with the range for transient flows in stirred tanks, with an impeller Reynolds number around 2800 for the tested scenarios. The turbulence models were compared and, based on the expected physical behavior, the RNG κ-ε, standard and SST κ-ω, and Spalart-Allmaras models are suggested for numerical analyses at low impeller Reynolds numbers in the tested flow scenarios. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
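For reference, the impeller Reynolds number commonly used for rotating machinery is Re = ρND²/μ with N in revolutions per second. The sketch below evaluates it with hypothetical values (chosen only to land near the transitional range mentioned above); they are not the study's geometry or operating point.

```python
# Impeller Reynolds number: Re = rho * N * D**2 / mu (N in rev/s). Values are illustrative.
rho = 1050.0          # blood density, kg/m^3
mu = 3.5e-3           # blood dynamic viscosity, Pa.s
D = 0.017             # impeller diameter, m (hypothetical)
N = 2000.0 / 60.0     # rotation speed, rev/s (2000 rpm, hypothetical)

Re = rho * N * D**2 / mu
print(f"Impeller Reynolds number ~ {Re:.0f}")   # ~2900 with these made-up values
```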
NASA Technical Reports Server (NTRS)
Jeng, Frank F.
2007-01-01
Development of analysis guidelines for Exploration Life Support (ELS) technology tests was completed. The guidelines were developed based on analysis experiences gained from supporting Environmental Control and Life Support System (ECLSS) technology development in air revitalization systems and water recovery systems. Analyses are vital during all three phases of the ELS technology test: pre-test, during test and post test. Pre-test analyses of a test system help define hardware components, predict system and component performances, required test duration, sampling frequencies of operation parameters, etc. Analyses conducted during tests could verify the consistency of all the measurements and the performance of the test system. Post test analyses are an essential part of the test task. Results of post test analyses are an important factor in judging whether the technology development is a successful one. In addition, development of a rigorous model for a test system is an important objective of any new technology development. Test data analyses, especially post test data analyses, serve to verify the model. Test analyses have supported development of many ECLSS technologies. Some test analysis tasks in ECLSS technology development are listed in the Appendix. To have effective analysis support for ECLSS technology tests, analysis guidelines would be a useful tool. These test guidelines were developed based on experiences gained through previous analysis support of various ECLSS technology tests. A comment on analysis from an experienced NASA ECLSS manager (1) follows: "Bad analysis was one that bent the test to prove that the analysis was right to begin with. Good analysis was one that directed where the testing should go and also bridged the gap between the reality of the test facility and what was expected on orbit."
Theory and analysis of a large field polarization imaging system with obliquely incident light.
Lu, Xiaotian; Jin, Weiqi; Li, Li; Wang, Xia; Qiu, Su; Liu, Jing
2018-02-05
Polarization imaging technology provides information not only about the irradiance of a target but also about its degree of polarization and angle of polarization, which indicates extensive application potential. However, polarization imaging theory is based on paraxial optics. When a beam of obliquely incident light passes through an analyser, the direction of light propagation is not perpendicular to the surface of the analyser, and the applicability of the traditional paraxial polarization imaging theory is challenged. This paper investigates a theoretical model of a polarization imaging system with obliquely incident light and establishes a polarization imaging transmission model for a large field of obliquely incident light. In an imaging experiment with an integrating sphere light source and a rotatable polarizer, the polarization imaging transmission model is verified and analysed for two cases: incidence of natural light and of linearly polarized light. The results indicate that the theoretical model is consistent with the experimental results, while differing distinctly from the traditional paraxial approximation model. The results demonstrate the accuracy and necessity of the theoretical model and its guiding significance for systematic research on large-field polarization imaging.
NASA Astrophysics Data System (ADS)
Lefever, K.; van der A, R.; Baier, F.; Christophe, Y.; Errera, Q.; Eskes, H.; Flemming, J.; Inness, A.; Jones, L.; Lambert, J.-C.; Langerock, B.; Schultz, M. G.; Stein, O.; Wagner, A.; Chabrillat, S.
2015-03-01
This paper evaluates and discusses the quality of the stratospheric ozone analyses delivered in near real time by the MACC (Monitoring Atmospheric Composition and Climate) project during the 3-year period between September 2009 and September 2012. Ozone analyses produced by four different chemical data assimilation (CDA) systems are examined and compared: the Integrated Forecast System coupled to the Model for OZone And Related chemical Tracers (IFS-MOZART); the Belgian Assimilation System for Chemical ObsErvations (BASCOE); the Synoptic Analysis of Chemical Constituents by Advanced Data Assimilation (SACADA); and the Data Assimilation Model based on Transport Model version 3 (TM3DAM). The assimilated satellite ozone retrievals differed for each system; SACADA and TM3DAM assimilated only total ozone observations, BASCOE assimilated profiles for ozone and some related species, while IFS-MOZART assimilated both types of ozone observations. All analyses deliver total column values that agree well with ground-based observations (biases < 5%) and have a realistic seasonal cycle, except for BASCOE analyses, which underestimate total ozone in the tropics all year long by 7 to 10%, and SACADA analyses, which overestimate total ozone in polar night regions by up to 30%. The validation of the vertical distribution is based on independent observations from ozonesondes and the ACE-FTS (Atmospheric Chemistry Experiment - Fourier Transform Spectrometer) satellite instrument. It cannot be performed with TM3DAM, which is designed only to deliver analyses of total ozone columns. Vertically alternating positive and negative biases are found in the IFS-MOZART analyses as well as an overestimation of 30 to 60% in the polar lower stratosphere during polar ozone depletion events. SACADA underestimates lower stratospheric ozone by up to 50% during these events above the South Pole and overestimates it by approximately the same amount in the tropics. The three-dimensional (3-D) analyses delivered by BASCOE are found to have the best quality among the three systems resolving the vertical dimension, with biases not exceeding 10% all year long, at all stratospheric levels and in all latitude bands, except in the tropical lowermost stratosphere. The northern spring 2011 period is studied in more detail to evaluate the ability of the analyses to represent the exceptional ozone depletion event, which happened above the Arctic in March 2011. Offline sensitivity tests are performed during this month and indicate that the differences between the forward models or the assimilation algorithms are much less important than the characteristics of the assimilated data sets. They also show that IFS-MOZART is able to deliver realistic analyses of ozone both in the troposphere and in the stratosphere, but this requires the assimilation of observations from nadir-looking instruments as well as the assimilation of profiles, which are well resolved vertically and extend into the lowermost stratosphere.
NASA Astrophysics Data System (ADS)
Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio
The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider the anatomical European human phantoms and plane-wave in the 2GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.
Scripting MODFLOW model development using Python and FloPy
Bakker, Mark; Post, Vincent E. A.; Langevin, Christian D.; Hughes, Joseph D.; White, Jeremy; Starn, Jeffrey; Fienen, Michael N.
2016-01-01
Graphical user interfaces (GUIs) are commonly used to construct and postprocess numerical groundwater flow and transport models. Scripting model development with the programming language Python is presented here as an alternative approach. One advantage of Python is that there are many packages available to facilitate the model development process, including packages for plotting, array manipulation, optimization, and data analysis. For MODFLOW-based models, the FloPy package was developed by the authors to construct model input files, run the model, and read and plot simulation results. Use of Python with the available scientific packages and FloPy facilitates data exploration, alternative model evaluations, and model analyses that can be difficult to perform with GUIs. Furthermore, Python scripts are a complete, transparent, and repeatable record of the modeling process. The approach is introduced with a simple FloPy example to create and postprocess a MODFLOW model. A more complicated capture-fraction analysis with a real-world model is presented to demonstrate the types of analyses that can be performed using Python and FloPy.
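The sketch below shows what a minimal scripted workflow of this kind can look like: build a small steady-state MODFLOW-2005 model with FloPy, run it, and read the simulated heads. It is a simplified illustration rather than the authors' examples; running it requires a MODFLOW executable (assumed here to be "mf2005" on the PATH), and the grid and properties are arbitrary.

```python
import numpy as np
import flopy

# Build a 1-layer, 10x10 steady-state model with a head drop across the grid.
m = flopy.modflow.Modflow(modelname="demo", exe_name="mf2005")
flopy.modflow.ModflowDis(m, nlay=1, nrow=10, ncol=10,
                         delr=100.0, delc=100.0, top=10.0, botm=0.0)
ibound = np.ones((1, 10, 10), dtype=int)
ibound[:, :, 0] = -1                        # fixed heads along the left edge
ibound[:, :, -1] = -1                       # fixed heads along the right edge
strt = np.full((1, 10, 10), 10.0)
strt[:, :, -1] = 5.0                        # lower head on the right edge
flopy.modflow.ModflowBas(m, ibound=ibound, strt=strt)
flopy.modflow.ModflowLpf(m, hk=10.0)        # hydraulic conductivity
flopy.modflow.ModflowPcg(m)                 # solver
flopy.modflow.ModflowOc(m)                  # output control

m.write_input()
success, _ = m.run_model(silent=True)       # needs the mf2005 executable
if success:
    heads = flopy.utils.HeadFile("demo.hds").get_data()
    print(heads[0, 0, :])                   # head gradient from 10 m down to 5 m
```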
Simulating land use changes in the Upper Narew catchment using the RegCM model
NASA Astrophysics Data System (ADS)
Liszewska, Malgorzata; Osuch, Marzena; Romanowicz, Renata
2010-05-01
Catchment hydrology is influenced by climate forcing in the form of precipitation, temperature, evapotranspiration and human interactions such as land use and water management practices. The difficulty in separating different causes of change in a hydrological regime results from the complexity of interactions between those three factors and catchment responses and the uncertainty and scarcity of available observations. This paper describes an application of a regional climate model to simulate the variability in precipitation, temperature, evaporation and discharge under different land use parameterizations, using the Upper Narew catchment (north-east Poland) as a case study. We use RegCM3 model, developed at the International Centre for Theoretical Physics, Trieste, Italy. The model's dynamic core is based on the hydrostatic version of the NCAR/PSU Mesoscale Model version 5 (primitive equations, hydrostatic, compressible, sigma-vertical coordinate). The physical input includes radiation transfer, large-scale and convective precipitation, Planetary Boundary Layer, biosphere. The RegCM3 model has options to interface with a variety of re-analyses and GCM boundary conditions, and can thus be used for scenario assessments. The variability of hydrological conditions in response to regional climate model projections is modeled using an integrated Data Based Mechanistic (DBM) rainfall-flow/flow-routing model of the Upper River Narew catchment. The modelling tool developed is formulated in the MATLAB-SIMULINK language. The basic system structure includes rainfall-flow and flow routing modules, based on a Stochastic Transfer Function (STF) approach combined with a nonlinear transformation of rainfall into effective rainfall. We analyse the signal resulting from modified land use in a given region. 10 month-long runs have been performed from February to November for the period of 1991-2000 based on the NCEP re-analyses. The land use data have been taken from the GLCC dataset and the Corine Land Cover programme (http://dataservice.eea.europa.eu/, GIOS, Poland). Simulations taking into account land use modifications in the catchment are compared with the reference simulations under no change in land use in the region. In the second part of the paper we discuss the application of the RegCM3 model in two climate change scenarios (SRES A2 and B1). The study is a contribution to the LUWR programme (http://luwr.igf.edu.pl).
Effect of Shear Deformation and Continuity on Delamination Modelling with Plate Elements
NASA Technical Reports Server (NTRS)
Glaessgen, E. H.; Riddell, W. T.; Raju, I. S.
1998-01-01
The effects of several critical assumptions and parameters on the computation of strain energy release rates for delamination and debond configurations modeled with plate elements have been quantified. The method of calculation is based on the virtual crack closure technique (VCCT) and on models that represent the upper and lower surfaces of the delamination or debond with two-dimensional (2D) plate elements rather than three-dimensional (3D) solid elements. The major advantages of the plate element modeling technique are a smaller model size and simpler geometric modeling. Specific issues discussed include: constraint of translational degrees of freedom, rotational degrees of freedom, or both in the neighborhood of the crack tip; element order and assumed shear deformation; and continuity of material properties and section stiffness in the vicinity of the debond front. Where appropriate, the plate element analyses are compared with corresponding two-dimensional plane strain analyses.
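As a reminder of the underlying VCCT arithmetic, the sketch below evaluates the mode-I strain energy release rate at one node pair from the crack-tip nodal force and the relative opening displacement behind the tip, G_I = F_z Δw / (2 Δa b); the numbers are illustrative, not taken from the paper.

```python
# Virtual crack closure technique (VCCT), mode-I component at one node pair along
# the delamination front. All values are illustrative.
F_z = 12.0      # nodal force at the crack-tip node, normal to the crack plane (N)
dw  = 2.0e-4    # relative opening displacement of the node pair behind the tip (m)
da  = 1.0e-3    # length of the element behind the crack tip (m)
b   = 5.0e-3    # width associated with the node along the front (m)

G_I = F_z * dw / (2.0 * da * b)   # strain energy release rate, J/m^2
print(f"G_I = {G_I:.1f} J/m^2")
```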
NASA Astrophysics Data System (ADS)
Di Ludovico, Donato; D'Ovidio, Gino
2017-10-01
This paper describes an interdisciplinary planning research approach that aims to combine urban aspects related to territorial spatial development with transport requirements connected to efficient and sustainable mobility. The proposed research method is based on the "Territorial Frames" (TFs) model, which derives from an original interpretation of the local context as a summation of territorial settlement fabrics characterized in terms of spatial tile, morphology and mobility axes. The TFs, each with its own autonomous size and structure, are used as the main plot, able to assemble the settlement systems and their post-urban forms. With a view to polycentric spatial development, the research method allows us to analyse the completeness of the TFs and their connective potential, in order to locate the missing or inefficient elements of the transportation network and to plan other TFs essential to supporting the economic and social development processes of the most isolated and disadvantaged inland areas. Finally, a case study of the Italian Median Macroregion configuration based on the TFs model approach is proposed, analysed and discussed.
Blueprint XAS: a Matlab-based toolbox for the fitting and analysis of XAS spectra.
Delgado-Jaime, Mario Ulises; Mewis, Craig Philip; Kennepohl, Pierre
2010-01-01
Blueprint XAS is a new Matlab-based program developed to fit and analyse X-ray absorption spectroscopy (XAS) data, most specifically in the near-edge region of the spectrum. The program is based on a methodology that introduces a novel background model into the complete fit model and that is capable of generating any number of independent fits with minimal introduction of user bias [Delgado-Jaime & Kennepohl (2010), J. Synchrotron Rad. 17, 119-128]. The functions and settings on the five panels of its graphical user interface are designed to suit the needs of near-edge XAS data analyzers. A batch function allows for the setting of multiple jobs to be run with Matlab in the background. A unique statistics panel allows the user to analyse a family of independent fits, to evaluate fit models and to draw statistically supported conclusions. The version introduced here (v0.2) is currently a toolbox for Matlab. Future stand-alone versions of the program will also incorporate several other new features to create a full package of tools for XAS data processing.
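To give a flavour of what fitting a near-edge spectrum with an explicit background model involves, the sketch below fits a synthetic spectrum with an error-function edge jump plus one Gaussian pre-edge peak using SciPy; this is a generic illustration in Python, not the Blueprint XAS Matlab code or its background methodology.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

# Illustrative near-edge fit: error-function edge jump (background) + Gaussian pre-edge peak.
def model(E, E0, width, jump, A, Ep, sigma):
    edge = jump * 0.5 * (1 + erf((E - E0) / width))
    peak = A * np.exp(-0.5 * ((E - Ep) / sigma) ** 2)
    return edge + peak

# synthetic "spectrum" with noise
E = np.linspace(2460, 2490, 300)
true = model(E, 2472.0, 1.5, 1.0, 0.6, 2470.0, 0.8)
rng = np.random.default_rng(3)
y = true + rng.normal(0, 0.01, E.size)

p0 = [2471.0, 2.0, 0.9, 0.4, 2469.5, 1.0]        # starting guesses
popt, pcov = curve_fit(model, E, y, p0=p0)
print("fitted edge position:", round(popt[0], 2), "eV")
print("fitted pre-edge area ~", round(popt[3] * popt[5] * np.sqrt(2 * np.pi), 3))
```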
NASA Astrophysics Data System (ADS)
Hubert, G.; Federico, C. A.; Pazianotto, M. T.; Gonzales, O. L.
2016-02-01
This paper describes the ACROPOL and OPD high-altitude stations devoted to characterizing atmospheric radiation fields. The ACROPOL platform, located at the summit of the Pic du Midi in the French Pyrenees at 2885 m above sea level, has operated scientific equipment since May 2011, including a BSS neutron spectrometer and detectors based on semiconductors and scintillators. In the framework of an IEAv and ONERA collaboration, a second neutron spectrometer has been operated since February 2015 at the summit of the Pico dos Dias in Brazil, at 1864 m above sea level. The two high-altitude platforms allow the long-term dynamics to be investigated, i.e. the spectral variation of cosmic-ray-induced neutrons and the effects of local and seasonal changes, as well as the short-term dynamics during solar flare events. This paper presents long- and short-term analyses, including measurement and modeling investigations considering data from both high-altitude stations. The modeling approach, based on the ATMORAD computational platform, was used to link the measurements from the two stations.
A simple geometrical model describing shapes of soap films suspended on two rings
NASA Astrophysics Data System (ADS)
Herrmann, Felix J.; Kilvington, Charles D.; Wildenberg, Rebekah L.; Camacho, Franco E.; Walecki, Wojciech J.; Walecki, Peter S.; Walecki, Eve S.
2016-09-01
We measured and analysed the stability of two types of soap films suspended on two rings using a simple conical-frusta-based model, where we use the common definition of a conical frustum as the portion of a cone that lies between two parallel planes cutting it. Using the frusta-based model, we reproduced the well-known results for catenoid surfaces with and without a central disk. We present for the first time a simple conical-frusta-based spreadsheet model of the soap surface. This very simple, elementary, geometrical model produces results that match the experimental data and the known exact analytical solutions surprisingly well. The experiment and the spreadsheet model can be used as a powerful teaching tool for pre-calculus and geometry students.
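The same frusta idea can be scripted rather than put in a spreadsheet: approximate the film by stacked conical frusta and minimise the total lateral area over the free intermediate radii. The sketch below is an illustrative Python version under invented ring geometry, not the authors' spreadsheet; for rings of equal radius the converged neck radius should approach the catenoid value.

```python
import numpy as np
from scipy.optimize import minimize

# Frusta model of a soap film between two coaxial rings of radius R separated by H:
# minimise the sum of frustum lateral areas pi*(r1+r2)*slant over the free radii.
R, H, N = 1.0, 1.0, 40                      # ring radius, separation, number of frusta
dz = H / N

def total_area(r_inner):
    r = np.concatenate(([R], r_inner, [R]))           # radii fixed at both rings
    slant = np.sqrt(np.diff(r) ** 2 + dz ** 2)
    return np.sum(np.pi * (r[:-1] + r[1:]) * slant)

r0 = np.full(N - 1, R)                                 # start from a cylinder
res = minimize(total_area, r0, method="L-BFGS-B",
               bounds=[(1e-3, 2 * R)] * (N - 1))
print("minimal area :", round(res.fun, 4))
print("neck radius  :", round(res.x[len(res.x) // 2], 4))  # compare with the catenoid neck (~0.85 here)
```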
Elander, J; Richardson, C; Morris, J; Robinson, G; Schofield, M B
2017-09-01
Motivational and behavioural models of adjustment to chronic pain make different predictions about change processes, which can be tested in longitudinal analyses. We examined changes in motivation, coping and acceptance among 78 men with chronic haemophilia-related joint pain. Using cross-lagged regression analyses of changes from baseline to 6 months as predictors of changes from 6 to 12 months, with supplementary structural equation modelling, we tested two models in which motivational changes influence behavioural changes, and one in which behavioural changes influence motivational changes. Changes in motivation to self-manage pain influenced later changes in pain coping, consistent with the motivational model of pain self-management, and also influenced later changes in activity engagement, the behavioural component of pain acceptance. Changes in activity engagement influenced later changes in pain willingness, consistent with the behavioural model of pain acceptance. Based on the findings, a combined model of changes in pain self-management and acceptance is proposed, which could guide combined interventions based on theories of motivation, coping and acceptance in chronic pain. This study adds longitudinal evidence about sequential change processes; a test of the motivational model of pain self-management; and tests of behavioural versus motivational models of pain acceptance. © 2017 European Pain Federation - EFIC®.
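A cross-lagged regression of the kind described reduces to regressing a later change score on the earlier change in the other variable while controlling for the earlier change in the same variable. The sketch below shows this on synthetic data with illustrative variable names; it is not the study's dataset or its structural equation models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic change scores: does the earlier change in motivation (baseline -> 6 months)
# predict the later change in coping (6 -> 12 months), controlling for earlier coping change?
rng = np.random.default_rng(7)
n = 78
d_motivation_1 = rng.normal(0, 1, n)                        # baseline -> 6 months
d_coping_1 = 0.3 * d_motivation_1 + rng.normal(0, 1, n)     # baseline -> 6 months
d_coping_2 = 0.4 * d_motivation_1 + 0.2 * d_coping_1 + rng.normal(0, 1, n)  # 6 -> 12 months
df = pd.DataFrame(dict(d_motivation_1=d_motivation_1,
                       d_coping_1=d_coping_1, d_coping_2=d_coping_2))

model = smf.ols("d_coping_2 ~ d_motivation_1 + d_coping_1", data=df).fit()
print(model.params)    # the d_motivation_1 coefficient is the cross-lagged effect of interest
print(model.pvalues)
```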
Alienating Curriculum Work in Australian Vocational Education and Training
ERIC Educational Resources Information Center
Hodge, Steven
2016-01-01
Competency-based training (CBT) is a curriculum model employed in educational sectors, professions and industries around the world. A significant feature of the model is its permeability to control by interests outside education. In this article, a "Neoliberal" version of CBT is described and analysed in the context of Australian…
EVALUATION AND SENSITIVITY ANALYSES RESULTS OF THE MESOPUFF II MODEL WITH CAPTEX MEASUREMENTS
The MESOPUFF II regional Lagrangian puff model has been evaluated and tested against measurements from the Cross-Appalachian Tracer Experiment (CAPTEX) data base in an effort to assess its ability to simulate the transport and dispersion of a nonreactive, nondepositing tracer plu...
Binquet, C; Abrahamowicz, M; Mahboubi, A; Jooste, V; Faivre, J; Bonithon-Kopp, C; Quantin, C
2008-12-30
Flexible survival models, which avoid assumptions about hazards proportionality (PH) or linearity of continuous covariate effects, bring the issues of model selection to a new level of complexity. Each 'candidate covariate' requires inter-dependent decisions regarding (i) its inclusion in the model, and representation of its effects on the log hazard as (ii) either constant over time or time-dependent (TD) and, for continuous covariates, (iii) either loglinear or non-loglinear (NL). Moreover, 'optimal' decisions for one covariate depend on the decisions regarding others. Thus, some efficient model-building strategy is necessary. We carried out an empirical study of the impact of the model selection strategy on the estimates obtained in flexible multivariable survival analyses of prognostic factors for mortality in 273 gastric cancer patients. We used 10 different strategies to select alternative multivariable parametric as well as spline-based models, allowing flexible modeling of non-parametric (TD and/or NL) effects. We employed 5-fold cross-validation to compare the predictive ability of alternative models. All flexible models indicated significant non-linearity and changes over time in the effect of age at diagnosis. Conventional 'parametric' models suggested the lack of a period effect, whereas more flexible strategies indicated a significant NL effect. Cross-validation confirmed that flexible models predicted mortality better. The resulting differences in the 'final model' selected by various strategies also had an impact on the risk prediction for individual subjects. Overall, our analyses underline (a) the importance of accounting for significant non-parametric effects of covariates and (b) the need for developing accurate model selection strategies for flexible survival analyses. Copyright 2008 John Wiley & Sons, Ltd.
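One of the elementary decisions above, whether a continuous covariate's effect is loglinear, can be checked with a likelihood-ratio test between nested Cox models. The sketch below uses a quadratic term as a simplified stand-in for the spline-based NL representations discussed, fitted with the lifelines library on synthetic data; it is not the authors' modeling strategy or software.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import chi2

# Synthetic survival data with a U-shaped (non-loglinear) age effect.
rng = np.random.default_rng(1)
n = 273
age = rng.uniform(40, 90, n)
hazard = np.exp(0.002 * (age - 65) ** 2)
time = rng.exponential(5.0 / hazard)
event = (time < 10).astype(int)
time = np.minimum(time, 10.0)                         # administrative censoring at 10 years
df = pd.DataFrame({"time": time, "event": event, "age": age,
                   "age2": (age - age.mean()) ** 2})

linear = CoxPHFitter().fit(df[["time", "event", "age"]], "time", "event")
nonlin = CoxPHFitter().fit(df[["time", "event", "age", "age2"]], "time", "event")
lr = 2 * (nonlin.log_likelihood_ - linear.log_likelihood_)   # likelihood-ratio statistic
print("LR statistic:", round(lr, 2), "p =", round(chi2.sf(lr, df=1), 4))
```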
Novel Multiscale Modeling Tool Applied to Pseudomonas aeruginosa Biofilm Formation
Biggs, Matthew B.; Papin, Jason A.
2013-01-01
Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool. PMID:24147108
Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.
Biggs, Matthew B; Papin, Jason A
2013-01-01
Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.
Using Toulmin analysis to analyse an instructor's proof presentation in abstract algebra
NASA Astrophysics Data System (ADS)
Fukawa-connelly, Timothy
2014-01-01
This paper provides a method for analysing undergraduate teaching of proof-based courses using Toulmin's model (1969) of argumentation. It presents a case study of one instructor's presentation of proofs. The analysis shows that the instructor presents different levels of detail in different proofs; thus, the students have an inconsistent set of written models for their work. Similarly, the analysis shows that the details the instructor says aloud differ from what she writes down. Although her verbal commentary provides additional detail and appears to have pedagogical value, for instance, by modelling thinking that supports proof writing, this value might be better realized if she were to change her teaching practices.
Economic evaluation of ezetimibe treatment in combination with statin therapy in the United States.
Davies, Glenn M; Vyas, Ami; Baxter, Carl A
2017-07-01
This study assessed the cost-effectiveness of ezetimibe with statin therapy vs statin monotherapy from a US payer perspective, assuming the impending patent expiration of ezetimibe. A Markov-like economic model consisting of 28 distinct health states was used. Model population data were obtained from US linked claims and electronic medical records, with inclusion criteria based on diagnostic guidelines. Inputs came from recent clinical trials, meta-analyses, and cost-effectiveness analyses. The base-case scenario was used to evaluate the cost-effectiveness of adding ezetimibe 10 mg to statin in patients aged 35-74 years with a history of coronary heart disease (CHD) and/or stroke, and with low-density lipoprotein cholesterol (LDL-C) levels ≥70 mg/dL over a lifetime horizon, assuming a 90% price reduction of ezetimibe after 1 year to take into account the impending patent expiration in the second quarter of 2017. Sub-group analyses included patients with LDL-C levels ≥100 mg/dL and patients with diabetes with LDL-C levels ≥70 mg/dL. The lifetime discounted incremental cost-effectiveness ratio (ICER) for ezetimibe added to statin was $9,149 per quality-adjusted life year (QALY) for the base-case scenario. For patients with LDL-C levels ≥100 mg/dL, the ICER was $839/QALY; for those with diabetes and LDL-C levels ≥70 mg/dL, it was $560/QALY. One-way sensitivity analyses showed that the model was sensitive to changes in cost of ezetimibe, rate reduction of non-fatal CHD, and utility weight for non-fatal CHD in the base-case and sub-group analyses. Indirect costs or treatment discontinuation estimation were not included. Compared with statin monotherapy, ezetimibe with statin therapy was cost-effective for secondary prevention of CHD and stroke and for primary prevention of these conditions in patients whose LDL-C levels are ≥100 mg/dL and in patients with diabetes, taking into account a 90% cost reduction for ezetimibe.
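The headline quantity in such an analysis is the incremental cost-effectiveness ratio (ICER), the incremental cost divided by the incremental QALYs. The sketch below computes an ICER and sweeps the annual drug cost as a simple one-way sensitivity analysis; all totals and the treatment duration are invented and are not the study's model outputs.

```python
# Schematic ICER calculation with a one-way sensitivity sweep over annual drug cost.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# hypothetical lifetime discounted totals for statin alone
cost_statin, qaly_statin = 60_000.0, 9.50
# hypothetical incremental QALY gain for ezetimibe + statin
qaly_combo = 9.62

for annual_drug_cost in (1200.0, 600.0, 120.0):        # e.g. before vs after a price reduction
    cost_combo = cost_statin + 10 * annual_drug_cost   # hypothetical 10 years on the drug
    print(f"drug cost {annual_drug_cost:7.0f}/yr -> ICER = "
          f"{icer(cost_combo, qaly_combo, cost_statin, qaly_statin):,.0f} per QALY")
```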
Canonical decomposition of magnetotelluric responses: Experiment on 1D anisotropic structures
NASA Astrophysics Data System (ADS)
Guo, Ze-qiu; Wei, Wen-bo; Ye, Gao-feng; Jin, Sheng; Jing, Jian-en
2015-08-01
Horizontal electrical heterogeneity of the subsurface originates mostly from structural complexity and electrical anisotropy, and local near-surface electrical heterogeneity can severely distort regional electromagnetic responses. Conventional distortion analyses for magnetotelluric soundings are primarily physical decomposition methods with respect to isotropic models, which mostly presume that the geoelectric distribution of geological structures follows local and regional patterns represented by 3D/2D models. Because of the widespread anisotropy of earth media, the confusion between 1D anisotropic responses and 2D isotropic responses, and the shortcomings of physical decomposition methods, we propose to conduct modeling experiments with canonical decomposition in terms of 1D layered anisotropic models. Canonical decomposition is a mathematical decomposition method based on eigenstate analyses, as distinct from distortion analyses, and can be used to recover electrical information such as strike directions and maximum and minimum conductivities. We tested this method with numerical simulation experiments on several 1D synthetic models, which showed that canonical decomposition is quite effective in revealing geological anisotropic information. Finally, given the background of anisotropy indicated by previous geological and seismological studies, canonical decomposition is applied to real data acquired in the North China Craton for 1D anisotropy analyses, and the result shows that, with effective modeling and cautious interpretation, canonical decomposition can be another good method to detect the anisotropy of geological media.
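As a rough illustration of an eigenstate-style analysis, the sketch below applies a singular value decomposition to a synthetic 2×2 complex impedance tensor: the singular values give the maximum and minimum impedance magnitudes, from which apparent resistivities follow as |Z|²/(ωμ0). This is a simplified stand-in for canonical decomposition, with synthetic values and SI units (field-unit conversion factors omitted), not the authors' processing code.

```python
import numpy as np

# Synthetic 2x2 complex magnetotelluric impedance tensor (SI-style units, illustrative).
Z = np.array([[0.2 + 0.1j, 5.0 + 4.0j],
              [-3.0 - 2.5j, -0.1 - 0.05j]])

U, s, Vh = np.linalg.svd(Z)              # eigenstate-style decomposition via the SVD
f = 0.01                                 # frequency, Hz (synthetic)
omega, mu0 = 2 * np.pi * f, 4e-7 * np.pi

rho_max = s[0] ** 2 / (omega * mu0)      # apparent resistivity of the maximum state
rho_min = s[1] ** 2 / (omega * mu0)      # apparent resistivity of the minimum state
print(f"max/min impedance magnitudes: {s[0]:.2f}, {s[1]:.2f}")
print(f"apparent resistivities      : {rho_max:.1f}, {rho_min:.1f} ohm*m")
# The columns of U and rows of Vh carry the electric/magnetic polarisation states,
# from which preferred (strike-related) directions can be estimated.
```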
Quantitative Measures of Immersion in Cloud and the Biogeography of Cloud Forests
NASA Technical Reports Server (NTRS)
Lawton, R. O.; Nair, U. S.; Ray, D.; Regmi, A.; Pounds, J. A.; Welch, R. M.
2010-01-01
Sites described as tropical montane cloud forests differ greatly, in part because observers tend to differ in their opinion as to what constitutes frequent and prolonged immersion in cloud. This definitional difficulty interferes with hydrologic analyses, assessments of environmental impacts on ecosystems, and biogeographical analyses of cloud forest communities and species. Quantitative measurements of cloud immersion can be obtained on site, but the observations are necessarily spatially limited, although well-placed observers can examine 10–50 km of a mountain range under rainless conditions. Regional analyses, however, require observations at a broader scale. This chapter discusses remote sensing and modeling approaches that can provide quantitative measures of the spatiotemporal patterns of cloud cover and cloud immersion in tropical mountain ranges. These approaches integrate remote sensing tools of various spatial resolutions and frequencies of observation, digital elevation models, regional atmospheric models, and ground-based observations to provide measures of cloud cover, cloud base height, and the intersection of cloud and terrain. This combined approach was applied to the Monteverde region of northern Costa Rica to illustrate how the proportion of time the forest is immersed in cloud may vary spatially and temporally. The observed spatial variation was largely due to patterns of airflow over the mountains. The temporal variation reflected the diurnal rise and fall of the orographic cloud base, which was influenced in turn by synoptic weather conditions, the seasonal movement of the Intertropical Convergence Zone and the north-easterly trade winds. Knowledge of the proportion of the time that sites are immersed in clouds should facilitate ecological comparisons and biogeographical analyses, as well as land use planning and hydrologic assessments in areas where intensive on-site work is not feasible.
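The intersection of cloud and terrain described above lends itself to a simple grid calculation. The sketch below, under the assumption of synthetic DEM and cloud-base/cloud-top fields, estimates the fraction of time each cell is immersed; all arrays and values are illustrative, not Monteverde data.

```python
# Minimal sketch: fraction of time each terrain cell lies between cloud base and
# cloud top. All fields are synthetic stand-ins for DEM and model/satellite output.
import numpy as np

rng = np.random.default_rng(0)
n_times, ny, nx = 500, 40, 60
dem = rng.uniform(800.0, 1900.0, size=(ny, nx))              # terrain elevation (m)
cloud_base = rng.uniform(1200.0, 1700.0, size=(n_times, 1, 1)) * np.ones((1, ny, nx))
cloud_top = cloud_base + rng.uniform(200.0, 800.0, size=(n_times, 1, 1))
cloudy = rng.random((n_times, ny, nx)) < 0.6                 # cloud present at all?

# A cell is "immersed" when cloud is present and the terrain surface sits
# between the cloud base and the cloud top.
immersed = cloudy & (dem >= cloud_base) & (dem <= cloud_top)
immersion_fraction = immersed.mean(axis=0)                   # fraction of time per cell
print(immersion_fraction.shape, round(float(immersion_fraction.max()), 3))
```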
McGuire, A.D.; Christensen, T.R.; Hayes, D.; Heroult, A.; Euskirchen, E.; Yi, Y.; Kimball, J.S.; Koven, C.; Lafleur, P.; Miller, P.A.; Oechel, W.; Peylin, P.; Williams, M.
2012-01-01
Although arctic tundra has been estimated to cover only 8% of the global land surface, the large and potentially labile carbon pools currently stored in tundra soils have the potential for large emissions of carbon (C) under a warming climate. These emissions as radiatively active greenhouse gases in the form of both CO2 and CH4 could amplify global warming. Given the potential sensitivity of these ecosystems to climate change and the expectation that the Arctic will experience appreciable warming over the next century, it is important to assess whether responses of C exchange in tundra regions are likely to enhance or mitigate warming. In this study we compared analyses of C exchange of Arctic tundra between 1990–1999 and 2000–2006 among observations, regional and global applications of process-based terrestrial biosphere models, and atmospheric inversion models. Syntheses of the compilation of flux observations and of inversion model results indicate that the annual exchange of CO2 between arctic tundra and the atmosphere has large uncertainties that cannot be distinguished from neutral balance. The mean estimate from an ensemble of process-based model simulations suggests that arctic tundra acted as a sink for atmospheric CO2 in recent decades, but based on the uncertainty estimates it cannot be determined with confidence whether these ecosystems represent a weak or a strong sink. Tundra was 0.6 °C warmer in the 2000s compared to the 1990s. The central estimates of the observations, process-based models, and inversion models each identify stronger sinks in the 2000s compared with the 1990s. Similarly, the observations and the applications of regional process-based models suggest that CH4 emissions from arctic tundra have increased from the 1990s to 2000s. Based on our analyses of the estimates from observations, process-based models, and inversion models, we estimate that arctic tundra was a sink for atmospheric CO2 of 110 Tg C yr-1 (uncertainty between a sink of 291 Tg C yr-1 and a source of 80 Tg C yr-1) and a source of CH4 to the atmosphere of 19 Tg C yr-1 (uncertainty between sources of 8 and 29 Tg C yr-1). The suite of analyses conducted in this study indicate that it is clearly important to reduce uncertainties in the observations, process-based models, and inversions in order to better understand the degree to which Arctic tundra is influencing atmospheric CO2 and CH4 concentrations. The reduction of uncertainties can be accomplished through (1) the strategic placement of more CO2 and CH4 monitoring stations to reduce uncertainties in inversions, (2) improved observation networks of ground-based measurements of CO2 and CH4 exchange to understand exchange in response to disturbance and across gradients of hydrological variability, and (3) the effective transfer of information from enhanced observation networks into process-based models to improve the simulation of CO2 and CH4 exchange from arctic tundra to the atmosphere.
Finite Element Model Optimization of the FalconSAT-5 Structural Engineering Model
2009-03-01
for coupled loads analyses. To develop the FE tuning process, this research focuses on the United States Air Force Academy (USAFA) FalconSAT-5 SEM II...Kirtland Air Force Base (KAFB) were sufficient for design engineers to ensure compliance with launch loads. However, for the coupled loads analysis...
Wei, Ching-Yun; Quek, Ruben G W; Villa, Guillermo; Gandra, Shravanthi R; Forbes, Carol A; Ryder, Steve; Armstrong, Nigel; Deshpande, Sohan; Duffy, Steven; Kleijnen, Jos; Lindgren, Peter
2017-03-01
Previous reviews have evaluated economic analyses of lipid-lowering therapies using lipid levels as surrogate markers for cardiovascular disease. However, drug approval and health technology assessment agencies have stressed that surrogates should only be used in the absence of clinical endpoints. The aim of this systematic review was to identify and summarise the methodologies, weaknesses and strengths of economic models based on atherosclerotic cardiovascular disease event rates. Cost-effectiveness evaluations of lipid-lowering therapies using cardiovascular event rates in adults with hyperlipidaemia were sought in Medline, Embase, Medline In-Process, PubMed and NHS EED and conference proceedings. Search results were independently screened, extracted and quality checked by two reviewers. Searches until February 2016 retrieved 3443 records, from which 26 studies (29 publications) were selected. Twenty-two studies evaluated secondary prevention (four also assessed primary prevention), two considered only primary prevention and two included mixed primary and secondary prevention populations. Most studies (18) based treatment-effect estimates on single trials, although more recent evaluations deployed meta-analyses (5/10 over the last 10 years). Markov models (14 studies) were most commonly used and only one study employed discrete event simulation. Models varied particularly in terms of health states and treatment-effect duration. No studies used a systematic review to obtain utilities. Most studies took a healthcare perspective (21/26) and sourced resource use from key trials instead of local data. Overall, reporting quality was suboptimal. This review reveals methodological changes over time, but reporting weaknesses remain, particularly with respect to transparency of model reporting.
Liou, Shwu-Ru
2009-01-01
To systematically analyse the Organizational Commitment model and Theory of Reasoned Action and determine concepts that can better explain nurses' intention to leave their job. The Organizational Commitment model and Theory of Reasoned Action have been proposed and applied to understand intention to leave and turnover behaviour, which are major contributors to nursing shortage. However, the appropriateness of applying these two models in nursing was not analysed. Three main criteria of a useful model were used for the analysis: consistency in the use of concepts, testability and predictability. Both theories use concepts consistently. Concepts in the Theory of Reasoned Action are defined broadly whereas they are operationally defined in the Organizational Commitment model. Predictability of the Theory of Reasoned Action is questionable whereas the Organizational Commitment model can be applied to predict intention to leave. A model was proposed based on this analysis. Organizational commitment, intention to leave, work experiences, job characteristics and personal characteristics can be concepts for predicting nurses' intention to leave. Nursing managers may consider nurses' personal characteristics and experiences to increase their organizational commitment and enhance their intention to stay. Empirical studies are needed to test and cross-validate the re-synthesized model for nurses' intention to leave their job.
ERIC Educational Resources Information Center
Mitchell, James K.; Carter, William E.
2000-01-01
Describes using a computer statistical software package called Minitab to model the sensitivity of several microbes to the disinfectant NaOCl (Clorox) using the Kirby-Bauer technique. Each group of students collects data from one microbe, conducts regression analyses, then chooses the best-fit model based on the highest r-values obtained.…
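As a rough illustration of the model-selection step described (fit several candidate regressions and keep the best fit by r value), here is a hedged Python sketch; the concentrations, zone diameters, and candidate transformations are hypothetical, and the original exercise used Minitab rather than Python.

```python
# Hedged sketch: compare candidate regressions of inhibition-zone diameter on
# disinfectant concentration and keep the model with the highest r-squared.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # % NaOCl (hypothetical)
zone = np.array([6.0, 9.5, 12.0, 15.5, 17.0])     # zone diameter, mm (hypothetical)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

candidates = {
    "linear":      conc,
    "log-linear":  np.log(conc),
    "square-root": np.sqrt(conc),
}
fits = {}
for name, x in candidates.items():
    slope, intercept = np.polyfit(x, zone, 1)     # simple least-squares line
    fits[name] = r_squared(zone, slope * x + intercept)

best = max(fits, key=fits.get)
print(fits, "best:", best)
```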
PESTEL Model Analysis and Legal Guarantee of Tourism Environmental Protection in China
NASA Astrophysics Data System (ADS)
Zhiyong, Xian
2017-08-01
After summarizing the general situation of tourism environmental protection in China, this paper analyses the macro factors affecting tourism environmental protection using the PESTEL model. Building on that analysis, it explores paths for improving tourism environmental protection, and finally puts forward suggestions for the legal guarantee of tourism environmental protection.
ERIC Educational Resources Information Center
Kennedy, Eileen; Laurillard, Diana; Horan, Bernard; Charlton, Patricia
2015-01-01
This article reports on a design-based research project to create a modelling tool to analyse the costs and learning benefits involved in different modes of study. The Course Resource Appraisal Model (CRAM) provides accurate cost-benefit information so that institutions are able to make more meaningful decisions about which kind of…
NASA Astrophysics Data System (ADS)
Zielnica, J.; Ziółkowski, A.; Cempel, C.
2003-03-01
The objective of the paper is the design and the theoretical and experimental investigation of vibroisolation pads with non-linear static and dynamic responses. The analytical investigations are based on non-linear finite element analysis, in which the load-deflection response is traced against the shape and material properties of the analysed vibroisolation pad model. A new antisymmetrical vibroisolation pad was designed and analysed by the finite element method using second-order theory (large displacements and strains) and material non-linearity (Mooney-Rivlin model). The stability-loss phenomenon was exploited in the design of the vibroisolators, and it was shown that a vibroisolator in the form of a continuous pad with the non-linear static and dynamic response typical of vibroisolation purposes can be designed. The materials considered for the vibroisolator are rubber, elastomers, and similar materials. The theoretical results were examined experimentally: a series of models made of soft rubber were designed for the tests, and the experimental investigations of the vibroisolation models under static and dynamic loads confirmed the results of the FEM analysis.
Effectiveness of a worksite mindfulness-based multi-component intervention on lifestyle behaviors
2014-01-01
Introduction Overweight and obesity are associated with an increased risk of morbidity. Mindfulness training could be an effective strategy to optimize lifestyle behaviors related to body weight gain. The aim of this study was to evaluate the effectiveness of a worksite mindfulness-based multi-component intervention on vigorous physical activity in leisure time, sedentary behavior at work, fruit intake and determinants of these behaviors. The control group received information on existing lifestyle behavior-related facilities that were already available at the worksite. Methods In a randomized controlled trial design (n = 257), 129 workers received a mindfulness training, followed by e-coaching, lunch walking routes and fruit. Outcome measures were assessed at baseline and after 6 and 12 months using questionnaires. Physical activity was also measured using accelerometers. Effects were analyzed using linear mixed effect models according to the intention-to-treat principle. Linear regression models (complete case analyses) were used as sensitivity analyses. Results There were no significant differences in lifestyle behaviors and determinants of these behaviors between the intervention and control group after 6 or 12 months. The sensitivity analyses showed effect modification for gender in sedentary behavior at work at 6-month follow-up, although the main analyses did not. Conclusions This study did not show an effect of a worksite mindfulness-based multi-component intervention on lifestyle behaviors and behavioral determinants after 6 and 12 months. The effectiveness of a worksite mindfulness-based multi-component intervention as a health promotion intervention for all workers could not be established. PMID:24467802
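A hedged sketch of the kind of linear mixed-effects analysis described (repeated measures nested within participants, with an intervention-by-time effect) is shown below, using statsmodels rather than whatever package the trial used; the data frame and column names (id, arm, time, fruit_intake) are simulated stand-ins, not trial data.

```python
# Minimal sketch of a random-intercept linear mixed model for repeated measures
# by treatment arm; all data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, waves = 60, 3
df = pd.DataFrame({
    "id": np.repeat(np.arange(n), waves),
    "arm": np.repeat(rng.integers(0, 2, n), waves),   # 0 = control, 1 = intervention
    "time": np.tile([0, 6, 12], n),                   # months
})
subject_effect = np.repeat(rng.normal(0.0, 1.0, n), waves)
df["fruit_intake"] = (2.0 + 0.02 * df["time"] + 0.10 * df["arm"] * df["time"]
                      + subject_effect + rng.normal(0.0, 0.8, len(df)))

# Random intercept per participant; the arm-by-time interaction carries the
# intervention effect over follow-up.
model = smf.mixedlm("fruit_intake ~ arm * time", df, groups=df["id"])
result = model.fit()
print(result.summary())
```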
An empirical study using permutation-based resampling in meta-regression
2012-01-01
Background In meta-regression, as the number of trials in the analyses decreases, the risk of false positives or false negatives increases. This is partly due to the assumption of normality that may not hold in small samples. Creation of a distribution from the observed trials using permutation methods to calculate P values may allow for less spurious findings. Permutation has not been empirically tested in meta-regression. The objective of this study was to perform an empirical investigation to explore the differences in results for meta-analyses on a small number of trials using standard large sample approaches versus permutation-based methods for meta-regression. Methods We isolated a sample of randomized controlled clinical trials (RCTs) for interventions that have a small number of trials (herbal medicine trials). Trials were then grouped by herbal species and condition and assessed for methodological quality using the Jadad scale, and data were extracted for each outcome. Finally, we performed meta-analyses on the primary outcome of each group of trials and meta-regression for methodological quality subgroups within each meta-analysis. We used large sample methods and permutation methods in our meta-regression modeling. We then compared final models and final P values between methods. Results We collected 110 trials across 5 intervention/outcome pairings and 5 to 10 trials per covariate. When applying large sample methods and permutation-based methods in our backwards stepwise regression the covariates in the final models were identical in all cases. The P values for the covariates in the final model were larger in 78% (7/9) of the cases for permutation and identical for 22% (2/9) of the cases. Conclusions We present empirical evidence that permutation-based resampling may not change final models when using backwards stepwise regression, but may increase P values in meta-regression of multiple covariates for a relatively small number of trials. PMID:22587815
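The permutation idea can be illustrated compactly: shuffle the covariate across trials and refit the meta-regression coefficient many times to build a null distribution. The sketch below uses a simplified fixed-effect weighted least-squares slope and simulated trial-level data; it is only an illustration of the technique, not the authors' analysis.

```python
# Hedged sketch of a permutation p-value for one meta-regression covariate
# (here, a Jadad quality score). Trial effects and weights are simulated.
import numpy as np

rng = np.random.default_rng(2)
k = 8                                           # small number of trials
effect = rng.normal(0.3, 0.2, k)                # observed trial effect sizes
jadad = rng.integers(1, 6, k).astype(float)     # methodological quality scores
weights = 1.0 / rng.uniform(0.01, 0.05, k)      # inverse-variance weights

def wls_slope(x, y, w):
    """Weighted least-squares slope of y on x."""
    xm = np.average(x, weights=w)
    ym = np.average(y, weights=w)
    return np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)

observed = wls_slope(jadad, effect, weights)

n_perm = 5000
perm = np.array([wls_slope(rng.permutation(jadad), effect, weights)
                 for _ in range(n_perm)])
p_value = (np.sum(np.abs(perm) >= abs(observed)) + 1) / (n_perm + 1)
print(f"slope = {observed:.3f}, permutation p = {p_value:.3f}")
```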
Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments
NASA Technical Reports Server (NTRS)
Manning, Ted A.; Lawrence, Scott L.
2014-01-01
As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.
Zeng, Fangfang; Li, Zhongtao; Yu, Xiaoling; Zhou, Linuo
2013-01-01
Background This study aimed to develop the artificial neural network (ANN) and multivariable logistic regression (LR) analyses for prediction modeling of cardiovascular autonomic (CA) dysfunction in the general population, and compare the prediction models using the two approaches. Methods and Materials We analyzed a previous dataset based on a Chinese population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN and LR analysis, and were tested in the validation set. Performances of these prediction models were then compared. Results Univariate analysis indicated that 14 risk factors showed statistically significant association with the prevalence of CA dysfunction (P<0.05). The mean area under the receiver-operating curve was 0.758 (95% CI 0.724–0.793) for LR and 0.762 (95% CI 0.732–0.793) for ANN analysis, but a noninferiority result was found (P<0.001). Similar results were found in comparisons of sensitivity, specificity, and predictive values in the prediction models between the LR and ANN analyses. Conclusion The prediction models for CA dysfunction were developed using ANN and LR. ANN and LR are two effective tools for developing prediction models based on our dataset. PMID:23940593
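A minimal sketch of the comparison described, fitting logistic regression and a small neural network to the same predictors and comparing ROC AUCs with scikit-learn, is shown below; the synthetic data and 14-feature setup are only loose stand-ins for the cohort variables.

```python
# Hedged sketch: compare LR and a small ANN by area under the ROC curve on
# synthetic data standing in for the 14 candidate risk factors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=14, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

for name, clf in [("LR", lr), ("ANN", ann)]:
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```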
Reduced-Order Modeling: Cooperative Research and Development at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Beran, Philip S.; Cesnik, Carlos E. S.; Guendel, Randal E.; Kurdila, Andrew; Prazenica, Richard J.; Librescu, Liviu; Marzocca, Piergiovanni; Raveh, Daniella E.
2001-01-01
Cooperative research and development activities at the NASA Langley Research Center (LaRC) involving reduced-order modeling (ROM) techniques are presented. Emphasis is given to reduced-order methods and analyses based on Volterra series representations, although some recent results using Proper Orthogonal Decomposition (POD) are discussed as well. Results are reported for a variety of computational and experimental nonlinear systems to provide clear examples of the use of reduced-order models, particularly within the field of computational aeroelasticity. The need for and the relative performance (speed, accuracy, and robustness) of reduced-order modeling strategies are documented. The development of unsteady aerodynamic state-space models directly from computational fluid dynamics analyses is presented in addition to analytical and experimental identifications of Volterra kernels. Finally, future directions for this research activity are summarized.
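As a small aside on one of the techniques named above, proper orthogonal decomposition reduces in practice to a singular value decomposition of a snapshot matrix. The sketch below builds a synthetic snapshot matrix and extracts a reduced basis; it is illustrative only and unrelated to the LaRC data.

```python
# Hedged sketch of POD via SVD of a (degrees of freedom x snapshots) matrix;
# the snapshot field is synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_dof, n_snapshots = 400, 60
t = np.linspace(0.0, 1.0, n_snapshots)
x = np.linspace(0.0, 1.0, n_dof)[:, None]
snapshots = (np.sin(2 * np.pi * x) * np.cos(4 * np.pi * t)
             + 0.3 * np.sin(6 * np.pi * x) * np.sin(2 * np.pi * t)
             + 0.01 * rng.standard_normal((n_dof, n_snapshots)))

# Columns of U are the POD modes; singular values rank their energy content.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_modes = int(np.searchsorted(energy, 0.99) + 1)
reduced_basis = U[:, :n_modes]
print(f"{n_modes} modes capture 99% of the snapshot energy")
```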
Based on the National Academy of Science’s 2004 report, “Air Quality Management in the United States”, the National Research Council (NRC) recommended to the US Environmental Protection Agency (EPA) that standard setting, planning, and control strategy development should be based...
Confirmatory Factor Analyses Comparing Parental Involvement Frameworks with Secondary Students
ERIC Educational Resources Information Center
Duppong Hurley, Kristin; Lambert, Matthew C.; January, Stacy-Ann A.; Huscroft D'Angelo, Jacqueline
2017-01-01
Given the lack of research on measurement models used to operationalize parental involvement with secondary students, the goal of this research is to examine the measurement properties of the three-domain conceptualization of parental involvement including school-based involvement, home-based involvement, and academic socialization, compared to a…
User's guide: RPGrow$: a red pine growth and analysis spreadsheet for the Lake States.
Carol A. Hyldahl; Gerald H. Grossman
1993-01-01
Describes RPGrow$, a stand-level, interactive spreadsheet for projecting growth and yield and estimating financial returns of red pine plantations in the Lake States. This spreadsheet is based on published growth models for red pine. Financial analyses are based on discounted cash flow methods.
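The financial side of such a tool rests on discounted cash flow arithmetic. A minimal sketch follows, with an invented planting/thinning/harvest cash-flow stream and discount rate; the spreadsheet's published growth and yield models are not reproduced here.

```python
# Hedged sketch of net present value from discounted cash flows; the cash-flow
# stream and 4% discount rate are hypothetical.
def npv(rate, cash_flows):
    """Net present value of cash flows indexed by year (year 0 undiscounted)."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cash_flows))

# Hypothetical plantation: establishment cost now, thinning revenue at year 25,
# final harvest at year 50 (values in $/acre).
flows = [-500.0] + [0.0] * 24 + [300.0] + [0.0] * 24 + [4000.0]
print(f"NPV at 4%: {npv(0.04, flows):.2f} $/acre")
```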
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard, M.A.; Sommer, S.C.
1995-04-01
AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.
Optimization of Nd: YAG Laser Marking of Alumina Ceramic Using RSM And ANN
NASA Astrophysics Data System (ADS)
Peter, Josephine; Doloi, B.; Bhattacharyya, B.
2011-01-01
The present research paper deals with artificial neural network (ANN)- and response surface methodology (RSM)-based mathematical modeling and an optimization analysis of laser marking characteristics on alumina ceramic. The experiments have been planned and carried out based on Design of Experiments (DOE). The paper also analyses the influence of the major laser marking process parameters, and the optimal combination of laser marking process parameter settings has been obtained. The RSM-based optimal settings are validated through experimentation and the ANN predictive model. Good agreement is observed between the results of the ANN predictive model and the actual experimental observations.
NASA Astrophysics Data System (ADS)
Dæhli, Lars Edvard Bryhni; Morin, David; Børvik, Tore; Hopperstad, Odd Sture
2017-10-01
Numerical unit cell models of an approximative representative volume element for a porous ductile solid are utilized to investigate differences in the mechanical response between a quadratic and a non-quadratic matrix yield surface. A Hershey equivalent stress measure with two distinct values of the yield surface exponent is employed as the matrix description. Results from the unit cell calculations are further used to calibrate a heuristic extension of the Gurson model which incorporates effects of the third deviatoric stress invariant. An assessment of the porous plasticity model reveals its ability to describe the unit cell response to some extent, although it underestimates the effect of the Lode parameter at the lower triaxiality ratios imposed in this study when compared with the unit cell simulations. Ductile failure predictions by means of finite element simulations using a unit cell model that resembles an imperfection band are then conducted to examine how the non-quadratic matrix yield surface influences the failure strain as compared to the quadratic matrix yield surface. Further, strain localization predictions based on bifurcation analyses and imperfection band analyses are undertaken using the calibrated porous plasticity model. These simulations are then compared to the unit cell calculations in order to elucidate the differences between the various modelling strategies. The current study reveals that strain localization analyses using an imperfection band model and a spatially discretized unit cell are in reasonable agreement, while the bifurcation analyses predict higher strain levels at localization. Imperfection band analyses are finally used to calculate failure loci for the quadratic and the non-quadratic matrix yield surface under a wide range of loading conditions. The underlying matrix yield surface is demonstrated to have a pronounced influence on the onset of strain localization.
Assessing uncertainty in published risk estimates using ...
Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective is to characterize model uncertainty by evaluating estimates across published epidemiologic studies of the same cohort. Methods: This analysis was based on 5 studies analyzing a cohort of 2,357 workers employed from 1950-74 in a chromate production plant in Maryland. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag; however, other latency periods and model covariates such as age and race were considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric to compare variability within and between model forms. A total of 5 similarly parameterized analyses were considered across model form, and 16 analyses with alternative parameterizations were considered within model form (10 Cox; 6 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients (betas) for 5 similar analyses ranged from 2.47 to 4.33 (mean=2.97, σ2=0.63). Within the 10 Cox models, coefficients ranged from 2.53 to 4.42 (mean=3.29, σ2=0.
Harrison, Luke B; Larsson, Hans C E
2015-03-01
Likelihood-based methods are commonplace in phylogenetic systematics. Although much effort has been directed toward likelihood-based models for molecular data, comparatively less work has addressed models for discrete morphological character (DMC) data. Among-character rate variation (ACRV) may confound phylogenetic analysis, but there have been few analyses of the magnitude and distribution of rate heterogeneity among DMCs. Using 76 data sets covering a range of plants and invertebrate and vertebrate animals, we used a modified version of MrBayes to test equal, gamma-distributed and lognormally distributed models of ACRV, integrating across phylogenetic uncertainty using Bayesian model selection. We found that in approximately 80% of data sets, unequal-rates models outperformed equal-rates models, especially among larger data sets. Moreover, although most data sets were equivocal, more data sets favored the lognormal rate distribution relative to the gamma rate distribution, lending some support for more complex character correlations than in molecular data. Parsimony estimation of the underlying rate distributions in several data sets suggests that the lognormal distribution is preferred when there are many slowly evolving characters and fewer quickly evolving characters. The commonly adopted four rate category discrete approximation used for molecular data was found to be sufficient to approximate a gamma rate distribution with discrete characters. However, among the two data sets tested that favored a lognormal rate distribution, the continuous distribution was better approximated with at least eight discrete rate categories. Although the effect of rate model on the estimation of topology was difficult to assess across all data sets, it appeared relatively minor between the unequal-rates models for the one data set examined carefully. As in molecular analyses, we argue that researchers should test and adopt the most appropriate model of rate variation for the data set in question. As discrete characters are increasingly used in more sophisticated likelihood-based phylogenetic analyses, it is important that these studies be built on the most appropriate and carefully selected underlying models of evolution.
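The discrete approximation mentioned above (a handful of equal-probability rate categories) can be sketched directly. The example below uses category medians of mean-one gamma and lognormal distributions, which is one common convention; the shape parameters and category counts are illustrative.

```python
# Hedged sketch of discretizing a continuous among-character rate distribution
# into k equal-probability categories, using category medians rescaled to mean 1.
import numpy as np
from scipy import stats

def discrete_rates(dist, k):
    """Median rate of each of k equal-probability categories, rescaled to mean 1."""
    quantiles = (2 * np.arange(k) + 1) / (2 * k)
    rates = dist.ppf(quantiles)
    return rates / rates.mean()

alpha = 0.5                                    # illustrative gamma shape
gamma_rates = discrete_rates(stats.gamma(a=alpha, scale=1.0 / alpha), k=4)
lognorm_rates = discrete_rates(stats.lognorm(s=1.0, scale=np.exp(-0.5)), k=8)
print("gamma, 4 categories:    ", np.round(gamma_rates, 3))
print("lognormal, 8 categories:", np.round(lognorm_rates, 3))
```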
NASA Astrophysics Data System (ADS)
Pignalosa, Antonio; Di Crescenzo, Giuseppe; Marino, Ermanno; Terracciano, Rosario; Santo, Antonio
2015-04-01
The work presented here concerns a case study in which a complete multidisciplinary workflow has been applied for an extensive assessment of rockslide susceptibility and hazard in a common scenario: vertical, fractured rocky cliffs. The studied area is located in a high-relief zone in Southern Italy (Sacco, Salerno, Campania), characterized by wide vertical rocky cliffs formed by tectonized thick successions of shallow-water limestones. The study comprised the following phases: a) topographic surveying integrating 3d laser scanning, photogrammetry and GNSS; b) geological surveying, characterization of single instabilities and geomechanical surveying, conducted by geologist rock climbers; c) processing of 3d data and reconstruction of high resolution geometrical models; d) structural and geomechanical analyses; e) data filing in a GIS-based spatial database; f) geo-statistical and spatial analyses and mapping of the whole set of data; g) 3D rockfall analysis. The main goals of the study have been a) to set up an investigation method that achieves a complete and thorough characterization of the slope stability conditions and b) to provide a detailed basis for an accurate definition of the reinforcement and mitigation systems. For these purposes the most up-to-date methods of field surveying, remote sensing, 3d modelling and geospatial data analysis have been integrated in a systematic workflow, accounting for the economic sustainability of the whole project. A novel integrated approach has been applied, fusing deterministic and statistical surveying methods. This approach made it possible to deal with the wide extent of the studied area (nearly 200,000 m2) without compromising the high accuracy of the results. The deterministic phase, based on field characterization of single instabilities and their further analysis on 3d models, has been applied to delineate the peculiarities of each single feature. The statistical approach, based on geostructural field mapping and on point geomechanical data from scan-line surveying, allowed partitioning of the rock mass into homogeneous geomechanical sectors and data interpolation through bounded geostatistical analyses on 3d models. All data resulting from both approaches have been referenced and filed in a single spatial database and considered in global geo-statistical analyses to derive a fully modelled and comprehensive evaluation of the rockslide susceptibility. The described workflow yielded the following innovative results: a) a detailed census of single potential instabilities, through a spatial database recording the geometrical, geological and mechanical features, along with the expected failure modes; b) a high-resolution characterization of the rockslide susceptibility of the whole slope, based on the partitioning of the area according to the stability and mechanical conditions, which can be directly related to specific hazard mitigation systems; c) the exact extent of the area exposed to rockslide hazard, along with the dynamic parameters of the expected phenomena; d) an intervention design for hazard mitigation.
[Model-based biofuels system analysis: a review].
Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin
2011-03-01
Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis is a prerequisite for future biofuels system modeling and represents a valuable resource for researchers and policy makers.
What Can the Diffusion Model Tell Us About Prospective Memory?
Horn, Sebastian S.; Bayen, Ute J.; Smith, Rebekah E.
2011-01-01
Cognitive process models, such as Ratcliff’s (1978) diffusion model, are useful tools for examining cost- or interference effects in event-based prospective memory (PM). The diffusion model includes several parameters that provide insight into how and why ongoing-task performance may be affected by a PM task and is ideally suited to analyze performance because both reaction time and accuracy are taken into account. Separate analyses of these measures can easily yield misleading interpretations in cases of speed-accuracy tradeoffs. The diffusion model allows us to measure possible criterion shifts and is thus an important methodological improvement over standard analyses. Performance in an ongoing lexical decision task (Smith, 2003) was analyzed with the diffusion model. The results suggest that criterion shifts play an important role when a PM task is added, but do not fully explain the cost effect on RT. PMID:21443332
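For readers unfamiliar with the diffusion model, the sketch below simulates trials in which noisy evidence drifts toward one of two response boundaries; the parameter values (drift, boundary separation, starting point, non-decision time) are illustrative choices, not estimates from the lexical decision data discussed above.

```python
# Hedged sketch of a Ratcliff-style drift-diffusion simulation (Euler steps).
import numpy as np

def simulate_trial(v=0.25, a=1.0, z=0.5, t0=0.3, dt=0.001, sigma=1.0, rng=None):
    """Return (response, reaction time) for one simulated trial.

    v: drift rate, a: boundary separation, z: relative starting point,
    t0: non-decision time, sigma: diffusion coefficient.
    """
    rng = rng or np.random.default_rng()
    x, t = z * a, 0.0
    while 0.0 < x < a:                      # accumulate until a boundary is hit
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t0 + t

rng = np.random.default_rng(4)
trials = [simulate_trial(rng=rng) for _ in range(2000)]
responses = np.array([r for r, _ in trials])
rts = np.array([rt for _, rt in trials])
print(f"P(upper boundary) = {responses.mean():.2f}, mean RT = {rts.mean():.3f} s")
```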
POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2013-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended and tabled values of required sample sizes are shown for some models. PMID:23935262
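The Monte Carlo recipe described (simulate many samples, count how often the effect of interest is detected) can be sketched for the single-mediator case as follows; the path coefficients, sample size, and joint-significance test are illustrative choices, not the authors' Mplus setup.

```python
# Hedged sketch: Monte Carlo power for the indirect effect in X -> M -> Y,
# using joint significance of the a and b paths as the detection rule.
import numpy as np
import statsmodels.api as sm

def one_replication(n, a=0.3, b=0.3, c_prime=0.1, rng=None):
    rng = rng or np.random.default_rng()
    x = rng.standard_normal(n)
    m = a * x + rng.standard_normal(n)
    y = b * m + c_prime * x + rng.standard_normal(n)
    p_a = sm.OLS(m, sm.add_constant(x)).fit().pvalues[1]                      # a path
    p_b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().pvalues[1]  # b path
    return (p_a < 0.05) and (p_b < 0.05)

rng = np.random.default_rng(5)
n_sims = 1000
power = np.mean([one_replication(n=150, rng=rng) for _ in range(n_sims)])
print(f"Estimated power: {power:.2f}")
```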
Development of Parametric Mass and Volume Models for an Aerospace SOFC/Gas Turbine Hybrid System
NASA Technical Reports Server (NTRS)
Tornabene, Robert; Wang, Xiao-yen; Steffen, Christopher J., Jr.; Freeh, Joshua E.
2005-01-01
In aerospace power systems, mass and volume are key considerations to produce a viable design. The utilization of fuel cells is being studied for a commercial aircraft electrical power unit. Based on preliminary analyses, a SOFC/gas turbine system may be a potential solution. This paper describes the parametric mass and volume models that are used to assess an aerospace hybrid system design. The design tool utilizes input from the thermodynamic system model and produces component sizing, performance, and mass estimates. The software is designed such that the thermodynamic model is linked to the mass and volume model to provide immediate feedback during the design process. It allows for automating an optimization process that accounts for mass and volume in its figure of merit. Each component in the system is modeled with a combination of theoretical and empirical approaches. A description of the assumptions and design analyses is presented.
Welton, Nicky J; Madan, Jason; Ades, Anthony E
2011-09-01
Reimbursement decisions are typically based on cost-effectiveness analyses. While a cost-effectiveness analysis can identify the optimum strategy, there is usually some degree of uncertainty around this decision. Sources of uncertainty include statistical sampling error in treatment efficacy measures, underlying baseline risk, utility measures and costs, as well as uncertainty in the structure of the model. The optimal strategy is therefore only optimal on average, and a decision to adopt this strategy might still be the wrong decision if all uncertainty could be eliminated. This means that there is a quantifiable expected (average) loss attaching to decisions made under uncertainty, and hence a value in collecting information to reduce that uncertainty. Value of information (VOI) analyses can be used to provide guidance on whether more research would be cost-effective, which particular model inputs (parameters) have the most bearing on decision uncertainty, and can also help with the design and sample size of further research. Here, we introduce the key concepts in VOI analyses, and highlight the inputs required to calculate it. The adoption of the new biologic treatments for RA and PsA tends to be based on placebo-controlled trials. We discuss the possible role of VOI analyses in deciding whether head-to-head comparisons of the biologic therapies should be carried out, illustrating with examples from other fields. We emphasize the need for a model of the natural history of RA and PsA, which reflects a consensus view.
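The core VOI quantity, the expected value of perfect information, falls out of probabilistic sensitivity-analysis samples. A minimal sketch for a two-strategy decision follows; the net-monetary-benefit distributions are invented for illustration.

```python
# Hedged sketch of per-patient EVPI from simulated net monetary benefit (NMB)
# samples for two strategies under parameter uncertainty.
import numpy as np

rng = np.random.default_rng(6)
n = 10_000
nmb = np.column_stack([
    rng.normal(20_000, 4_000, n),   # strategy A (hypothetical)
    rng.normal(21_000, 6_000, n),   # strategy B (hypothetical)
])

expected_nmb = nmb.mean(axis=0)
value_current_info = expected_nmb.max()        # pick the best strategy on average
value_perfect_info = nmb.max(axis=1).mean()    # pick the best strategy per draw
evpi = value_perfect_info - value_current_info
print(f"EVPI per patient: {evpi:,.0f}")
```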
Qualitatively modelling and analysing genetic regulatory networks: a Petri net approach.
Steggles, L Jason; Banks, Richard; Shaw, Oliver; Wipat, Anil
2007-02-01
New developments in post-genomic technology now provide researchers with the data necessary to study regulatory processes in a holistic fashion at multiple levels of biological organization. One of the major challenges for the biologist is to integrate and interpret these vast data resources to gain a greater understanding of the structure and function of the molecular processes that mediate adaptive and cell cycle driven changes in gene expression. In order to achieve this biologists require new tools and techniques to allow pathway related data to be modelled and analysed as network structures, providing valuable insights which can then be validated and investigated in the laboratory. We propose a new technique for constructing and analysing qualitative models of genetic regulatory networks based on the Petri net formalism. We take as our starting point the Boolean network approach of treating genes as binary switches and develop a new Petri net model which uses logic minimization to automate the construction of compact qualitative models. Our approach addresses the shortcomings of Boolean networks by providing access to the wide range of existing Petri net analysis techniques and by using non-determinism to cope with incomplete and inconsistent data. The ideas we present are illustrated by a case study in which the genetic regulatory network controlling sporulation in the bacterium Bacillus subtilis is modelled and analysed. The Petri net model construction tool and the data files for the B. subtilis sporulation case study are available at http://bioinf.ncl.ac.uk/gnapn.
A Model for Simulating the Response of Aluminum Honeycomb Structure to Transverse Loading
NASA Technical Reports Server (NTRS)
Ratcliffe, James G.; Czabaj, Michael W.; Jackson, Wade C.
2012-01-01
A 1-dimensional material model was developed for simulating the transverse (thickness-direction) loading and unloading response of aluminum honeycomb structure. The model was implemented as a user-defined material subroutine (UMAT) in the commercial finite element analysis code, ABAQUS(Registered TradeMark)/Standard. The UMAT has been applied to analyses for simulating quasi-static indentation tests on aluminum honeycomb-based sandwich plates. Comparison of analysis results with data from these experiments shows overall good agreement. Specifically, analyses of quasi-static indentation tests yielded accurate global specimen responses. Predicted residual indentation was also in reasonable agreement with measured values. Overall, this simple model does not involve a significant computational burden, which makes it more tractable to simulate other damage mechanisms in the same analysis.
Does integration matter? A holistic model for building community resilience in Pakistan.
Kanta Kafle, Shesh
2017-01-01
This paper analyses an integrated community-based risk reduction model adopted by the Pakistan Red Crescent. The paper analyses the model's constructs and definitions, and provides a conceptual framework and a set of practical recommendations for building community resilience. The study uses an outcome-based resilience index process to assess the effectiveness of the approach. The results indicate that the integrated programming approach is an effective way to build community resilience, as it offers a number of tangible and long-lasting benefits, including effective and efficient service delivery, local ownership, sustainability of results, and improved local resilience with respect to the shocks and stresses associated with disasters. The paper also outlines a set of recommendations for the effective and efficient use of the model for building community resilience in Pakistan.
Numerical Modelling of Connections Between Stones in Foundations of Historical Buildings
NASA Astrophysics Data System (ADS)
Przewlocki, Jaroslaw; Zielinska, Monika; Grebowski, Karol
2017-12-01
The aim of this paper is to analyse, through numerical analysis, the behaviour of old building foundations composed of stones (the main load-bearing elements) and mortar. Some basic aspects of historical foundations are briefly discussed, with an emphasis on their development, techniques, and materials. The behaviour of a foundation subjected to the loads transmitted from the upper parts of the structure is described using the finite element method (FEM). The main problems in analysing the foundations of historical buildings are determining the characteristics of the materials and the degree of degradation of the mortar, which is the weakest part of the foundation. Mortar is represented with a damaged-plastic model, in which the bearing capacity is exceeded through degradation of the material. The damaged-plastic model describes the behaviour and properties of mortar most accurately because it captures what happens to the material throughout its entire load history. For a uniformly loaded fragment of the foundation, both stresses and strains were analysed. The results of the analysis presented in this paper contribute to further research on understanding the behaviour and modelling of historical buildings' foundations.
Cross-Sectional Analysis of Longitudinal Mediation Processes.
O'Laughlin, Kristine D; Martin, Monica J; Ferrer, Emilio
2018-01-01
Statistical mediation analysis can help to identify and explain the mechanisms behind psychological processes. Examining a set of variables for mediation effects is a ubiquitous process in the social sciences literature; however, despite evidence suggesting that cross-sectional data can misrepresent the mediation of longitudinal processes, cross-sectional analyses continue to be used in this manner. Alternative longitudinal mediation models, including those rooted in a structural equation modeling framework (cross-lagged panel, latent growth curve, and latent difference score models) are currently available and may provide a better representation of mediation processes for longitudinal data. The purpose of this paper is twofold: first, we provide a comparison of cross-sectional and longitudinal mediation models; second, we advocate using models to evaluate mediation effects that capture the temporal sequence of the process under study. Two separate empirical examples are presented to illustrate differences in the conclusions drawn from cross-sectional and longitudinal mediation analyses. Findings from these examples yielded substantial differences in interpretations between the cross-sectional and longitudinal mediation models considered here. Based on these observations, researchers should use caution when attempting to use cross-sectional data in place of longitudinal data for mediation analyses.
Jacob, Benjamin J; Krapp, Fiorella; Ponce, Mario; Gottuzzo, Eduardo; Griffith, Daniel A; Novak, Robert J
2010-05-01
Spatial autocorrelation is problematic for classical hierarchical cluster detection tests commonly used in multi-drug resistant tuberculosis (MDR-TB) analyses as considerable random error can occur. Therefore, when MDR-TB clusters are spatially autocorrelated the assumption that the clusters are independently random is invalid. In this research, a product moment correlation coefficient (i.e., Moran's coefficient) was used to quantify local spatial variation in multiple clinical and environmental predictor variables sampled in San Juan de Lurigancho, Lima, Peru. Initially, QuickBird 0.61 m data, encompassing visible bands and the near infra-red bands, were selected to synthesize images of land cover attributes of the study site. Data on residential addresses of individual patients with smear-positive MDR-TB were geocoded, prevalence rates calculated and then digitally overlaid onto the satellite data within a 2 km buffer of 31 georeferenced health centers, using a 10 m2 grid-based algorithm. Geographical information system (GIS)-gridded measurements of each health center were generated based on preliminary base maps of the georeferenced data aggregated to block groups and census tracts within each buffered area. A three-dimensional model of the study site was constructed based on a digital elevation model (DEM) to determine terrain covariates associated with the sampled MDR-TB covariates. Pearson's correlation was used to evaluate the linear relationship between the DEM and the sampled MDR-TB data. A SAS/GIS(R) module was then used to calculate univariate statistics and to perform linear and non-linear regression analyses using the sampled predictor variables. The estimates generated from the global autocorrelation analysis were then spatially decomposed into empirical orthogonal bases using a negative binomial regression with a non-homogeneous mean. Results of the DEM analyses indicated a statistically non-significant, linear relationship between georeferenced health centers and the sampled covariate elevation. The data exhibited positive spatial autocorrelation and the decomposition of Moran's coefficient into uncorrelated, orthogonal map pattern components revealed global spatial heterogeneities necessary to capture latent autocorrelation in the MDR-TB model. It was thus shown that Poisson regression analyses and spatial eigenvector mapping can elucidate the mechanics of MDR-TB transmission by prioritizing clinical and environmental-sampled predictor variables for identifying high risk populations.
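A hedged sketch of Moran's coefficient with an inverse-distance spatial weights matrix is shown below; the coordinates and variable are synthetic, not the Lima data, and the SAS-based eigenvector decomposition is not reproduced.

```python
# Hedged sketch of Moran's I for a variable sampled at georeferenced sites,
# using inverse-distance weights with a zero diagonal. Data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n = 50
coords = rng.uniform(0, 10, size=(n, 2))          # site coordinates (km)
z = coords[:, 0] * 0.4 + rng.normal(0, 1, n)      # variable with a spatial trend

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
w = np.where(d > 0, 1.0 / d, 0.0)                 # inverse-distance weights

def morans_i(z, w):
    zc = z - z.mean()
    num = np.sum(w * np.outer(zc, zc))
    return (len(z) / w.sum()) * num / np.sum(zc ** 2)

# Expected value under no spatial autocorrelation is -1/(n-1).
print(f"Moran's I = {morans_i(z, w):.3f}")
```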
NASA Astrophysics Data System (ADS)
El Naqa, I.; Suneja, G.; Lindsay, P. E.; Hope, A. J.; Alaly, J. R.; Vicic, M.; Bradley, J. D.; Apte, A.; Deasy, J. O.
2006-11-01
Radiotherapy treatment outcome models are a complicated function of treatment, clinical and biological factors. Our objective is to provide clinicians and scientists with an accurate, flexible and user-friendly software tool to explore radiotherapy outcomes data and build statistical tumour control or normal tissue complications models. The software tool, called the dose response explorer system (DREES), is based on Matlab, and uses a named-field structure array data type. DREES/Matlab in combination with another open-source tool (CERR) provides an environment for analysing treatment outcomes. DREES provides many radiotherapy outcome modelling features, including (1) fitting of analytical normal tissue complication probability (NTCP) and tumour control probability (TCP) models, (2) combined modelling of multiple dose-volume variables (e.g., mean dose, max dose, etc) and clinical factors (age, gender, stage, etc) using multi-term regression modelling, (3) manual or automated selection of logistic or actuarial model variables using bootstrap statistical resampling, (4) estimation of uncertainty in model parameters, (5) performance assessment of univariate and multivariate analyses using Spearman's rank correlation and chi-square statistics, boxplots, nomograms, Kaplan-Meier survival plots, and receiver operating characteristics curves, and (6) graphical capabilities to visualize NTCP or TCP prediction versus selected variable models using various plots. DREES provides clinical researchers with a tool customized for radiotherapy outcome modelling. DREES is freely distributed. We expect to continue developing DREES based on user feedback.
Rothman, Jessica; Kayigamba, Felix; Hills, Victoria; Gupta, Neil; Machara, Faustin; Niyigena, Peter; Franke, Molly F
2018-01-01
The objective of this study was to examine how food insecurity changed among HIV-positive adults during the first 12 months of combination antiretroviral therapy (cART) and whether any change differed according to the receipt of food support, which was provided in the context of a comprehensive community-based intervention. We conducted secondary data analyses of data from a prospective cohort study of the effectiveness of a community-based cART delivery model when added to clinic-based cART delivery in Rwanda. We included patients from four health centers that implemented a clinic-based cART delivery model alone and five health centers that additionally implemented the intervention, which included 10 months of food support. We compared food insecurity at 3, 6, and 12 months, relative to baseline, and stratified by receipt of the intervention. Relative to baseline, median food insecurity score decreased after 3, 6, and 12 months (p value <0.0001 for all) for patients receiving a food ration through the community-based model for cART delivery. Among patients receiving care under the clinic-based cART model, food insecurity scores remained unchanged at 3 and 12 months and were significantly higher after 6 months. In adjusted analyses, participants enrolled in the community-based intervention with a food ration had a lower risk of severe food insecurity and a lower risk of moderate or severe food insecurity after 12 months. A comprehensive community-based HIV program including a food ration likely contributes to an alleviation of food insecurity among adults newly initiating cART.
Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models
NASA Astrophysics Data System (ADS)
Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan
2017-04-01
Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in heterogeneous porous media may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced-order modeling tools that couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced-order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are a part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
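A small sketch of the NMF-plus-k-means idea follows, using scikit-learn on a synthetic mixture of source signatures; the actual tools described are coded in Julia within MADS, so this is only an illustration of the technique, not the authors' implementation.

```python
# Hedged sketch: blind source separation of mixed "water types" by NMF of a
# (samples x analytes) matrix, then k-means clustering of the mixing weights.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
n_samples, n_analytes, n_sources = 120, 10, 3
sources = rng.uniform(0, 1, size=(n_sources, n_analytes))         # source signatures
mixing = rng.dirichlet(alpha=np.ones(n_sources), size=n_samples)  # mixing ratios
X = mixing @ sources + 0.01 * rng.uniform(size=(n_samples, n_analytes))

nmf = NMF(n_components=n_sources, init="nndsvda", max_iter=2000, random_state=0)
W = nmf.fit_transform(X)            # estimated mixing coefficients per sample
H = nmf.components_                 # estimated source signatures

labels = KMeans(n_clusters=n_sources, n_init=10, random_state=0).fit_predict(W)
print("reconstruction error:", round(float(nmf.reconstruction_err_), 4))
print("cluster sizes:", np.bincount(labels))
```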
Olszewski, Raphael; Szymor, Piotr; Kozakiewicz, Marcin
2014-12-01
Our study aimed to determine the accuracy of a low-cost, paper-based 3D printer by comparing a dry human mandible to its corresponding three-dimensional (3D) model using a 3D measuring arm. One dry human mandible and its corresponding printed model were evaluated. The model was produced using DICOM data from cone beam computed tomography. The data were imported into Maxilim software, wherein automatic segmentation was performed, and the STL file was saved. These data were subsequently analysed, repaired, cut and prepared for printing with netfabb software. These prepared data were used to create a paper-based model of a mandible with an MCor Matrix 300 printer. Seventy-six anatomical landmarks were chosen and measured 20 times on the mandible and the model using a MicroScribe G2X 3D measuring arm. The distances between all the selected landmarks were measured and compared. Only landmarks with a point inaccuracy less than 30% were used in further analyses. The mean absolute difference for the selected 2016 measurements was 0.36 ± 0.29 mm. The mean relative difference was 1.87 ± 3.14%; however, the measurement length significantly influenced the relative difference. The accuracy of the 3D model printed using the paper-based, low-cost 3D Matrix 300 printer was acceptable. The average error was no greater than that measured with other types of 3D printers. The mean relative difference should not be considered the best way to compare studies. The point inaccuracy methodology proposed in this study may be helpful in future studies concerned with evaluating the accuracy of 3D rapid prototyping models.
Hypoglycemia alarm enhancement using data fusion.
Skladnev, Victor N; Tarnavskii, Stanislav; McGregor, Thomas; Ghevondian, Nejhdeh; Gourlay, Steve; Jones, Timothy W
2010-01-01
The acceptance of closed-loop blood glucose (BG) control using continuous glucose monitoring systems (CGMS) is likely to improve with enhanced performance of their integral hypoglycemia alarms. This article presents an in silico analysis (based on clinical data) of a modeled CGMS alarm system with trained thresholds on type 1 diabetes mellitus (T1DM) patients that is augmented by sensor fusion from a prototype hypoglycemia alarm system (HypoMon). This prototype alarm system is based on largely independent autonomic nervous system (ANS) response features. Alarm performance was modeled using overnight BG profiles recorded previously on 98 T1DM volunteers. These data included the corresponding ANS response features detected by HypoMon (AiMedics Pty. Ltd.) systems. CGMS data and alarms were simulated by applying a probabilistic model to these overnight BG profiles. The probabilistic model developed used a mean response delay of 7.1 minutes, measurement error offsets on each sample of +/- standard deviation (SD) = 4.5 mg/dl (0.25 mmol/liter), and vertical shifts (calibration offsets) of +/- SD = 19.8 mg/dl (1.1 mmol/liter). Modeling produced 90 to 100 simulated measurements per patient. Alarm systems for all analyses were optimized on a training set of 46 patients and evaluated on the test set of 56 patients. The split between the sets was based on enrollment dates. Optimization was based on detection accuracy but not time to detection for these analyses. The contribution of this form of data fusion to hypoglycemia alarm performance was evaluated by comparing the performance of the trained CGMS and fused data algorithms on the test set under the same evaluation conditions. The simulated addition of HypoMon data produced an improvement in CGMS hypoglycemia alarm performance of 10% at equal specificity. Sensitivity improved from 87% (CGMS as stand-alone measurement) to 97% for the enhanced alarm system. Specificity was maintained constant at 85%. Positive predictive values on the test set improved from 61 to 66% with negative predictive values improving from 96 to 99%. These enhancements were stable within sensitivity analyses. Sensitivity analyses also suggested larger performance increases at lower CGMS alarm performance levels. Autonomic nervous system response features provide complementary information suitable for fusion with CGMS data to enhance nocturnal hypoglycemia alarms. 2010 Diabetes Technology Society.
Two dimensional finite element heat transfer models for softwood
Hongmei Gu; John F. Hunt
2004-01-01
The anisotropy of wood creates a complex problem for solving heat and mass transfer problems that require analyses be based on fundamental material properties of the wood structure. Most heat transfer models use average thermal properties across either the radial or tangential directions and have not differentiated the effects of cellular alignment, earlywood/latewood...
USDA-ARS?s Scientific Manuscript database
Human selection has reshaped crop genomes. Here we report an apple genome variation map generated through genome sequencing of 117 diverse accessions. A comprehensive model of apple speciation and domestication along the Silk Road was proposed based on evidence from diverse genomic analyses. Cultiva...
On-Line Critiques in Collaborative Design Studio
ERIC Educational Resources Information Center
Sagun, Aysu; Demirkan, Halime
2009-01-01
In this study, the Design Collaboration Model (DCM) was developed to provide a medium for the on-line collaboration of the design courses. The model was based on the situated and reflective practice characteristics of the design process. The segmentation method was used to analyse the design process observed both in the design diaries and the…
ERIC Educational Resources Information Center
Boyles, William W.
1975-01-01
In 1973, Ronald G. Lykins presented a model for cash management and analysed its benefits for Ohio University. This paper attempts to expand on the previous method by providing answers to questions raised by the Lykins methods by a series of simple algebraic formulas. Both methods are based on two premises: (1) all cash over which the business…
Resilience: Enhancing Well-Being through the Positive Cognitive Triad
ERIC Educational Resources Information Center
Mak, Winnie W. S.; Ng, Ivy S. W.; Wong, Celia C. Y.
2011-01-01
The present study tested whether the relationships among resilience, life satisfaction, and depression could be explained by positive views toward the self, the world, and the future (positive cognitive triad). Structural equation modeling and mediation analyses were conducted based on 1,419 college students in Hong Kong. The model of positive…
An enhanced version of a bone-remodelling model based on the continuum damage mechanics theory.
Mengoni, M; Ponthot, J P
2015-01-01
The purpose of this work was to propose an enhancement of Doblaré and García's internal bone remodelling model based on the continuum damage mechanics (CDM) theory. In their paper, they stated that the evolution of the internal variables of the bone microstructure, and its incidence on the modification of the elastic constitutive parameters, may be formulated following the principles of CDM, although no actual damage was considered. The resorption and apposition criteria (similar to the damage criterion) were expressed in terms of a mechanical stimulus. However, the resorption criterion lacks dimensional consistency with the remodelling rate. We propose here an enhancement to this resorption criterion, ensuring dimensional consistency while retaining the physical properties of the original remodelling model. We then analyse the change in the resorption criterion hypersurface in the stress space for a two-dimensional (2D) analysis. We finally apply the new formulation to analyse the structural evolution of a 2D femur. This analysis gives results consistent with the original model but with a faster and more stable convergence rate.
Lee, Ann-Ying; Chen, Chun-Yi; Chang, Yao-Chien Alex; Chao, Ya-Ting; Shih, Ming-Che
2013-01-01
Previously we developed genomic resources for orchids, including transcriptomic analyses using next-generation sequencing techniques and construction of a web-based orchid genomic database. Here, we report a modified molecular model of flower development in the Orchidaceae based on functional analysis of gene expression profiles in Phalaenopsis aphrodite (a moth orchid) that revealed novel roles for the transcription factors involved in floral organ pattern formation. Phalaenopsis orchid floral organ-specific genes were identified by microarray analysis. Several critical transcription factors, including AP3, PI, AP1 and AGL6, displayed distinct spatial distribution patterns. Phylogenetic analysis of orchid MADS box genes was conducted to infer the evolutionary relationship among floral organ-specific genes. The results suggest that duplication of MADS box genes in orchids may have resulted in their gaining novel functions during evolution. Based on these analyses, a modified model of orchid flowering was proposed. Comparison of the expression profiles of flowers of a peloric mutant and wild-type Phalaenopsis orchid further identified genes associated with lip morphology and peloric effects. Large scale investigation of gene expression profiles revealed that homeotic genes from classes A and B of the ABCDE model of flower development in the Phalaenopsis orchid have novel functions due to evolutionary diversification, and display differential expression patterns. PMID:24265826
Advanced space program studies: Overall executive summary
NASA Technical Reports Server (NTRS)
Sitney, L. R.
1974-01-01
Studies were conducted to provide NASA with advanced planning analyses which relate integrated space program goals and options to credible technical capabilities, applications potential, and funding resources. The studies concentrated on the following subjects: (1) upper stage options for the space transportation system based on payload considerations, (2) space servicing and standardization of payloads, (3) payload operations, and (4) space transportation system economic analyses related to user charges and new space applications. A systems cost/performance model was developed to synthesize automated, unmanned spacecraft configurations based on the system requirements and a list of equipments at the assembly level.
A formal approach to the analysis of clinical computer-interpretable guideline modeling languages.
Grando, M Adela; Glasspool, David; Fox, John
2012-01-01
To develop proof strategies to formally study the expressiveness of workflow-based languages, and to investigate their applicability to clinical computer-interpretable guideline (CIG) modeling languages. We propose two strategies for studying the expressiveness of workflow-based languages based on a standard set of workflow patterns expressed as Petri nets (PNs) and notions of congruence and bisimilarity from process calculus. Proof that a PN-based pattern P can be expressed in a language L can be carried out semi-automatically. Proof that a language L cannot provide the behavior specified by a PN-based pattern P requires proof by exhaustion based on analysis of cases and cannot be performed automatically. The proof strategies are generic but we exemplify their use with a particular CIG modeling language, PROforma. To illustrate the method we evaluate the expressiveness of PROforma against three standard workflow patterns and compare our results with a previous similar but informal comparison. We show that the two proof strategies are effective in evaluating a CIG modeling language against standard workflow patterns. We find that using the proposed formal techniques we obtain different results from a comparable, previously published but less formal study. We discuss the utility of these analyses as the basis for principled extensions to CIG modeling languages. Additionally, we explain how the same proof strategies can be reused to prove the satisfaction of patterns expressed in the declarative language CIGDec. The proof strategies we propose are useful tools for analysing the expressiveness of CIG modeling languages. This study provides good evidence of the benefits of applying formal methods of proof over semi-formal ones. Copyright © 2011 Elsevier B.V. All rights reserved.
Gaskin, Cadeyrn J; Happell, Brenda
2014-05-01
To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
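For orientation, a small sketch of the kind of power computation used in such reviews for an independent-samples t test at Cohen's conventional small/medium/large effect sizes (0.2, 0.5, 0.8). The group size is a hypothetical placeholder; this is not the review's own code.

```python
from scipy import stats

def t_test_power(d, n_per_group, alpha=0.05):
    """Two-sided power of an independent-samples t test for effect size d."""
    df = 2 * n_per_group - 2
    nc = d * (n_per_group / 2) ** 0.5          # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(label, round(t_test_power(d, n_per_group=50), 2))
```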
Marchetti, Monia; Liberato, Nicola Lucio
2014-12-01
In refractory Crohn's disease, anti-TNF and anti-α4 integrin agents are used to ameliorate disease activity but impose high costs on health-care systems. The authors systematically reviewed cost-effectiveness analyses based on decision models: most of the studies were judged to be of good quality, but a large portion assessed health and costs over a short time horizon, usually disregarding fistulizing disease and not considering safety. Infliximab induction followed by on-demand retreatment consistently proved to have a good cost per quality-adjusted life year, while maintenance treatment never satisfied commonly accepted cost-utility thresholds. Challenges in cost-effectiveness analysis include the lack of a standard model structure, a large variability in the costs of surgery and poor data on indirect costs. As clinical practice is moving to mucosal healing as a robust response marker, personalized schedules of anti-TNF therapies might prove cost-effective even from the perspective of the health-care system in the near future.
A Bayesian network model for predicting type 2 diabetes risk based on electronic health records
NASA Astrophysics Data System (ADS)
Xie, Jiang; Liu, Yan; Zeng, Xu; Zhang, Wu; Mei, Zhen
2017-07-01
An extensive, in-depth study of diabetes risk factors (DBRF) is of crucial importance to prevent (or reduce) the chance of suffering from type 2 diabetes (T2D). Accumulation of electronic health records (EHRs) makes it possible to build nonlinear relationships between risk factors and diabetes. However, current DBRF research mainly focuses on qualitative analyses, and the inconsistency of physical examination items means that risk factors are easily lost, which motivates us to study novel machine learning approaches for risk model development. In this paper, we use Bayesian networks (BNs) to analyze the relationship between physical examination information and T2D, and to quantify the link between risk factors and T2D. Furthermore, with the quantitative analyses of DBRF, we adopt EHR and propose a machine learning approach based on BNs to predict the risk of T2D. The experiments demonstrate that our approach can lead to better predictive performance than the classical risk model.
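To illustrate the general idea (this is not the authors' network), a toy discrete Bayesian network with two risk factors and hand-specified conditional probability tables, computing P(T2D | evidence) by enumeration. All variable names and probabilities are invented for the example.

```python
from itertools import product

# Toy CPTs (hypothetical numbers): P(Obese), P(HighBP), P(T2D | Obese, HighBP)
p_obese = {True: 0.3, False: 0.7}
p_highbp = {True: 0.25, False: 0.75}
p_t2d = {(True, True): 0.30, (True, False): 0.15,
         (False, True): 0.10, (False, False): 0.04}

def posterior_t2d(evidence):
    """P(T2D=True | evidence) by enumerating the unobserved risk factors."""
    num = den = 0.0
    for obese, highbp in product([True, False], repeat=2):
        if evidence.get("obese", obese) != obese:      # skip inconsistent states
            continue
        if evidence.get("highbp", highbp) != highbp:
            continue
        prior = p_obese[obese] * p_highbp[highbp]
        num += prior * p_t2d[(obese, highbp)]
        den += prior
    return num / den

print(posterior_t2d({"obese": True}))                  # risk given obesity only
print(posterior_t2d({"obese": True, "highbp": True}))  # risk given both factors
```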
NASA Astrophysics Data System (ADS)
Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh
2017-06-01
The main purpose of this paper is to propose a new method for designing the Macpherson suspension, based on Sobol indices in terms of the Pearson correlation, which determines the importance of each member for the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with equivalent constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism of the Macpherson suspension increases the speed of analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to analyse the accuracy of the modified geometry model. An experimental 4-poster test rig is used to validate both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung mass acceleration. In addition, the estimation of the Pearson correlation coefficient between variables is analysed in this method. It is understood that the Pearson correlation coefficient is an efficient tool for analysing the vehicle suspension, which leads to a better design of the Macpherson suspension system.
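A minimal sketch of the correlation-based sensitivity ranking idea: sample candidate design variables, evaluate an objective, and rank members by the magnitude of their Pearson correlation with it. The variable names, ranges and the stand-in objective function are placeholders, not the paper's vehicle model.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 500
# Hypothetical design variables: spring stiffness, damper rate, control-arm length.
X = {"k_spring": rng.uniform(15e3, 35e3, n),
     "c_damper": rng.uniform(1e3, 3e3, n),
     "arm_len": rng.uniform(0.30, 0.40, n)}

# Stand-in objective (not the vehicle model): a "sprung-mass acceleration" metric.
y = (1e-4 * X["k_spring"] - 2e-3 * X["c_damper"]
     + 5.0 * X["arm_len"] + rng.normal(0, 0.5, n))

ranking = sorted(((abs(pearsonr(x, y)[0]), name) for name, x in X.items()),
                 reverse=True)
for r, name in ranking:
    print(f"{name}: |r| = {r:.2f}")
```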
In vitro-in vivo correlation for nevirapine extended release tablets.
Macha, Sreeraj; Yong, Chan-Loi; Darrington, Todd; Davis, Mark S; MacGregor, Thomas R; Castles, Mark; Krill, Steven L
2009-12-01
An in vitro-in vivo correlation (IVIVC) for four nevirapine extended release tablets with varying polymer contents was developed. The pharmacokinetics of extended release formulations were assessed in a parallel group study with healthy volunteers and compared with corresponding in vitro dissolution data obtained using a USP apparatus type 1. In vitro samples were analysed using HPLC with UV detection and in vivo samples were analysed using a HPLC-MS/MS assay; the IVIVC analyses comparing the two results were performed using WinNonlin. A Double Weibull model optimally fits the in vitro data. A unit impulse response (UIR) was assessed using the fastest ER formulation as a reference. The deconvolution of the in vivo concentration time data was performed using the UIR to estimate an in vivo drug release profile. A linear model with a time-scaling factor clarified the relationship between in vitro and in vivo data. The predictability of the final model was consistent based on internal validation. Average percent prediction errors for pharmacokinetic parameters were <10% and individual values for all formulations were <15%. Therefore, a Level A IVIVC was developed and validated for nevirapine extended release formulations providing robust predictions of in vivo profiles based on in vitro dissolution profiles. Copyright 2009 John Wiley & Sons, Ltd.
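A sketch of fitting a double-Weibull release model to in vitro dissolution data, in the spirit of the step described above. The data points, parameterization and bounds are illustrative assumptions, not the study's values or its WinNonlin workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_weibull(t, f, a, b, c, d):
    """Percent released: weighted sum of two Weibull release phases."""
    return 100 * (f * (1 - np.exp(-(t / a) ** b))
                  + (1 - f) * (1 - np.exp(-(t / c) ** d)))

# Hypothetical dissolution profile (hours vs % released).
t = np.array([0.5, 1, 2, 4, 6, 8, 12, 16, 20, 24], float)
released = np.array([8, 15, 28, 46, 58, 67, 79, 87, 92, 95], float)

p0 = [0.5, 2.0, 1.0, 10.0, 1.5]                       # initial guess
bounds = ([0, 0.1, 0.2, 0.1, 0.2], [1, 50, 5, 50, 5])
params, _ = curve_fit(double_weibull, t, released, p0=p0, bounds=bounds)
print(dict(zip("f a b c d".split(), np.round(params, 3))))
```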
ERIC Educational Resources Information Center
Yorek, Nurettin; Ugulu, Ilker
2015-01-01
In this study, artificial neural networks are suggested as a model that can be "trained" to yield qualitative results out of a huge amount of categorical data. It can be said that this is a new approach applied in educational qualitative data analysis. In this direction, a cascade-forward back-propagation neural network (CFBPN) model was…
Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul
2017-02-01
Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance in realistic timescales. Sensitivity analyses showed that genetic and management parameters were impactful on the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential use of the model are discussed. © 2016 Society of Chemical Industry.
Detection of greenhouse-gas-induced climatic change. Progress report, July 1, 1994--July 31, 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, P.D.; Wigley, T.M.L.
1995-07-21
The objective of this research is to assemble and analyze instrumental climate data and to develop and apply climate models as a basis for detecting greenhouse-gas-induced climatic change, and validation of General Circulation Models. In addition to changes due to variations in anthropogenic forcing, including greenhouse gas and aerosol concentration changes, the global climate system exhibits a high degree of internally-generated and externally-forced natural variability. To detect the anthropogenic effect, its signal must be isolated from the 'noise' of this natural climatic variability. A high quality, spatially extensive data base is required to define the noise and its spatial characteristics. To facilitate this, available land and marine data bases will be updated and expanded. The data will be analyzed to determine the potential effects on climate of greenhouse gas and aerosol concentration changes and other factors. Analyses will be guided by a variety of models, from simple energy balance climate models to coupled atmosphere ocean General Circulation Models. These analyses are oriented towards obtaining early evidence of anthropogenic climatic change that would lead either to confirmation, rejection or modification of model projections, and towards the statistical validation of General Circulation Model control runs and perturbation experiments.
Network growth models: A behavioural basis for attachment proportional to fitness
NASA Astrophysics Data System (ADS)
Bell, Michael; Perera, Supun; Piraveenan, Mahendrarajah; Bliemer, Michiel; Latty, Tanya; Reid, Chris
2017-02-01
Several growth models have been proposed in the literature for scale-free complex networks, with a range of fitness-based attachment models gaining prominence recently. However, the processes by which such fitness-based attachment behaviour can arise are less well understood, making it difficult to compare the relative merits of such models. This paper analyses an evolutionary mechanism that would give rise to a fitness-based attachment process. In particular, it is proven by analytical and numerical methods that in homogeneous networks, the minimisation of maximum exposure to node unfitness leads to attachment probabilities that are proportional to node fitness. This result is then extended to heterogeneous networks, with supply chain networks being used as an example.
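A minimal sketch of a fitness-based attachment process of the kind analysed above: each new node links to existing nodes with probability proportional to their fitness (not their degree). The network size, fitness distribution and number of links per new node are arbitrary choices for illustration.

```python
import random

def grow_fitness_network(n_nodes, fitness, m=1, seed=42):
    """Grow a network where each new node attaches to m existing nodes chosen
    with probability proportional to node fitness."""
    rng = random.Random(seed)
    edges = [(0, 1)]                          # start from a single seed edge
    nodes = [0, 1]
    for new in range(2, n_nodes):
        weights = [fitness[v] for v in nodes]
        targets = set()
        while len(targets) < min(m, len(nodes)):
            targets.add(rng.choices(nodes, weights=weights, k=1)[0])
        edges += [(new, t) for t in targets]
        nodes.append(new)
    return edges

fit_rng = random.Random(0)
fitness = [fit_rng.uniform(0.1, 1.0) for _ in range(200)]
edges = grow_fitness_network(200, fitness, m=2)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
hub = max(degree, key=degree.get)
# Fitter nodes should, on average, end up with higher degree.
print("highest-degree node", hub, "has fitness", round(fitness[hub], 2))
```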
New geomorphic data on the active Taiwan orogen: A multisource approach
NASA Technical Reports Server (NTRS)
Deffontaines, B.; Lee, J.-C.; Angelier, J.; Carvalho, J.; Rudant, J.-P.
1994-01-01
A multisource and multiscale approach to Taiwan morphotectonics combines different complementary geomorphic analyses based on a new digital elevation model (DEM), side-looking airborne radar (SLAR), and satellite (SPOT) imagery, aerial photographs, and control from independent field data. This analysis enables us not only to present an integrated geomorphic description of the Taiwan orogen but also to highlight some new geodynamic aspects. Well-known, major geological structures such as the Longitudinal Valley, Lishan, Pingtung, and the Foothills fault zones are of course clearly recognized, but numerous, previously unrecognized structures appear distributed within different regions of Taiwan. For instance, transfer fault zones within the Western Foothills and the Central Range are identified based on analyses of lineaments and general morphology. In many cases, the existence of geomorphic features identified in general images is supported by the results of geological field analyses carried out independently. In turn, the field analyses of structures and mechanisms at some sites provide a key for interpreting similar geomorphic features in other areas. Examples are the conjugate pattern of strike-slip faults within the Central Range and the oblique fold-and-thrust pattern of the Coastal Range. Furthermore, neotectonic and morphological analyses (drainage and erosional surfaces) have been combined in order to obtain a more comprehensive description and interpretation of neotectonic features in Taiwan, such as for the Longitudinal Valley Fault. Next, at a more general scale, numerical processing of digital elevation models, resulting in average topography, summit level or base level maps, allows identification of major features related to the dynamics of uplift and erosion and estimates of erosion balance. Finally, a preliminary morphotectonic sketch map of Taiwan, combining information from all the sources listed above, is presented.
How Confident can we be in Flood Risk Assessments?
NASA Astrophysics Data System (ADS)
Merz, B.
2017-12-01
Flood risk management should be based on risk analyses quantifying the risk and its reduction for different risk reduction strategies. However, validating risk estimates by comparing model simulations with past observations is hardly possible, since the assessment typically encompasses extreme events and their impacts that have not been observed before. Hence, risk analyses are strongly based on assumptions and expert judgement. This situation opens the door for cognitive biases, such as 'illusion of certainty', 'overconfidence' or 'recency bias'. Such biases operate specifically in complex situations with many factors involved, when uncertainty is high and events are probabilistic, or when close learning feedback loops are missing - aspects that all apply to risk analyses. This contribution discusses how confident we can be in flood risk assessments, and reflects on more rigorous approaches towards their validation.
Tsao, C C; Liou, J U; Wen, P H; Peng, C C; Liu, T S
2013-01-01
Aim: To develop analytical models and analyse the stress distribution and flexibility of nickel–titanium (NiTi) instruments subject to bending forces. Methodology: The analytical method was used to analyse the behaviour of NiTi instruments under bending forces. Two NiTi instruments (RaCe and Mani NRT) with different cross-sections and geometries were considered. Analytical results were derived using Euler–Bernoulli nonlinear differential equations that took into account the screw pitch variation of these NiTi instruments. In addition, nonlinear deformation analyses based on the analytical model and on a nonlinear finite element analysis were carried out, with numerical results obtained using the finite element method. Results: According to the analytical results, the maximum curvature of the instrument occurs near the instrument tip. The finite element analysis revealed that the position of maximum von Mises stress was also near the instrument tip. Therefore, the proposed analytical model can be used to predict the position of maximum curvature in the instrument, where fracture may occur. The analytical and numerical results were compatible. Conclusion: The proposed analytical model was validated by numerical results in analysing the bending deformation of NiTi instruments and is useful in the design and analysis of such instruments. The proposed theoretical model is effective in studying the flexibility of NiTi instruments and, compared with the finite element method, can deal conveniently and effectively with the bending behaviour of rotary NiTi endodontic instruments. PMID:23173762
NASA Astrophysics Data System (ADS)
Baek, Hamin; Schwarz, Christina V.
2015-04-01
In the past decade, reform efforts in science education have increasingly attended to engaging students in scientific practices such as scientific modeling. Engaging students in scientific modeling can help them develop their epistemologies by allowing them to attend to the roles of mechanism and empirical evidence when constructing and revising models. In this article, we present our in-depth case study of how two fifth graders—Brian and Joon—who were students in a public school classroom located in a Midwestern state shifted their epistemologies in modeling as they participated in the enactment of a technologically enhanced, model-based curriculum unit on evaporation and condensation. First, analyses of Brian's and Joon's models indicate that their epistemologies in modeling related to explanation and empirical evidence shifted productively throughout the unit. Additionally, while their initial and final epistemologies in modeling were similar, the pathways in which their epistemologies in modeling shifted differed. Next, analyses of the classroom activities illustrate how various components of the learning ecology including technological tools, the teacher's scaffolding remarks, and students' collective activities and conversations, were marshaled in the service of the two students' shifting epistemologies in modeling. These findings suggest a nuanced view of individual learners' engagement in scientific modeling, their epistemological shifts in the practice, and the roles of technology and other components of a modeling-oriented learning environment for such shifts.
Stratonovitch, Pierre; Elias, Jan; Denholm, Ian; Slater, Russell; Semenov, Mikhail A.
2014-01-01
Preventing a pest population from damaging an agricultural crop and, at the same time, preventing the development of pesticide resistance is a major challenge in crop protection. Understanding how farming practices and environmental factors interact with pest characteristics to influence the spread of resistance is a difficult and complex task. It is extremely challenging to investigate such interactions experimentally at realistic spatial and temporal scales. Mathematical modelling and computer simulation have, therefore, been used to analyse resistance evolution and to evaluate potential resistance management tactics. Of the many modelling approaches available, individual-based modelling of a pest population offers most flexibility to include and analyse numerous factors and their interactions. Here, a pollen beetle (Meligethes aeneus) population was modelled as an aggregate of individual insects inhabiting a spatially heterogeneous landscape. The development of the pest and host crop (oilseed rape) was driven by climatic variables. The agricultural land of the landscape was managed by farmers applying a specific rotation and crop protection strategy. The evolution of a single resistance allele to the pyrethroid lambda cyhalothrin was analysed for different combinations of crop management practices and for a recessive, intermediate and dominant resistance allele. While the spread of a recessive resistance allele was severely constrained, intermediate or dominant resistance alleles showed a similar response to the management regime imposed. Calendar treatments applied irrespective of pest density accelerated the development of resistance compared to ones applied in response to prescribed pest density thresholds. A greater proportion of spring-sown oilseed rape was also found to increase the speed of resistance as it increased the period of insecticide exposure. Our study demonstrates the flexibility and power of an individual-based model to simulate how farming practices affect pest population dynamics, and the consequent impact of different control strategies on the risk and speed of resistance development. PMID:25531104
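The pollen beetle model above is far richer than can be reproduced here, but a stripped-down individual-based sketch of a single resistance allele spreading under repeated insecticide treatment conveys the core mechanism, with dominance controlling heterozygote survival. All parameter values are invented for illustration and are not the paper's.

```python
import random

def simulate_resistance(generations=30, pop_size=5000, p0=0.001,
                        dominance=0.5, s_susceptible=0.05, s_resistant=0.9, seed=1):
    """Track resistance-allele frequency in a randomly mating population sprayed
    every generation. RS heterozygote survival is interpolated between SS and RR
    according to the dominance of the R allele."""
    rng = random.Random(seed)
    survival = {0: s_susceptible,                                            # SS
                1: s_susceptible + dominance * (s_resistant - s_susceptible),  # RS
                2: s_resistant}                                              # RR
    # each individual is stored as its number of R alleles (0, 1 or 2)
    pop = [sum(rng.random() < p0 for _ in range(2)) for _ in range(pop_size)]
    freqs = []
    for _ in range(generations):
        survivors = [g for g in pop if rng.random() < survival[g]]
        if not survivors:
            break
        pop = []                      # random mating, constant carrying capacity
        while len(pop) < pop_size:
            mum, dad = rng.choice(survivors), rng.choice(survivors)
            child = (rng.random() < mum / 2) + (rng.random() < dad / 2)
            pop.append(child)
        freqs.append(sum(pop) / (2 * len(pop)))
    return freqs

freqs = simulate_resistance(dominance=1.0)   # a dominant R allele spreads fastest
print([round(f, 3) for f in freqs[::5]])
```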
Mathematical model of unmanned aerial vehicle used for endurance autonomous monitoring
NASA Astrophysics Data System (ADS)
Chelaru, Teodor-Viorel; Chelaru, Adrian
2014-12-01
The purpose of this paper is to present some aspects of the control system of an unmanned aerial vehicle (UAV) used for local observation, surveillance and monitoring of an area of interest. The calculus methodology allows numerical simulation of the UAV's evolution in bad atmospheric conditions using a nonlinear model, as well as a linear one for obtaining the guidance commands. The UAV model presented has six degrees of freedom (DOF) and an autonomous control system. This theoretical development allows us to build the stability, command and control matrices and, finally, to analyse the stability of autonomous UAV flight. A robust guidance system, based on uncoupled states, is evaluated for different flight conditions and the results are presented. The flight parameters and guidance are also analysed.
Appraisal of ground water for irrigation in the Little Falls area, Morrison County, Minnesota
Helgesen, John O.
1973-01-01
Possible future response to pumping was studied through electric analog analyses by stressing the modeled aquifer system in accordance with areal variations in expected well yields. The model interpretation indicates most of the sustained pumpage would be obtained from intercepted base flow and evapotranspiration. Simulated withdrawals totaling 18,000 acre-feet of water per year for 10 years resulted in little adverse effect on the aquifer system. Simulated larger withdrawals, assumed to represent denser well spacing, caused greater depletion of aquifer storage, streamflow, and lake volumes, excessively so in some areas. Results of model analyses provide a guide for ground-water development by identifying the capability of all parts of the aquifer system to support sustained pumping for irrigation.
Radiation doses and neutron irradiation effects on human cells based on calculations
NASA Astrophysics Data System (ADS)
Radojevic, B. B.; Cukavac, M.; Jovanovic, D.
The main aim of our paper is to follow the influence of neutron radiation on materials, as well as one of the possible therapeutic applications of fast neutrons, namely their influence on carcinoma cells of difficult geometries in the human body. Interactions between neutrons and human tissue cells are analysed here. The light nuclei of hydrogen, nitrogen, carbon and oxygen are the main constituents of human cells, and different nuclear models are usually used to represent the interactions of nuclear particles with these elements. Some of the most widely used pre-equilibrium nuclear models are the intranuclear cascade model (ICN), the Harp-Miller-Berne model (HMB), the geometry-dependent hybrid model (GDH) and the exciton model (EM). In this paper, the primary energy spectra of the secondary particles (neutrons, protons and gammas) emitted from these interactions, together with the corresponding integral cross sections, are calculated based on the exciton model (EM). The total emission cross section is the sum of the emissions at all energy stages. The spectra obtained for reactions of the type (n, n'), (n, p) and (n, γ), for incident neutron energies in the interval from 3 MeV up to 30 MeV, are also analysed. Some results of the calculations are presented here.
NASA Astrophysics Data System (ADS)
Clifford, Corey; Kimber, Mark
2017-11-01
Over the last 30 years, an industry-wide shift within the nuclear community has led to increased utilization of computational fluid dynamics (CFD) to supplement nuclear reactor safety analyses. One such area that is of particular interest to the nuclear community, specifically to those performing loss-of-flow accident (LOFA) analyses for next-generation very-high temperature reactors (VHTR), is the capacity of current computational models to predict heat transfer across a wide range of buoyancy conditions. In the present investigation, a critical evaluation of Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) turbulence modeling techniques is conducted based on CFD validation data collected from the Rotatable Buoyancy Tunnel (RoBuT) at Utah State University. Four different experimental flow conditions are investigated: (1) buoyancy-aided forced convection; (2) buoyancy-opposed forced convection; (3) buoyancy-aided mixed convection; (4) buoyancy-opposed mixed convection. Overall, good agreement is found for both forced convection-dominated scenarios, but an overly-diffusive prediction of the normal Reynolds stress is observed for the RANS-based turbulence models. Low-Reynolds number RANS models perform adequately for mixed convection, while higher-order RANS approaches underestimate the influence of buoyancy on the production of turbulence.
Estimating population diversity with CatchAll
Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.
2012-01-01
Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246
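CatchAll itself implements parametric finite-mixture and regression-based estimators; as a flavour of the simpler, non-parametric side of richness estimation from the same 'frequency count' input, here is the classic Chao1 estimator (this is not CatchAll's own algorithm, and the counts are hypothetical).

```python
def chao1(frequency_counts):
    """Chao1 lower-bound richness estimate from frequency-count data:
    frequency_counts[k] = number of species observed exactly k times."""
    s_obs = sum(frequency_counts.values())
    f1 = frequency_counts.get(1, 0)           # singletons
    f2 = frequency_counts.get(2, 0)           # doubletons
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2          # bias-corrected form when f2 == 0

# Hypothetical sample: 40 singletons, 15 doubletons, 8 tripletons, ...
counts = {1: 40, 2: 15, 3: 8, 4: 5, 5: 3, 8: 2, 20: 1}
print("observed species:", sum(counts.values()))
print("Chao1 estimate:  ", round(chao1(counts), 1))
```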
A Gibbs sampler for Bayesian analysis of site-occupancy data
Dorazio, Robert M.; Rodriguez, Daniel Taylor
2012-01-01
1. A Bayesian analysis of site-occupancy data containing covariates of species occurrence and species detection probabilities is usually completed using Markov chain Monte Carlo methods in conjunction with software programs that can implement those methods for any statistical model, not just site-occupancy models. Although these software programs are quite flexible, considerable experience is often required to specify a model and to initialize the Markov chain so that summaries of the posterior distribution can be estimated efficiently and accurately. 2. As an alternative to these programs, we develop a Gibbs sampler for Bayesian analysis of site-occupancy data that include covariates of species occurrence and species detection probabilities. This Gibbs sampler is based on a class of site-occupancy models in which probabilities of species occurrence and detection are specified as probit-regression functions of site- and survey-specific covariate measurements. 3. To illustrate the Gibbs sampler, we analyse site-occupancy data of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly species in Switzerland. Our analysis includes a comparison of results based on Bayesian and classical (non-Bayesian) methods of inference. We also provide code (based on the R software program) for conducting Bayesian and classical analyses of site-occupancy data.
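A deliberately simplified Gibbs sampler for a site-occupancy model without covariates (constant ψ and p with Beta priors, rather than the probit-regression formulation of the paper), to show the latent-occupancy update at the heart of such samplers. The simulated data and priors are assumptions for illustration.

```python
import numpy as np

def occupancy_gibbs(y, J, n_iter=2000, seed=0):
    """Gibbs sampler for a constant-psi, constant-p occupancy model.
    y[i] = number of detections at site i out of J survey visits.
    Latent z[i] = 1 if site i is occupied. Beta(1,1) priors on psi and p."""
    rng = np.random.default_rng(seed)
    n = len(y)
    z = (y > 0).astype(int)                   # sites with detections are occupied
    psi, p = 0.5, 0.5
    draws = []
    for _ in range(n_iter):
        # z-update: only sites with no detections are uncertain
        pr_occ = psi * (1 - p) ** J
        pr_z1 = pr_occ / (pr_occ + (1 - psi))
        undetected = (y == 0)
        z[undetected] = rng.random(undetected.sum()) < pr_z1
        # conjugate Beta updates for psi and p
        psi = rng.beta(1 + z.sum(), 1 + n - z.sum())
        occ = z == 1
        p = rng.beta(1 + y[occ].sum(), 1 + (J - y[occ]).sum())
        draws.append((psi, p))
    return np.array(draws)

# Simulated data: 100 sites, true psi = 0.6, p = 0.3, J = 4 visits.
rng = np.random.default_rng(1)
z_true = rng.random(100) < 0.6
y = rng.binomial(4, 0.3 * z_true)
draws = occupancy_gibbs(y, J=4)
print("posterior means (psi, p):", draws[500:].mean(axis=0).round(2))
```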
NASA Astrophysics Data System (ADS)
Haneberg, W. C.; Gurung, N.
2016-12-01
The village of Laprak, located in the Gorkha District of western Nepal, was built on a large colluvium landslide about 10 km from the epicenter of the 25 April 2015 M 7.8 Nepal earthquake. Recent episodic movement began during a wet period in 1999 and continued in at least 2002, 2006, and 2007, destroying 24 homes, removing 23 hectares of land from agricultural production, and claiming 1 life. Reconnaissance mapping, soil sampling and testing, and slope stability analyses undertaken before the 2015 earthquake suggested that the hillside should be stable under dry conditions, unstable to marginally stable under static wet conditions, and wholly unstable under wet seismic conditions. Most of the buildings in Laprak, which were predominantly of dry fitted stone masonry, were destroyed by Intensity IX shaking during the 2015 earthquake. Interpretation of remotely sensed imagery and published photographs shows new landslide features; hence, some downslope movement occurred but the landslide did not mobilize into a long run-out flow. Monte Carlo simulations based upon a pseudostatic infinite slope model and constrained by reasonable distributions of soil shear strength, pore pressure, and slope angle from earlier work and seismic coefficients based upon the observed Intensity IX shaking (and inferred PGA) yield high probabilities of failure for steep portions of the slope above and below the village but moderate probabilities of failure for the more gentle portion of the slope upon which most of the village was constructed. In retrospect, the seismic coefficient selected for the pre-earthquake analysis proved to be remarkably prescient. Similar results were obtained using a first-order, second-moment (FOSM) approach that is convenient for GIS based regional analyses. Predictions of permanent displacement made using a variety of published empirical formulae based upon sliding block analyses range from about 10 cm to about 200 cm, also broadly consistent with the observed earthquake effects. Thus, at least in the case of Laprak, the conceptually simple infinite slope models used in many GIS based regional analyses appear to provide robust results if reasonable ranges of soil strength, pore pressure, slope geometry, and seismic loading are used.
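A sketch of the kind of pseudostatic infinite-slope Monte Carlo analysis described above. One common form of the factor-of-safety expression is used; all parameter distributions and values are placeholders, not the study's constraints.

```python
import numpy as np

def pseudostatic_fs(c, phi_deg, gamma, z, slope_deg, m, kh, gamma_w=9.81):
    """Factor of safety for an infinite slope with horizontal seismic coefficient
    kh and pore pressure from a water table at relative height m above the slide
    plane. One common formulation; other variants exist."""
    b = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    u = gamma_w * m * z * np.cos(b) ** 2                     # pore pressure
    resist = c + (gamma * z * np.cos(b) ** 2
                  - kh * gamma * z * np.sin(b) * np.cos(b) - u) * np.tan(phi)
    drive = gamma * z * np.sin(b) * np.cos(b) + kh * gamma * z * np.cos(b) ** 2
    return resist / drive

rng = np.random.default_rng(0)
n = 100_000
fs = pseudostatic_fs(
    c=rng.normal(8.0, 2.0, n).clip(0.1),        # cohesion, kPa (placeholder)
    phi_deg=rng.normal(30.0, 3.0, n),           # friction angle, degrees
    gamma=19.0,                                 # unit weight, kN/m3
    z=rng.uniform(2.0, 6.0, n),                 # failure depth, m
    slope_deg=rng.normal(32.0, 4.0, n),         # slope angle, degrees
    m=rng.uniform(0.0, 1.0, n),                 # water table ratio
    kh=0.3)                                     # assumed seismic coefficient
print("probability of failure:", (fs < 1.0).mean().round(3))
```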
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
A public health decision support system model using reasoning methods.
Mera, Maritza; González, Carolina; Blobel, Bernd
2015-01-01
Public health programs must be based on the real health needs of the population. However, the design of efficient and effective public health programs is subject to availability of information that can allow users to identify, at the right time, the health issues that require special attention. The objective of this paper is to propose a case-based reasoning model for the support of decision-making in public health. The model integrates a decision-making process and case-based reasoning, reusing past experiences for promptly identifying new population health priorities. A prototype implementation of the model was performed, deploying the case-based reasoning framework jColibri. The proposed model contributes to solve problems found today when designing public health programs in Colombia. Current programs are developed under uncertain environments, as the underlying analyses are carried out on the basis of outdated and unreliable data.
Task Delegation Based Access Control Models for Workflow Systems
NASA Astrophysics Data System (ADS)
Gaaloul, Khaled; Charoy, François
e-Government organisations are facilitated and conducted using workflow management systems. Role-based access control (RBAC) is recognised as an efficient access control model for large organisations. The application of RBAC in workflow systems cannot, however, grant permissions to users dynamically while business processes are being executed. We currently observe a move away from predefined strict workflow modelling towards approaches supporting flexibility on the organisational level. One specific approach is that of task delegation. Task delegation is a mechanism that supports organisational flexibility, and ensures delegation of authority in access control systems. In this paper, we propose a Task-oriented Access Control (TAC) model based on RBAC to address these requirements. We aim to reason about task from organisational perspectives and resources perspectives to analyse and specify authorisation constraints. Moreover, we present a fine grained access control protocol to support delegation based on the TAC model.
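A toy illustration (not the authors' TAC model) of role-based permission checking extended with task delegation: a delegation grants the delegatee access to a specific task only, and only a user who may perform the task can delegate it. Role names and tasks are invented.

```python
from dataclasses import dataclass, field

ROLE_PERMS = {"clerk": {"task:register"},
              "officer": {"task:approve", "task:register"}}

@dataclass
class AccessControl:
    user_roles: dict                                   # user -> set of roles
    delegations: set = field(default_factory=set)      # (delegatee, task) pairs

    def delegate(self, delegator, delegatee, task):
        """A delegator may delegate only tasks it is itself allowed to perform."""
        if self.allowed(delegator, task):
            self.delegations.add((delegatee, task))

    def allowed(self, user, task):
        role_ok = any(task in ROLE_PERMS.get(r, set())
                      for r in self.user_roles.get(user, set()))
        return role_ok or (user, task) in self.delegations

ac = AccessControl({"alice": {"officer"}, "bob": {"clerk"}})
print(ac.allowed("bob", "task:approve"))       # False: not granted by the clerk role
ac.delegate("alice", "bob", "task:approve")    # alice delegates the approval task
print(ac.allowed("bob", "task:approve"))       # True: granted via delegation
```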
Numerical framework for the modeling of electrokinetic flows
NASA Astrophysics Data System (ADS)
Deshpande, Manish; Ghaddar, Chahid; Gilbert, John R.; St. John, Pamela M.; Woudenberg, Timothy M.; Connell, Charles R.; Molho, Joshua; Herr, Amy; Mungal, Godfrey; Kenny, Thomas W.
1998-09-01
This paper presents a numerical framework for design-based analyses of electrokinetic flow in interconnects. Electrokinetic effects, which can be broadly divided into electrophoresis and electroosmosis, are of importance in providing a transport mechanism in microfluidic devices for both pumping and separation. Models for the electrokinetic effects can be derived and coupled to the fluid dynamic equations through appropriate source terms. In the design of practical microdevices, however, accurate coupling of the electrokinetic effects requires the knowledge of several material and physical parameters, such as the diffusivity and the mobility of the solute in the solvent. Additionally wall-based effects such as chemical binding sites might exist that affect the flow patterns. In this paper, we address some of these issues by describing a synergistic numerical/experimental process to extract the parameters required. Experiments were conducted to provide the numerical simulations with a mechanism to extract these parameters based on quantitative comparisons with each other. These parameters were then applied in predicting further experiments to validate the process. As part of this research, we have created NetFlow, a tool for micro-fluid analyses. The tool can be validated and applied in existing technologies by first creating test structures to extract representations of the physical phenomena in the device, and then applying them in the design analyses to predict correct behavior.
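For orientation, the standard Helmholtz-Smoluchowski expressions for electroosmotic and electrophoretic velocities that such solvers couple to the flow equations, evaluated for placeholder parameter values (not values extracted in the paper).

```python
# Helmholtz-Smoluchowski estimates of electroosmotic and electrophoretic
# velocities; signs follow the usual zeta-potential convention.
eps = 80 * 8.854e-12        # permittivity of water, F/m
mu = 1.0e-3                 # dynamic viscosity, Pa.s
E = 2.0e4                   # applied electric field, V/m
zeta_wall = -0.05           # channel-wall zeta potential, V
zeta_particle = -0.03       # solute/particle zeta potential, V

u_eo = -eps * zeta_wall * E / mu        # electroosmotic slip velocity, m/s
u_ep = eps * zeta_particle * E / mu     # electrophoretic velocity, m/s
print(f"electroosmotic: {u_eo*1e3:.2f} mm/s, electrophoretic: {u_ep*1e3:.2f} mm/s")
```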
The available data on the pharmacokinetics of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in animals and humans have been thoroughly reviewed in literature. It is evident based on these reviews and other analyses that three distinctive features of TCDD play important roles in dete...
ERIC Educational Resources Information Center
Hwang, Gwo-Jen; Kuo, Fan-Ray
2015-01-01
Web-based problem-solving, a compound ability of critical thinking, creative thinking, reasoning thinking and information-searching abilities, has been recognised as an important competence for elementary school students. Some researchers have reported the possible correlations between problem-solving competence and information searching ability;…
An Approach Based on Social Network Analysis Applied to a Collaborative Learning Experience
ERIC Educational Resources Information Center
Claros, Iván; Cobos, Ruth; Collazos, César A.
2016-01-01
The Social Network Analysis (SNA) techniques allow modelling and analysing the interaction among individuals based on their attributes and relationships. This approach has been used by several researchers in order to measure the social processes in collaborative learning experiences. But oftentimes such measures were calculated at the final state…
ERIC Educational Resources Information Center
Zwick, Rebecca; Lenaburg, Lubella
2009-01-01
In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…
ERIC Educational Resources Information Center
Ball, B. Hunter; Brewer, Gene A.
2018-01-01
The present study implemented an individual differences approach in conjunction with response time (RT) variability and distribution modeling techniques to better characterize the cognitive control dynamics underlying ongoing task cost (i.e., slowing) and cue detection in event-based prospective memory (PM). Three experiments assessed the relation…
A dynamical system of deposit and loan volumes based on the Lotka-Volterra model
NASA Astrophysics Data System (ADS)
Sumarti, N.; Nurfitriyana, R.; Nurwenda, W.
2014-02-01
In this research, we propose a dynamical system of the deposit and loan volumes of a bank using a predator-prey paradigm, where the predator is the loan volume and the prey is the deposit volume. The existence of loans depends on the existence of deposits, because the bank allocates the loan volume from a portion of the deposit volume. The dynamical systems constructed are a simple model, a model with a Michaelis-Menten response and a model with the reserve requirement. The equilibria of the systems are analysed for stability based on the linearised systems.
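A numerical sketch of a predator-prey style deposit/loan system of the general form described (the simple model, without the Michaelis-Menten or reserve-requirement variants); the exact equations and coefficients here are illustrative, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

def deposit_loan(t, y, a=0.08, b=0.002, c=0.05, d=0.001):
    """Prey D (deposits) grows on its own and is drawn down by lending;
    predator L (loans) decays without deposits and grows by drawing on them."""
    D, L = y
    return [a * D - b * D * L, -c * L + d * D * L]

sol = solve_ivp(deposit_loan, (0, 200), [100.0, 20.0], dense_output=True)
t = np.linspace(0, 200, 5)
print(np.round(sol.sol(t), 1))     # cyclical deposit/loan volumes over time
```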
Assessing the Performance of a Computer-Based Policy Model of HIV and AIDS
Rydzak, Chara E.; Cotich, Kara L.; Sax, Paul E.; Hsu, Heather E.; Wang, Bingxia; Losina, Elena; Freedberg, Kenneth A.; Weinstein, Milton C.; Goldie, Sue J.
2010-01-01
Background: Model-based analyses, conducted within a decision analytic framework, provide a systematic way to combine information about the natural history of disease and effectiveness of clinical management strategies with demographic and epidemiological characteristics of the population. Challenges with disease-specific modeling include the need to identify influential assumptions and to assess the face validity and internal consistency of the model. Methods and Findings: We describe a series of exercises involved in adapting a computer-based simulation model of HIV disease to the Women's Interagency HIV Study (WIHS) cohort and assess model performance as we re-parameterized the model to address policy questions in the U.S. relevant to HIV-infected women using data from the WIHS. Empiric calibration targets included 24-month survival curves stratified by treatment status and CD4 cell count. The most influential assumptions in untreated women included chronic HIV-associated mortality following an opportunistic infection, and in treated women, the 'clinical effectiveness' of HAART and the ability of HAART to prevent HIV complications independent of virologic suppression. Good-fitting parameter sets required reductions in the clinical effectiveness of 1st and 2nd line HAART and improvements in 3rd and 4th line regimens. Projected rates of treatment regimen switching using the calibrated cohort-specific model closely approximated independent analyses published using data from the WIHS. Conclusions: The model demonstrated good internal consistency and face validity, and supported cohort heterogeneities that have been reported in the literature. Iterative assessment of model performance can provide information about the relative influence of uncertain assumptions and provide insight into heterogeneities within and between cohorts. Description of calibration exercises can enhance the transparency of disease-specific models. PMID:20844741
MGAS: a powerful tool for multivariate gene-based genome-wide association analysis.
Van der Sluis, Sophie; Dolan, Conor V; Li, Jiang; Song, Youqiang; Sham, Pak; Posthuma, Danielle; Li, Miao-Xin
2015-04-01
Standard genome-wide association studies, testing the association between one phenotype and a large number of single nucleotide polymorphisms (SNPs), are limited in two ways: (i) traits are often multivariate, and analysis of composite scores entails loss in statistical power, and (ii) gene-based analyses may be preferred, e.g. to decrease the multiple testing problem. Here we present a new method, multivariate gene-based association test by extended Simes procedure (MGAS), that allows gene-based testing of multivariate phenotypes in unrelated individuals. Through extensive simulation, we show that under most trait-generating genotype-phenotype models, MGAS has superior statistical power to detect associated genes compared with gene-based analyses of univariate phenotypic composite scores (i.e. GATES, multiple regression), and multivariate analysis of variance (MANOVA). Re-analysis of metabolic data revealed 32 False Discovery Rate controlled genome-wide significant genes, and 12 regions harboring multiple genes; of these 44 regions, 30 were not reported in the original analysis. MGAS allows researchers to conduct their multivariate gene-based analyses efficiently, and without the loss of power that is often associated with an incorrectly specified genotype-phenotype model. MGAS is freely available in KGG v3.0 (http://statgenpro.psychiatry.hku.hk/limx/kgg/download.php). Access to the metabolic dataset can be requested at dbGaP (https://dbgap.ncbi.nlm.nih.gov/). The R-simulation code is available from http://ctglab.nl/people/sophie_van_der_sluis. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
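To make the Simes-style combination concrete, a sketch of the univariate GATES-type gene p-value that MGAS extends: SNP p-values are sorted and combined using effective numbers of independent tests estimated from the local LD (correlation) matrix. The effective-test formula used here is one common eigenvalue-based approximation, and the LD matrix and p-values are invented for the example.

```python
import numpy as np

def effective_tests(corr):
    """Effective number of independent tests from the eigenvalues of a
    p-value correlation matrix (one common approximation)."""
    lam = np.linalg.eigvalsh(corr)
    return len(lam) - np.sum((lam > 1) * (lam - 1))

def gates_gene_p(p_values, corr):
    """Simes-style gene-based p-value: min over j of me * p_(j) / me_(j)."""
    order = np.argsort(p_values)
    p_sorted = np.asarray(p_values)[order]
    corr_sorted = np.asarray(corr)[np.ix_(order, order)]
    me_all = effective_tests(corr_sorted)
    me_top = [effective_tests(corr_sorted[:j, :j])
              for j in range(1, len(p_sorted) + 1)]
    return float(np.min(me_all * p_sorted / np.array(me_top)))

# Hypothetical gene with 5 SNPs in moderate LD and their association p-values.
corr = np.array([[1.0, 0.6, 0.3, 0.1, 0.0],
                 [0.6, 1.0, 0.5, 0.2, 0.1],
                 [0.3, 0.5, 1.0, 0.4, 0.2],
                 [0.1, 0.2, 0.4, 1.0, 0.5],
                 [0.0, 0.1, 0.2, 0.5, 1.0]])
p = [0.002, 0.03, 0.20, 0.45, 0.81]
print("gene-level p:", round(gates_gene_p(p, corr), 4))
```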
Formal Analysis of Self-Efficacy in Job Interviewee’s Mental State Model
NASA Astrophysics Data System (ADS)
Ajoge, N. S.; Aziz, A. A.; Yusof, S. A. Mohd
2017-08-01
This paper presents a formal analysis approach for a self-efficacy model of an interviewee's mental state during a job interview session. Self-efficacy is a construct that has been hypothesised to combine with motivation and interviewee anxiety to define the interviewee's state. The conceptual model was built based on psychological theories and models related to self-efficacy. A number of well-known relations between events and the course of self-efficacy are summarized from the literature, and it is shown that the proposed model exhibits those patterns. In addition, this formal model has been mathematically analysed to find out which stable situations exist. Finally, it is pointed out how this model can be used in a software agent or robot-based platform. Such a platform can provide an interview coaching approach in which support is provided to users based on their individual mental state during interview sessions.
Nonlinear flight dynamics and stability of hovering model insects
Liang, Bin; Sun, Mao
2013-01-01
Current analyses on insect dynamic flight stability are based on linear theory and limited to small disturbance motions. However, insects' aerial environment is filled with swirling eddies and wind gusts, and large disturbances are common. Here, we numerically solve the equations of motion coupled with the Navier–Stokes equations to simulate the large disturbance motions and analyse the nonlinear flight dynamics of hovering model insects. We consider two representative model insects, a model hawkmoth (large size, low wingbeat frequency) and a model dronefly (small size, high wingbeat frequency). For small and large initial disturbances, the disturbance motion grows with time, and the insects tumble and never return to the equilibrium state; the hovering flight is inherently (passively) unstable. The instability is caused by a pitch moment produced by forward/backward motion and/or a roll moment produced by side motion of the insect. PMID:23697714
Homodyne detection of short-range Doppler radar using a forced oscillator model
NASA Astrophysics Data System (ADS)
Kittipute, Kunanon; Saratayon, Peerayudh; Srisook, Suthasin; Wardkein, Paramote
2017-03-01
This article presents homodyne detection in a self-oscillation system, represented by a short-range radar (SRR) circuit, which is analysed using a multi-time forced oscillator (MTFO) model. The MTFO model is based on a forced-oscillation perspective combined with signal and system theory, a second-order differential equation, and the multiple-time-variable technique. The same approach can also be applied to analyse the homodyne phenomenon in different kinds of oscillation systems, such as the self-oscillation system and the natural oscillation system driven by an external force. The free oscillation system forced by an external source is represented by a pendulum-with-oscillating-support experiment, while a modified Colpitts oscillator circuit in the UHF band, with a Doppler signal as input, represents the self-oscillation system. The MTFO model is verified against the experimental results, which are well in line with the theoretical analysis.
A transient model of the RL10A-3-3A rocket engine
NASA Technical Reports Server (NTRS)
Binder, Michael P.
1995-01-01
RL10A-3-3A rocket engines have served as the main propulsion system for Centaur upper stage vehicles since the early 1980's. This hydrogen/oxygen expander cycle engine continues to play a major role in the American launch industry. The Space Propulsion Technology Division at the NASA Lewis Research Center has created a computer model of the RL10 engine, based on detailed component analyses and available test data. This RL10 engine model can predict the performance of the engine over a wide range of operating conditions. The model may also be used to predict the effects of any proposed design changes and anticipated failure scenarios. In this paper, the results of the component analyses are discussed. Simulation results from the new system model are compared with engine test and flight data, including the start and shut-down transient characteristics.
Bounthavong, Mark; Pruitt, Larry D; Smolenski, Derek J; Gahm, Gregory A; Bansal, Aasthaa; Hansen, Ryan N
2018-02-01
Introduction: Home-based telebehavioural healthcare improves access to mental health care for patients restricted by travel burden. However, there is limited evidence assessing the economic value of home-based telebehavioural health care compared to in-person care. We sought to compare the economic impact of home-based telebehavioural health care and in-person care for depression among current and former US service members. Methods: We performed trial-based cost-minimisation and cost-utility analyses to assess the economic impact of home-based telebehavioural health care versus in-person behavioural care for depression. Our analyses focused on the payer perspective (Department of Defense and Department of Veterans Affairs) at three months. We also performed a scenario analysis where all patients possessed video-conferencing technology that was approved by these agencies. The cost-utility analysis evaluated the impact of different depression categories on the incremental cost-effectiveness ratio. One-way and probabilistic sensitivity analyses were performed to test the robustness of the model assumptions. Results: In the base case analysis the total direct cost of home-based telebehavioural health care was higher than in-person care (US$71,974 versus US$20,322). Assuming that patients possessed government-approved video-conferencing technology, home-based telebehavioural health care was less costly compared to in-person care (US$19,177 versus US$20,322). In one-way sensitivity analyses, the proportion of patients possessing personal computers was a major driver of direct costs. In the cost-utility analysis, home-based telebehavioural health care was dominant when patients possessed video-conferencing technology. Results from probabilistic sensitivity analyses did not differ substantially from base case results. Discussion: Home-based telebehavioural health care is dependent on the cost of supplying video-conferencing technology to patients but offers the opportunity to increase access to care. Health-care policies centred on implementation of home-based telebehavioural health care should ensure that these technologies are able to be successfully deployed on patients' existing technology.
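As a minimal illustration of the cost-utility arithmetic described above, the sketch below computes an incremental cost-effectiveness ratio and net monetary benefit. The cost figures echo the scenario reported in the abstract (patients already own approved video-conferencing equipment), while the QALY values and willingness-to-pay threshold are hypothetical.

```python
# Illustrative figures only (QALYs and threshold are not taken from the study)
cost_tele, cost_inperson = 19_177.0, 20_322.0    # scenario where video kit is already owned
qaly_tele, qaly_inperson = 0.185, 0.180          # hypothetical 3-month QALYs

d_cost = cost_tele - cost_inperson
d_qaly = qaly_tele - qaly_inperson

if d_cost <= 0 and d_qaly >= 0:
    print("telehealth dominates (cheaper and at least as effective)")
else:
    icer = d_cost / d_qaly
    print(f"ICER = {icer:,.0f} per QALY gained")

# Net monetary benefit at an assumed willingness-to-pay of $100,000 per QALY
wtp = 100_000.0
nmb = wtp * d_qaly - d_cost
print(f"incremental NMB = {nmb:,.0f}")
```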
The relative effect of noise at different times of day: An analysis of existing survey data
NASA Technical Reports Server (NTRS)
Fields, J. M.
1986-01-01
This report examines survey evidence on the relative impact of noise at different times of day and assesses the survey methodology which produces that evidence. Analyses of the regression of overall (24-hour) annoyance on noise levels in different time periods can provide direct estimates of the value of the parameters in human reaction models which are used in environmental noise indices such as LDN and CNEL. In this report these analyses are based on the original computer tapes containing the responses of 22,000 respondents from ten studies of response to noise in residential areas. The estimates derived from these analyses are found to be so inaccurate that they do not provide useful information for policy or scientific purposes. The possibility that the type of questionnaire item could be biasing the estimates of the time-of-day weightings is considered but not supported by the data. Two alternatives to the conventional noise reaction model (adjusted energy model) are considered but not supported by the data.
The relative effect of noise at different times of day: An analysis of existing survey data
NASA Astrophysics Data System (ADS)
Fields, J. M.
1986-04-01
This report examines survey evidence on the relative impact of noise at different times of day and assesses the survey methodology which produces that evidence. Analyses of the regression of overall (24-hour) annoyance on noise levels in different time periods can provide direct estimates of the value of the parameters in human reaction models which are used in environmental noise indices such as LDN and CNEL. In this report these analyses are based on the original computer tapes containing the responses of 22,000 respondents from ten studies of response to noise in residential areas. The estimates derived from these analyses are found to be so inaccurate that they do not provide useful information for policy or scientific purposes. The possibility that the type of questionnaire item could be biasing the estimates of the time-of-day weightings is considered but not supported by the data. Two alternatives to the conventional noise reaction model (adjusted energy model) are considered but not supported by the data.
A Risk-Based Approach for Aerothermal/TPS Analysis and Testing
NASA Technical Reports Server (NTRS)
Wright, Michael J.; Grinstead, Jay H.; Bose, Deepak
2007-01-01
The current status of aerothermal and thermal protection system modeling for civilian entry missions is reviewed. For most such missions, the accuracy of our simulations is limited not by the tools and processes currently employed, but rather by reducible deficiencies in the underlying physical models. Improving the accuracy of and reducing the uncertainties in these models will enable a greater understanding of the system level impacts of a particular thermal protection system and of the system operation and risk over the operational life of the system. A strategic plan will be laid out by which key modeling deficiencies can be identified via mission-specific gap analysis. Once these gaps have been identified, the driving component uncertainties are determined via sensitivity analyses. A Monte-Carlo based methodology is presented for physics-based probabilistic uncertainty analysis of aerothermodynamics and thermal protection system material response modeling. These data are then used to advocate for and plan focused testing aimed at reducing key uncertainties. The results of these tests are used to validate or modify existing physical models. Concurrently, a testing methodology is outlined for thermal protection materials. The proposed approach is based on using the results of uncertainty/sensitivity analyses discussed above to tailor ground testing so as to best identify and quantify system performance and risk drivers. A key component of this testing is understanding the relationship between the test and flight environments. No existing ground test facility can simultaneously replicate all aspects of the flight environment, and therefore good models for traceability to flight are critical to ensure a low risk, high reliability thermal protection system design. Finally, the role of flight testing in the overall thermal protection system development strategy is discussed.
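The Monte Carlo uncertainty methodology described above can be illustrated with a generic sketch: sample uncertain aerothermal inputs, push them through a surrogate response, and rank the drivers by correlation with the output. The input names, distributions, and the toy response surface are assumptions for illustration, not the models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical uncertain inputs for a surrogate peak-heat-flux model
catalytic_eff = rng.uniform(0.5, 1.0, n)             # surface catalytic efficiency
transition_Re = rng.normal(3.0e5, 5.0e4, n)          # transition Reynolds number
recession_rate = rng.lognormal(np.log(0.2), 0.3, n)  # mm/s, ablator recession

def peak_heat_flux(cat, re_t, rec):
    """Toy surrogate response surface: NOT a physical model."""
    return 45.0 + 30.0 * cat + 8.0e-5 * (3.5e5 - re_t) + 12.0 * rec

q = peak_heat_flux(catalytic_eff, transition_Re, recession_rate)
print("mean = %.1f W/cm^2, 95th percentile = %.1f W/cm^2" % (q.mean(), np.percentile(q, 95)))

# crude sensitivity ranking via correlation of each input with the output
for name, x in [("catalytic_eff", catalytic_eff), ("transition_Re", transition_Re),
                ("recession_rate", recession_rate)]:
    print(name, round(np.corrcoef(x, q)[0, 1], 2))
```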
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi
The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes), as well as their hydrologic indices/attributes (external hydrologic factors) separately, using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Koppen climate classification systems (K-Class) are discussed. Within each S-class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferrable. This classification study provides guidance on identifiable parameters, and on parameterization and inverse model design for CLM, but the methodology is applicable to other models. Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
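A compact sketch of the classification step described above, assuming a basins-by-parameters matrix of sensitivity indices is already available; the matrix here is random placeholder data, and the numbers of components and clusters are arbitrary choices rather than values from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical matrix: rows = basins, columns = parameter sensitivity indices
S = rng.random((431, 12))

# Reduce the sensitivity patterns to a few principal components
scores = PCA(n_components=3).fit_transform(S - S.mean(axis=0))

# EM-based (Gaussian mixture) clustering into sensitivity classes ("S-classes")
gmm = GaussianMixture(n_components=4, random_state=0).fit(scores)
s_class = gmm.predict(scores)
print(np.bincount(s_class))  # number of basins assigned to each class
```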
Scripting MODFLOW Model Development Using Python and FloPy.
Bakker, M; Post, V; Langevin, C D; Hughes, J D; White, J T; Starn, J J; Fienen, M N
2016-09-01
Graphical user interfaces (GUIs) are commonly used to construct and postprocess numerical groundwater flow and transport models. Scripting model development with the programming language Python is presented here as an alternative approach. One advantage of Python is that there are many packages available to facilitate the model development process, including packages for plotting, array manipulation, optimization, and data analysis. For MODFLOW-based models, the FloPy package was developed by the authors to construct model input files, run the model, and read and plot simulation results. Use of Python with the available scientific packages and FloPy facilitates data exploration, alternative model evaluations, and model analyses that can be difficult to perform with GUIs. Furthermore, Python scripts are a complete, transparent, and repeatable record of the modeling process. The approach is introduced with a simple FloPy example to create and postprocess a MODFLOW model. A more complicated capture-fraction analysis with a real-world model is presented to demonstrate the types of analyses that can be performed using Python and FloPy. © 2016, National Ground Water Association.
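A minimal FloPy example in the spirit of the paper's introductory case: build, write, and run a single-layer MODFLOW-2005 model from a script. The grid dimensions, boundary heads, and hydraulic conductivity are invented for illustration, and a MODFLOW-2005 executable is assumed to be available on the system path.

```python
import numpy as np
import flopy

# Hypothetical single-layer confined aquifer with fixed heads on two sides
m = flopy.modflow.Modflow(modelname="demo", exe_name="mf2005")
dis = flopy.modflow.ModflowDis(m, nlay=1, nrow=21, ncol=21,
                               delr=100.0, delc=100.0, top=10.0, botm=0.0)

ibound = np.ones((1, 21, 21), dtype=int)
ibound[:, :, 0] = -1    # constant-head western boundary
ibound[:, :, -1] = -1   # constant-head eastern boundary
strt = np.full((1, 21, 21), 10.0)
strt[:, :, -1] = 5.0    # lower head on the east drives flow across the model

bas = flopy.modflow.ModflowBas(m, ibound=ibound, strt=strt)
lpf = flopy.modflow.ModflowLpf(m, hk=10.0)   # horizontal hydraulic conductivity
pcg = flopy.modflow.ModflowPcg(m)            # solver
oc = flopy.modflow.ModflowOc(m)              # output control

m.write_input()
success, _ = m.run_model(silent=True)        # requires mf2005 on the path
print("converged:", success)
```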
NASA Astrophysics Data System (ADS)
Wang, Wei; Zhong, Ming; Cheng, Ling; Jin, Lu; Shen, Si
2018-02-01
Against the background of building a global energy internet, forecasting and analysing the ratio of electric energy to terminal energy consumption has both theoretical and practical significance. This paper first analyses the factors influencing the ratio of electric energy to terminal energy and then uses a combination method to forecast and analyse the global proportion of electric energy. A cointegration model for the proportion of electric energy is then constructed using influencing factors such as the electricity price index, GDP, economic structure, energy use efficiency and total population level. Finally, a projection of the proportion of electric energy is obtained using a combination-forecasting model based on the multiple linear regression method, the trend analysis method, and the variance-covariance method. This projection describes the development trend of the proportion of electric energy over 2017-2050, and the proportion of electric energy in 2050 is analysed in detail using scenario analysis.
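The variance-covariance step of such a combination forecast can be sketched as follows for two component models: the weights minimise the variance of the combined forecast error, estimated from in-sample errors. The error series and the 2050 forecasts are hypothetical numbers, not values from the study.

```python
import numpy as np

# Hypothetical in-sample forecast errors of two individual models
e1 = np.array([0.8, -0.5, 1.1, -0.9, 0.4])   # multiple linear regression
e2 = np.array([1.5, -1.2, 0.9, -1.8, 1.1])   # trend analysis

# Minimum-variance weights for two forecasts:
#   w1 = (s2^2 - s12) / (s1^2 + s2^2 - 2*s12),  w2 = 1 - w1
s11, s22 = e1.var(ddof=1), e2.var(ddof=1)
s12 = np.cov(e1, e2, ddof=1)[0, 1]
w1 = (s22 - s12) / (s11 + s22 - 2 * s12)
w2 = 1.0 - w1

f1, f2 = 0.262, 0.281   # hypothetical 2050 forecasts of the electricity share
combined = w1 * f1 + w2 * f2
print(round(w1, 3), round(w2, 3), round(combined, 3))
```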
A design space exploration for control of Critical Quality Attributes of mAb.
Bhatia, Hemlata; Read, Erik; Agarabi, Cyrus; Brorson, Kurt; Lute, Scott; Yoon, Seongkyu
2016-10-15
A unique "design space (DSp) exploration strategy," defined as a function of four key scenarios, was successfully integrated and validated to enhance the DSp building exercise, by increasing the accuracy of analyses and interpretation of processed data. The four key scenarios, defining the strategy, were based on cumulative analyses of individual models developed for the Critical Quality Attributes (23 Glycan Profiles) considered for the study. The analyses of the CQA estimates and model performances were interpreted as (1) Inside Specification/Significant Model (2) Inside Specification/Non-significant Model (3) Outside Specification/Significant Model (4) Outside Specification/Non-significant Model. Each scenario was defined and illustrated through individual models of CQA aligning the description. The R(2), Q(2), Model Validity and Model Reproducibility estimates of G2, G2FaGbGN, G0 and G2FaG2, respectively, signified the four scenarios stated above. Through further optimizations, including the estimation of Edge of Failure and Set Point Analysis, wider and accurate DSps were created for each scenario, establishing critical functional relationship between Critical Process Parameters (CPPs) and Critical Quality Attributes (CQAs). A DSp provides the optimal region for systematic evaluation, mechanistic understanding and refining of a QbD approach. DSp exploration strategy will aid the critical process of consistently and reproducibly achieving predefined quality of a product throughout its lifecycle. Copyright © 2016 Elsevier B.V. All rights reserved.
High-resolution surface analysis for extended-range downscaling with limited-area atmospheric models
NASA Astrophysics Data System (ADS)
Separovic, Leo; Husain, Syed Zahid; Yu, Wei; Fernig, David
2014-12-01
High-resolution limited-area model (LAM) simulations are frequently employed to downscale coarse-resolution objective analyses over a specified area of the globe using high-resolution computational grids. When LAMs are integrated over extended time frames, from months to years, they are prone to deviations in land surface variables that can be harmful to the quality of the simulated near-surface fields. Nudging of the prognostic surface fields toward a reference-gridded data set is therefore devised in order to prevent the atmospheric model from diverging from the expected values. This paper presents a method to generate high-resolution analyses of land-surface variables, such as surface canopy temperature, soil moisture, and snow conditions, to be used for the relaxation of lower boundary conditions in extended-range LAM simulations. The proposed method is based on performing offline simulations with an external surface model, forced with the near-surface meteorological fields derived from short-range forecast, operational analyses, and observed temperatures and humidity. Results show that the outputs of the surface model obtained in the present study have potential to improve the near-surface atmospheric fields in extended-range LAM integrations.
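The relaxation of prognostic surface fields mentioned above is commonly written as a Newtonian nudging term added to the model tendency; the generic form is shown below, and the paper's exact implementation may differ.

```latex
\frac{\partial \psi}{\partial t} \;=\; F(\psi) \;+\; \frac{\psi_{\mathrm{ref}} - \psi}{\tau},
```

where ψ is a prognostic land-surface variable (e.g. canopy temperature or soil moisture), F(ψ) its ordinary model tendency, ψ_ref the reference gridded analysis produced by the offline surface simulations, and τ the relaxation time scale controlling how strongly the field is pulled toward the reference.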
A Bayesian model averaging approach with non-informative priors for cost-effectiveness analyses.
Conigliani, Caterina
2010-07-20
We consider the problem of assessing new and existing technologies for their cost-effectiveness in the case where data on both costs and effects are available from a clinical trial, and we address it by means of the cost-effectiveness acceptability curve. The main difficulty in these analyses is that cost data usually exhibit highly skewed and heavy-tailed distributions, so that it can be extremely difficult to produce realistic probabilistic models for the underlying population distribution. Here, in order to integrate the uncertainty about the model into the analysis of cost data and into cost-effectiveness analyses, we consider an approach based on Bayesian model averaging (BMA) in the particular case of weak prior information about the unknown parameters of the different models involved in the procedure. The main consequence of this assumption is that the marginal densities required by BMA are undetermined. However, in accordance with the theory of partial Bayes factors and in particular of fractional Bayes factors, we suggest replacing each marginal density with a ratio of integrals that can be efficiently computed via path sampling. Copyright (c) 2010 John Wiley & Sons, Ltd.
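In general form, BMA combines model-specific inferences with posterior model weights, and the fractional-Bayes-factor device replaces each undefined marginal density by a ratio of integrals; a sketch of these two ingredients (with b the training fraction) is:

```latex
p(\Delta \mid y) \;=\; \sum_{k} p(\Delta \mid y, M_k)\, p(M_k \mid y),
\qquad
q_k(b) \;=\;
\frac{\int p(y \mid \theta_k, M_k)\,\pi(\theta_k)\,d\theta_k}
     {\int p(y \mid \theta_k, M_k)^{\,b}\,\pi(\theta_k)\,d\theta_k},
```

where Δ is the incremental cost or effect of interest and q_k(b) stands in for the marginal density of model M_k, so that posterior model weights are taken proportional to q_k(b) p(M_k). This is the general structure rather than the paper's exact notation; both integrals in q_k(b) are the quantities computed via path sampling.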
Schlesinger, Torsten; Nagel, Siegfried
2013-01-01
This article analyses the conditions influencing volunteering in sports clubs. It focuses not only on individual characteristics of volunteers but also on the corresponding structural conditions of sports clubs. It proposes a model of voluntary work in sports clubs based on economic behaviour theory. The influences of both the individual and context levels on the decision to engage in voluntary work are estimated in different multilevel models. Results of these multilevel analyses indicate that volunteering is not just an outcome of individual characteristics such as lower workloads, higher income, children belonging to the sports club, longer club memberships, or a strong commitment to the club. It is also influenced by club-specific structural conditions; volunteering is more probable in rural sports clubs whereas growth-oriented goals in clubs have a destabilising effect.
Brien, Dianne L.; Reid, Mark E.
2008-01-01
In Seattle, Washington, deep-seated landslides on bluffs along Puget Sound have historically caused extensive damage to land and structures. These large failures are controlled by three-dimensional (3-D) variations in strength and pore-water pressures. We assess the slope stability of part of southwestern Seattle using a 3-D limit-equilibrium analysis coupled with a 3-D groundwater flow model. Our analyses use a high-resolution digital elevation model (DEM) combined with assignment of strength and hydraulic properties based on geologic units. The hydrogeology of the Seattle area consists of a layer of permeable glacial outwash sand that overlies less permeable glacial lacustrine silty clay. Using a 3-D groundwater model, MODFLOW-2000, we simulate a water table above the less permeable units and calibrate the model to observed conditions. The simulated pore-pressure distribution is then used in a 3-D slope-stability analysis, SCOOPS, to quantify the stability of the coastal bluffs. For wet winter conditions, our analyses predict that the least stable areas are steep hillslopes above Puget Sound, where pore pressures are elevated in the outwash sand. Groundwater flow converges in coastal reentrants, resulting in elevated pore pressures and destabilization of slopes. Regions predicted to be least stable include the areas in or adjacent to three mapped historically active deep-seated landslides. The results of our 3-D analyses differ significantly from a slope map or results from one-dimensional (1-D) analyses.
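For contrast with the 3-D SCOOPS analysis, the conventional one-dimensional (infinite-slope) factor of safety that such studies are typically compared against has the standard form below; it is shown only for reference and is not the method used in the paper.

```latex
FS \;=\; \frac{c' + \left(\gamma z \cos^2\beta - u\right)\tan\phi'}
             {\gamma z \sin\beta \cos\beta},
```

where c' and φ' are the effective cohesion and friction angle, γ the soil unit weight, z the depth of the potential slip surface, β the slope angle, and u the pore-water pressure at the slip surface.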
Habitat features and predictive habitat modeling for the Colorado chipmunk in southern New Mexico
Rivieccio, M.; Thompson, B.C.; Gould, W.R.; Boykin, K.G.
2003-01-01
Two subspecies of Colorado chipmunk (state threatened and federal species of concern) occur in southern New Mexico: Tamias quadrivittatus australis in the Organ Mountains and T. q. oscuraensis in the Oscura Mountains. We developed a GIS model of potentially suitable habitat based on vegetation and elevation features, evaluated site classifications of the GIS model, and determined vegetation and terrain features associated with chipmunk occurrence. We compared GIS model classifications with actual vegetation and elevation features measured at 37 sites. At 60 sites we measured 18 habitat variables regarding slope, aspect, tree species, shrub species, and ground cover. We used logistic regression to analyze habitat variables associated with chipmunk presence/absence. All (100%) 37 sample sites (28 predicted suitable, 9 predicted unsuitable) were classified correctly by the GIS model regarding elevation and vegetation. For 28 sites predicted suitable by the GIS model, 18 sites (64%) appeared visually suitable based on habitat variables selected from logistic regression analyses, of which 10 sites (36%) were specifically predicted as suitable habitat via logistic regression. We detected chipmunks at 70% of sites deemed suitable via the logistic regression models. Shrub cover, tree density, plant proximity, presence of logs, and presence of rock outcrop were retained in the logistic model for the Oscura Mountains; litter, shrub cover, and grass cover were retained in the logistic model for the Organ Mountains. Evaluation of predictive models illustrates the need for multi-stage analyses to best judge performance. Microhabitat analyses indicate prospective needs for different management strategies between the subspecies. Sensitivities of each population of the Colorado chipmunk to natural and prescribed fire suggest that partial burnings of areas inhabited by Colorado chipmunks in southern New Mexico may be beneficial. These partial burnings may later help avoid a fire that could substantially reduce habitat of chipmunks over a mountain range.
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is one of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical analyses in such cases. The Bayesian estimation analyses allow us to combine past knowledge or experience in the form of an a priori distribution with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of the Bayesian estimation analyses to competing risk systems. The cases are limited to the models with independent causes of failure by using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed by using Bayesian and maximum likelihood analyses. The simulation results show that a change in one true parameter value relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of the maximum likelihood. The sensitivity analyses show some amount of sensitivity over the shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimate.
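For a system with k independent Weibull failure causes, as assumed in the study, the observed failure time is the minimum of the cause-specific times, and the system survival and hazard functions take the standard forms:

```latex
S(t) \;=\; \prod_{j=1}^{k} \exp\!\left[-\left(\frac{t}{\eta_j}\right)^{\beta_j}\right],
\qquad
h(t) \;=\; \sum_{j=1}^{k} \frac{\beta_j}{\eta_j}\left(\frac{t}{\eta_j}\right)^{\beta_j - 1},
```

where β_j and η_j are the shape and scale parameters of cause j. Bayesian or maximum likelihood estimation then targets these parameters from (possibly censored) failure times together with the observed causes of failure.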
Inference of mantle viscosity for depth resolutions of GIA observations
NASA Astrophysics Data System (ADS)
Nakada, Masao; Okuno, Jun'ichi
2016-11-01
Inference of the mantle viscosity from observations for glacial isostatic adjustment (GIA) process has usually been conducted through the analyses based on the simple three-layer viscosity model characterized by lithospheric thickness, upper- and lower-mantle viscosities. Here, we examine the viscosity structures for the simple three-layer viscosity model and also for the two-layer lower-mantle viscosity model defined by viscosities of η670,D (670-D km depth) and ηD,2891 (D-2891 km depth) with D-values of 1191, 1691 and 2191 km. The upper-mantle rheological parameters for the two-layer lower-mantle viscosity model are the same as those for the simple three-layer one. For the simple three-layer viscosity model, rate of change of degree-two zonal harmonics of geopotential due to GIA process (GIA-induced J̇2) of -(6.0-6.5) × 10⁻¹¹ yr⁻¹ provides two permissible viscosity solutions for the lower mantle, (7-20) × 10²¹ and (5-9) × 10²² Pa s, and the analyses with observational constraints of the J̇2 and Last Glacial Maximum (LGM) sea levels at Barbados and Bonaparte Gulf indicate (5-9) × 10²² Pa s for the lower mantle. However, the analyses for the J̇2 based on the two-layer lower-mantle viscosity model only require a viscosity layer higher than (5-10) × 10²¹ Pa s for a depth above the core-mantle boundary (CMB), in which the value of (5-10) × 10²¹ Pa s corresponds to the solution of (7-20) × 10²¹ Pa s for the simple three-layer one. Moreover, the analyses with the J̇2 and LGM sea level constraints for the two-layer lower-mantle viscosity model indicate two viscosity solutions: η670,1191 > 3 × 10²¹ and η1191,2891 ~ (5-10) × 10²² Pa s, and η670,1691 > 10²² and η1691,2891 ~ (5-10) × 10²² Pa s. The inferred upper-mantle viscosity for such solutions is (1-4) × 10²⁰ Pa s similar to the estimate for the simple three-layer viscosity model. That is, these analyses require a high viscosity layer of (5-10) × 10²² Pa s at least in the deep mantle, and suggest that the GIA-based lower-mantle viscosity structure should be treated carefully in discussing the mantle dynamics related to the viscosity jump at ~670 km depth. We also preliminarily put additional constraints on these viscosity solutions by examining typical relative sea level (RSL) changes used to infer the lower-mantle viscosity. The viscosity solution inferred from the far-field RSL changes in the Australian region is consistent with those for the J̇2 and LGM sea levels, and the analyses for RSL changes at Southport and Bermuda in the intermediate region for the North American ice sheets suggest the solution of η670,D > 10²², ηD,2891 ~ (5-10) × 10²² Pa s (D = 1191 or 1691 km) and upper-mantle viscosity higher than 6 × 10²⁰ Pa s.
Optimization of Nd: YAG Laser Marking of Alumina Ceramic Using RSM And ANN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peter, Josephine; Doloi, B.; Bhattacharyya, B.
The present research paper deals with artificial neural network (ANN)- and response surface methodology (RSM)-based mathematical modeling and an optimization analysis of marking characteristics on alumina ceramic. The experiments have been planned and carried out based on Design of Experiments (DOE). The paper also analyses the influence of the major laser marking process parameters, and the optimal combination of laser marking process parameter settings has been obtained. The output of the RSM optimization is validated through experimentation and the ANN predictive model. A good agreement is observed between the results based on the ANN predictive model and the actual experimental observations.
Boundary Layer Depth In Coastal Regions
NASA Astrophysics Data System (ADS)
Porson, A.; Schayes, G.
The results of earlier studies of sea breeze simulations have shown that this is a relevant feature of the planetary boundary layer that atmospheric models still struggle to diagnose properly. Based on observations made during the ESCOMPTE campaign over the Mediterranean Sea, different CBL and SBL height estimation procedures have been tested with a meso-scale model, TVM. The aim was to compare the critical points of boundary layer height determination computed from the turbulent kinetic energy profile with some other standard evaluations. Moreover, these results have been analysed with different mixing length formulations. The sensitivity to the formulation is also analysed with a simple coastal configuration.
Viscous and thermal modelling of thermoplastic composites forming process
NASA Astrophysics Data System (ADS)
Guzman, Eduardo; Liang, Biao; Hamila, Nahiene; Boisse, Philippe
2016-10-01
Thermoforming thermoplastic prepregs is a fast manufacturing process. It is suitable for automotive composite parts manufacturing. The simulation of thermoplastic prepreg forming is achieved by alternate thermal and mechanical analyses. The thermal properties are obtained from a mesoscopic analysis and a homogenization procedure. The forming simulation is based on a viscous-hyperelastic approach. The thermal simulations define the coefficients of the mechanical model that depend on the temperature. The forming simulations modify the boundary conditions and the internal geometry of the thermal analyses. The comparison of the simulation with an experimental thermoforming of a part representative of automotive applications shows the efficiency of the approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Jinwoo; Lee, Jewon; Song, Hanjung
2011-03-15
This paper presents a fully integrated circuit implementation of an operational amplifier (op-amp) based chaotic neuron model with a bipolar output function, experimental measurements, and analyses of its chaotic behavior. The proposed chaotic neuron model integrated circuit consists of several op-amps, sample and hold circuits, a nonlinear function block for chaotic signal generation, a clock generator, a nonlinear output function, etc. Based on the HSPICE (circuit program) simulation results, approximated empirical equations for analyses were formulated. Then, the chaotic dynamical responses such as bifurcation diagrams, time series, and Lyapunov exponent were calculated using these empirical equations. In addition, we performed simulations of two chaotic neuron systems with four synapses to confirm the neural network connections and obtained normal chaotic neuron behavior, such as the internal-state bifurcation diagram as a function of synaptic weight variation. The proposed circuit was fabricated using a 0.8-µm single poly complementary metal-oxide semiconductor technology. Measurements of the fabricated single chaotic neuron with ±2.5 V power supplies and a 10 kHz sampling clock frequency were carried out and compared with the simulated results.
Exploration of cellular reaction systems.
Kirkilionis, Markus
2010-01-01
We discuss and review different ways to map cellular components and their temporal interaction with other such components to different non-spatially explicit mathematical models. The essential choices made in the literature are between discrete and continuous state spaces, between rule and event-based state updates and between deterministic and stochastic series of such updates. The temporal modelling of cellular regulatory networks (dynamic network theory) is compared with static network approaches in two first introductory sections on general network modelling. We concentrate next on deterministic rate-based dynamic regulatory networks and their derivation. In the derivation, we include methods from multiscale analysis and also look at structured large particles, here called macromolecular machines. It is clear that mass-action systems and their derivatives, i.e. networks based on enzyme kinetics, play the most dominant role in the literature. The tools to analyse cellular reaction networks are without doubt most complete for mass-action systems. We devote a long section at the end of the review to make a comprehensive review of related tools and mathematical methods. The emphasis is to show how cellular reaction networks can be analysed with the help of different associated graphs and the dissection into modules, i.e. sub-networks.
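As a minimal illustration of the deterministic mass-action networks that dominate this literature, the sketch below integrates a reversible binding reaction with SciPy; the rate constants and initial concentrations are arbitrary placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical mass-action network:  A + B -> C (k1),  C -> A + B (k2)
k1, k2 = 0.5, 0.1

def rhs(t, y):
    a, b, c = y
    v_f = k1 * a * b     # forward mass-action rate
    v_r = k2 * c         # reverse rate
    return [-v_f + v_r, -v_f + v_r, v_f - v_r]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.8, 0.0], dense_output=True)
print(sol.y[:, -1])  # near-equilibrium concentrations of A, B, C
```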
Le Teuff, Gwenaël; Abrahamowicz, Michal; Bolard, Philippe; Quantin, Catherine
2005-12-30
In many prognostic studies focusing on mortality of persons affected by a particular disease, the cause of death of individual patients is not recorded. In such situations, the conventional survival analytical methods, such as the Cox proportional hazards regression model, do not allow one to discriminate the effects of prognostic factors on disease-specific mortality from their effects on all-causes mortality. In the last decade, the relative survival approach has been proposed to deal with the analyses involving population-based cancer registries, where the problem of missing information on the cause of death is very common. However, some questions regarding the ability of the relative survival methods to accurately discriminate between the two sources of mortality remain open. In order to systematically assess the performance of the relative survival model proposed by Esteve et al., and to quantify its potential advantages over Cox model analyses, we carried out a series of simulation experiments, based on the population-based colon cancer registry in the French region of Burgundy. Simulations showed a systematic bias induced by the 'crude' conventional Cox model analyses when individual causes of death are unknown. In simulations where only about 10 per cent of patients died of causes other than colon cancer, the Cox model over-estimated the effects of male gender and oldest age category by about 17 and 13 per cent, respectively, with the coverage rate of the 95 per cent CI for the latter estimate as low as 65 per cent. In contrast, the effect of higher cancer stages was under-estimated by 8-28 per cent. In contrast to the crude survival analysis, the relative survival model largely reduced such problems and handled well even such challenging tasks as separating the opposite effects of the same variable on cancer-related versus other-causes mortality. Specifically, in all the cases discussed above, the relative bias in the estimates from the Esteve et al. model was always below 10 per cent, with the coverage rates above 81 per cent. Copyright 2005 John Wiley & Sons, Ltd.
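The additive excess-hazard formulation underlying relative survival approaches of this kind can be written, in general form, as:

```latex
\lambda_{\mathrm{obs}}(t \mid \mathbf{x})
  \;=\;
  \lambda_{\mathrm{pop}}(t)
  \;+\;
  \lambda_{\mathrm{exc}}(t)\,\exp\!\left(\boldsymbol{\beta}^{\top}\mathbf{x}\right),
```

where λ_pop(t) is the expected background mortality taken from population life tables and the second term is the excess (disease-related) hazard with covariate effects β, which are interpreted as effects on cancer-specific mortality even when individual causes of death are unknown. This is the general additive form rather than the exact parameterisation of the Esteve et al. model.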
Goldstein, Daniel A.; Chen, Qiushi; Ayer, Turgay; Howard, David H.; Lipscomb, Joseph; El-Rayes, Bassel F.; Flowers, Christopher R.
2015-01-01
Purpose: The addition of bevacizumab to fluorouracil-based chemotherapy is a standard of care for previously untreated metastatic colorectal cancer. Continuation of bevacizumab beyond progression is an accepted standard of care based on a 1.4-month increase in median overall survival observed in a randomized trial. No United States–based cost-effectiveness modeling analyses are currently available addressing the use of bevacizumab in metastatic colorectal cancer. Our objective was to determine the cost effectiveness of bevacizumab in the first-line setting and when continued beyond progression from the perspective of US payers. Methods: We developed two Markov models to compare the cost and effectiveness of fluorouracil, leucovorin, and oxaliplatin with or without bevacizumab in the first-line treatment and subsequent fluorouracil, leucovorin, and irinotecan with or without bevacizumab in the second-line treatment of metastatic colorectal cancer. Model robustness was addressed by univariable and probabilistic sensitivity analyses. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Results: Using bevacizumab in first-line therapy provided an additional 0.10 QALYs (0.14 life-years) at a cost of $59,361. The incremental cost-effectiveness ratio was $571,240 per QALY. Continuing bevacizumab beyond progression provided an additional 0.11 QALYs (0.16 life-years) at a cost of $39,209. The incremental cost-effectiveness ratio was $364,083 per QALY. In univariable sensitivity analyses, the variables with the greatest influence on the incremental cost-effectiveness ratio were bevacizumab cost, overall survival, and utility. Conclusion: Bevacizumab provides minimal incremental benefit at high incremental cost per QALY in both the first- and second-line settings of metastatic colorectal cancer treatment. PMID:25691669
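A stripped-down version of such a Markov cohort model is sketched below: a cohort is pushed through monthly transition probabilities while discounted costs and QALYs accumulate. All transition probabilities, costs, and utilities are illustrative placeholders, not the values used in the published analysis.

```python
import numpy as np

# Hypothetical 3-state model: progression-free (PF), progressed (PD), dead (D)
# monthly transition probabilities (rows sum to 1); illustrative values only
P = np.array([[0.92, 0.06, 0.02],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])
cost_per_month = np.array([8000.0, 4000.0, 0.0])   # drug + care costs by state
utility = np.array([0.80, 0.60, 0.0])              # annual quality-of-life weights

state = np.array([1.0, 0.0, 0.0])                  # whole cohort starts in PF
disc = (1 + 0.03) ** (-1.0 / 12)                   # 3% annual discount rate, monthly
total_cost = total_qaly = 0.0
for month in range(120):                           # 10-year horizon
    d = disc ** month
    total_cost += d * state @ cost_per_month
    total_qaly += d * state @ utility / 12.0       # convert annual utility to monthly QALYs
    state = state @ P                              # advance the cohort one cycle
print(round(total_cost), round(total_qaly, 2))
```

Running the same trace for a comparator strategy and differencing the totals yields the incremental cost and QALYs that feed the ICER.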
A multi-GPU real-time dose simulation software framework for lung radiotherapy.
Santhanam, A P; Min, Y; Neelakkantan, H; Papp, N; Meeks, S L; Kupelian, P A
2012-09-01
Medical simulation frameworks facilitate both the preoperative and postoperative analysis of the patient's pathophysical condition. Of particular importance is the simulation of radiation dose delivery for real-time radiotherapy monitoring and retrospective analyses of the patient's treatment. In this paper, a software framework tailored for the development of simulation-based real-time radiation dose monitoring medical applications is discussed. A multi-GPU-based computational framework coupled with inter-process communication methods is introduced for simulating the radiation dose delivery on a deformable 3D volumetric lung model and its real-time visualization. The model deformation and the corresponding dose calculation are allocated among the GPUs in a task-specific manner and executed in a pipelined fashion. Radiation dose calculations are computed on two different GPU hardware architectures. The integration of this computational framework with a front-end software layer and back-end patient database repository is also discussed. Real-time simulation of the dose delivered is achieved once every 120 ms using the proposed framework. With a linear increase in the number of GPU cores, the computational time of the simulation was linearly decreased. The inter-process communication time also improved with an increase in the hardware memory. Variations in the delivered dose and computational speedup for variations in the data dimensions are investigated using D70 and D90 as well as gEUD as metrics for a set of 14 patients. Computational speed-up increased with an increase in the beam dimensions when compared with CPU-based commercial software, while the error in the dose calculation was <1%. Our analyses show that the framework applied to deformable lung model-based radiotherapy is an effective tool for performing both real-time and retrospective analyses.
Crop status evaluations and yield predictions
NASA Technical Reports Server (NTRS)
Haun, J. R.
1976-01-01
One phase of the large area crop inventory project is presented. Wheat yield models based on the input of environmental variables potentially obtainable through the use of space remote sensing were developed and demonstrated. By the use of a unique method for visually qualifying daily plant development and subsequent multifactor computer analyses, it was possible to develop practical models for predicting crop development and yield. Development of wheat yield prediction models was based on the discovery that morphological changes in plants can be detected and quantified on a daily basis, and that this change during a portion of the season is proportional to yield.
NASA Astrophysics Data System (ADS)
Baehr, J.; Fröhlich, K.; Botzet, M.; Domeisen, D. I. V.; Kornblueh, L.; Notz, D.; Piontek, R.; Pohlmann, H.; Tietsche, S.; Müller, W. A.
2015-05-01
A seasonal forecast system is presented, based on the global coupled climate model MPI-ESM as used for CMIP5 simulations. We describe the initialisation of the system and analyse its predictive skill for surface temperature. The presented system is initialised in the atmospheric, oceanic, and sea ice component of the model from reanalysis/observations with full field nudging in all three components. For the initialisation of the ensemble, bred vectors with a vertically varying norm are implemented in the ocean component to generate initial perturbations. In a set of ensemble hindcast simulations, starting each May and November between 1982 and 2010, we analyse the predictive skill. Bias-corrected ensemble forecasts for each start date reproduce the observed surface temperature anomalies at 2-4 months lead time, particularly in the tropics. Niño3.4 sea surface temperature anomalies show a small root-mean-square error and predictive skill up to 6 months. Away from the tropics, predictive skill is mostly limited to the ocean, and to regions which are strongly influenced by ENSO teleconnections. In summary, the presented seasonal prediction system based on a coupled climate model shows predictive skill for surface temperature at seasonal time scales comparable to other seasonal prediction systems using different underlying models and initialisation strategies. As the same model underlying our seasonal prediction system—with a different initialisation—is presently also used for decadal predictions, this is an important step towards seamless seasonal-to-decadal climate predictions.
Biases and power for groups comparison on subjective health measurements.
Hamel, Jean-François; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Roquelaure, Yves; Sébille, Véronique
2012-01-01
Subjective health measurements are increasingly used in clinical research, particularly for patient group comparisons. Two main types of analytical strategies can be used for such data: so-called classical test theory (CTT), relying on observed scores, and models coming from Item Response Theory (IRT), relying on a response model relating the item responses to a latent parameter, often called the latent trait. Whether IRT or CTT would be the most appropriate method to compare two independent groups of patients on a patient-reported outcomes measurement remains unknown and was investigated using simulations. For CTT-based analyses, group comparison was performed using a t-test on the scores. For IRT-based analyses, several methods were compared, according to whether the Rasch model was considered with random effects or with fixed effects, and whether the group effect was included as a covariate or not. Individual latent trait values were estimated using either a deterministic method or stochastic approaches. Latent traits were then compared with a t-test. Finally, a two-step method was performed to compare the latent trait distributions, and a Wald test was performed to test the group effect in the Rasch model including group covariates. The only unbiased IRT-based method was the Wald test of the group covariate, performed on the random effects Rasch model. This model displayed the highest observed power, which was similar to the power using the score t-test. These results need to be extended to the case frequently encountered in practice where data are missing and possibly informative.
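The unbiased comparison identified above corresponds to a Rasch model with a group covariate on the latent trait; in its usual random-effects form it can be written as:

```latex
P\!\left(X_{ij}=1 \mid \theta_i, \delta_j\right)
  \;=\;
  \frac{\exp\!\left(\theta_i - \delta_j\right)}{1 + \exp\!\left(\theta_i - \delta_j\right)},
\qquad
\theta_i \;\sim\; \mathcal{N}\!\left(\beta\, g_i,\ \sigma^2\right),
```

where X_ij is the response of patient i to item j, δ_j the item difficulty, θ_i the latent trait, and g_i the group indicator; the Wald test of H0: β = 0 is the group-covariate test reported as unbiased. This is the generic formulation rather than the paper's exact notation.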
NASA Astrophysics Data System (ADS)
Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing
2016-09-01
The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms, based on ray direction deviation in light refraction, is proposed in this paper. The model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of the different errors are analyzed through this model. An accuracy study of the model shows that the prediction deviation of the pointing error for each error source is less than 4.1×10⁻⁵° when the error amplitude is 0.1°. Detailed analyses of errors indicate that different error sources affect the pointing accuracy to varying degrees, and that the major error source is the incident beam deviation. Prism tilting has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilting in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotational angles equals 0 or π, and relatively small when the difference equals π/2. These results suggest that our analysis can help to uncover the error distribution and aid in measurement calibration of Risley-prism systems.
Active lifestyles in older adults: an integrated predictive model of physical activity and exercise
Galli, Federica; Chirico, Andrea; Mallia, Luca; Girelli, Laura; De Laurentiis, Michelino; Lucidi, Fabio; Giordano, Antonio; Botti, Gerardo
2018-01-01
Physical activity and exercise have been identified as behaviors to preserve physical and mental health in older adults. The aim of the present study was to test the Integrated Behavior Change model in exercise and physical activity behaviors. The study evaluated two different samples of older adults: the first engaged in an exercise class, the second doing spontaneous physical activity. The key analyses relied on Variance-Based Structural Modeling, which was performed by means of the WARP PLS 6.0 statistical software. The analyses estimated the Integrated Behavior Change model in predicting exercise and physical activity, in a longitudinal design across two months of assessment. The tested models exhibited a good fit with the observed data derived from the model focusing on exercise, as well as with those derived from the model focusing on physical activity. Results also showed some effects and relations specific to each behavioral context. Results may form a starting point for future experimental and intervention research. PMID:29875997
Terminal Area Productivity Airport Wind Analysis and Chicago O'Hare Model Description
NASA Technical Reports Server (NTRS)
Hemm, Robert; Shapiro, Gerald
1998-01-01
This paper describes two results from a continuing effort to provide accurate cost-benefit analyses of the NASA Terminal Area Productivity (TAP) program technologies. Previous tasks have developed airport capacity and delay models and completed preliminary cost benefit estimates for TAP technologies at 10 U.S. airports. This task covers two improvements to the capacity and delay models. The first improvement is the completion of a detailed model set for the Chicago O'Hare (ORD) airport. Previous analyses used a more general model to estimate the benefits for ORD. This paper contains a description of the model details with results corresponding to current conditions. The second improvement is the development of specific wind speed and direction criteria for use in the delay models to predict when the Aircraft Vortex Spacing System (AVOSS) will allow use of reduced landing separations. This paper includes a description of the criteria and an estimate of AVOSS utility for 10 airports based on analysis of 35 years of weather data.
Estimation of Effective Directional Strength of Single Walled Wavy CNT Reinforced Nanocomposite
NASA Astrophysics Data System (ADS)
Bhowmik, Krishnendu; Kumar, Pranav; Khutia, Niloy; Chowdhury, Amit Roy
2018-03-01
In the present work, a composite reinforced with single-walled wavy carbon nanotubes has been studied to predict the effective directional strength of the nanocomposite. The effect of waviness on the overall Young’s modulus of the composite has been analysed using a three-dimensional finite element model. The waviness pattern of the carbon nanotube is modelled as a periodic cosine function. Both long (continuous) and short (discontinuous) carbon nanotubes are idealized as solid annular tubes. Short carbon nanotubes are modelled with hemispherical caps at both ends. Representative Volume Element models have been developed with different waviness, height fractions, volume fractions and modulus ratios of carbon nanotubes. Consequently, a micromechanics-based analytical model has been formulated to derive the effective reinforcing modulus of wavy carbon nanotubes. In these models, the wavy single-walled carbon nanotubes are considered to be aligned along the longitudinal axis of the Representative Volume Element model. Results obtained from the finite element analyses are compared with the analytical model and found to be in good agreement.
Modeling Student Test-Taking Motivation in the Context of an Adaptive Achievement Test
ERIC Educational Resources Information Center
Wise, Steven L.; Kingsbury, G. Gage
2016-01-01
This study examined the utility of response time-based analyses in understanding the behavior of unmotivated test takers. For the data from an adaptive achievement test, patterns of observed rapid-guessing behavior and item response accuracy were compared to the behavior expected under several types of models that have been proposed to represent…
Principal Components Analyses of the MMPI-2 PSY-5 Scales. Identification of Facet Subscales
ERIC Educational Resources Information Center
Arnau, Randolph C.; Handel, Richard W.; Archer, Robert P.
2005-01-01
The Personality Psychopathology Five (PSY-5) is a five-factor personality trait model designed for assessing personality pathology using quantitative dimensions. Harkness, McNulty, and Ben-Porath developed Minnesota Multiphasic Personality Inventory-2 (MMPI-2) scales based on the PSY-5 model, and these scales were recently added to the standard…
Optimal Partitioning of a Data Set Based on the "p"-Median Model
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich
2008-01-01
Although the "K"-means algorithm for minimizing the within-cluster sums of squared deviations from cluster centroids is perhaps the most common method for applied cluster analyses, a variety of other criteria are available. The "p"-median model is an especially well-studied clustering problem that requires the selection of "p" objects to serve as…
The Industrial Sectors Integrated Solutions (ISIS) model for the pulp and paper sector is currently under development at the U.S. Environmental Protection Agency (EPA), and can be utilized to facilitate multi-pollutant sector-based analyses that are performed in conjunction with ...
John F. Hunt; Hongmei Gu
2006-01-01
The anisotropy of wood complicates solution of heat and mass transfer problems that require analyses be based on fundamental material properties of the wood structure. Most heat transfer models use average thermal properties across either the radial or tangential direction and do not differentiate the effects of cellular alignment, earlywood/latewood differences, or...
Integrating Vegetation Classification, Mapping, and Strategic Inventory for Forest Management
C. K. Brewer; R. Bush; D. Berglund; J. A. Barber; S. R. Brown
2006-01-01
Many of the analyses needed to address multiple resource issues are focused on vegetation pattern and process relationships and most rely on the data models produced from vegetation classification, mapping, and/or inventory. The Northern Region Vegetation Mapping Project (R1-VMP) data models are based on these three integrally related, yet separate processes. This...
Robert E. Keane; Matthew G. Rollins; Cecilia H. McNicoll; Russell A. Parsons
2002-01-01
Presented is a prototype of the Landscape Ecosystem Inventory System (LEIS), a system for creating maps of important landscape characteristics for natural resource planning. This system uses gradient-based field inventories coupled with gradient modeling remote sensing, ecosystem simulation, and statistical analyses to derive spatial data layers required for ecosystem...
Testing Structural Models of DSM-IV Symptoms of Common Forms of Child and Adolescent Psychopathology
ERIC Educational Resources Information Center
Lahey, Benjamin B.; Rathouz, Paul J.; Van Hulle, Carol; Urbano, Richard C.; Krueger, Robert F.; Applegate, Brooks; Garriock, Holly A.; Chapman, Derek A.; Waldman, Irwin D.
2008-01-01
Confirmatory factor analyses were conducted of "Diagnostic and Statistical Manual of Mental Disorders", Fourth Edition (DSM-IV) symptoms of common mental disorders derived from structured interviews of a representative sample of 4,049 twin children and adolescents and their adult caretakers. A dimensional model based on the assignment of symptoms…
Modeling Health Care Expenditures and Use.
Deb, Partha; Norton, Edward C
2018-04-01
Health care expenditures and use are challenging to model because these dependent variables typically have distributions that are skewed with a large mass at zero. In this article, we describe estimation and interpretation of the effects of a natural experiment using two classes of nonlinear statistical models: one for health care expenditures and the other for counts of health care use. We extend prior analyses to test the effect of the ACA's young adult expansion on three different outcomes: total health care expenditures, office-based visits, and emergency department visits. Modeling the outcomes with a two-part or hurdle model, instead of a single-equation model, reveals that the ACA policy increased the number of office-based visits but decreased emergency department visits and overall spending.
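A minimal two-part model sketch in Python/statsmodels, using simulated data: a logit for any use and a gamma GLM with log link for spending among users, combined into an unconditional mean. The data-generating values are invented, and the sketch ignores the difference-in-differences structure of the actual natural-experiment analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
policy = rng.integers(0, 2, n)              # hypothetical exposure indicator
X = sm.add_constant(policy.astype(float))

# Simulated expenditures with a mass at zero (purely illustrative)
any_use = rng.random(n) < 1 / (1 + np.exp(-(-0.4 + 0.3 * policy)))
spend = np.where(any_use, rng.gamma(2.0, 1500.0 * np.exp(0.1 * policy)), 0.0)

# Part 1: probability of any use (logit)
part1 = sm.Logit(any_use.astype(float), X).fit(disp=0)

# Part 2: level of spending among users (GLM, gamma family, log link;
# link class names may differ slightly across statsmodels versions)
pos = spend > 0
part2 = sm.GLM(spend[pos], X[pos],
               family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Combined (unconditional) mean spending by exposure status
for g in (0, 1):
    xg = np.array([[1.0, float(g)]])
    mean_g = part1.predict(xg)[0] * part2.predict(xg)[0]
    print(g, round(float(mean_g), 1))
```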
Modelling the Cost Effectiveness of Disease-Modifying Treatments for Multiple Sclerosis
Thompson, Joel P.; Abdolahi, Amir; Noyes, Katia
2013-01-01
Several cost-effectiveness models of disease-modifying treatments (DMTs) for multiple sclerosis (MS) have been developed for different populations and different countries. Vast differences in the approaches and discrepancies in the results give rise to heated discussions and limit the use of these models. Our main objective is to discuss the methodological challenges in modelling the cost effectiveness of treatments for MS. We conducted a review of published models to describe the approaches taken to date, to identify the key parameters that influence the cost effectiveness of DMTs, and to point out major areas of weakness and uncertainty. Thirty-six published models and analyses were identified. The greatest source of uncertainty is the absence of head-to-head randomized clinical trials. Modellers have used various techniques to compensate, including utilizing extension trials. The use of large observational cohorts in recent studies aids in identifying population-based, ‘real-world’ treatment effects. Major drivers of results include the time horizon modelled and DMT acquisition costs. Model endpoints must target either policy makers (using cost-utility analysis) or clinicians (conducting cost-effectiveness analyses). Lastly, the cost effectiveness of DMTs outside North America and Europe is currently unknown, with the lack of country-specific data as the major limiting factor. We suggest that limited data should not preclude analyses, as models may be built and updated in the future as data become available. Disclosure of modelling methods and assumptions could improve the transferability and applicability of models designed to reflect different healthcare systems. PMID:23640103
Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J
2013-05-01
Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
The Effect of Lake Temperatures and Emissions on Ozone Exposure in the Western Great Lakes Region
Jerome D. Fast; Warren E. Heilman
2003-01-01
A meteorological-chemical model with a 12-km horizontal grid spacing was used to simulate the evolution of ozone over the western Great Lakes region during a 30-day period in the summer of 1999. Lake temperatures in the model were based on analyses derived from daily satellite measurements. The model performance was evaluated using operational surface and upper-air...
A Methodological Review of US Budget-Impact Models for New Drugs.
Mauskopf, Josephine; Earnshaw, Stephanie
2016-11-01
A budget-impact analysis is required by many jurisdictions when adding a new drug to the formulary. However, previous reviews have indicated that adherence to methodological guidelines is variable. In this methodological review, we assess the extent to which US budget-impact analyses for new drugs use recommended practices. We describe recommended practice for seven key elements in the design of a budget-impact analysis. Targeted literature searches for US studies reporting estimates of the budget impact of a new drug were performed and we prepared a summary of how each study addressed the seven key elements. The primary finding from this review is that recommended practice is not followed in many budget-impact analyses. For example, we found that growth in the treated population size and/or changes in disease-related costs expected during the model time horizon for more effective treatments were not included in several analyses for chronic conditions. In addition, not all drug-related costs were captured in the majority of the models. Finally, for most studies, one-way sensitivity and scenario analyses were very limited, and the ranges used in one-way sensitivity analyses were frequently arbitrary percentages rather than being data driven. The conclusions from our review are that changes in population size, disease severity mix, and/or disease-related costs should be properly accounted for to avoid over- or underestimating the budget impact. Since each budget holder might have different perspectives and different values for many of the input parameters, it is also critical for published budget-impact analyses to include extensive sensitivity and scenario analyses based on realistic input values.
A proposed model for economic evaluations of major depressive disorder.
Haji Ali Afzali, Hossein; Karnon, Jonathan; Gray, Jodi
2012-08-01
In countries like the UK and Australia, the comparability of model-based analyses is an essential aspect of reimbursement decisions for new pharmaceuticals, medical services and technologies. Within disease areas, the use of models with alternative structures, types of modelling techniques and/or data sources for common parameters reduces the comparability of evaluations of alternative technologies for the same condition. The aim of this paper is to propose a decision analytic model to evaluate long-term costs and benefits of alternative management options in patients with depression. The structure of the proposed model is based on the natural history of depression and includes clinical events that are important from both clinical and economic perspectives. Considering its greater flexibility with respect to handling time, discrete event simulation (DES) is an appropriate simulation platform for modelling studies of depression. We argue that the proposed model can be used as a reference model in model-based studies of depression, improving the quality and comparability of studies.
Lecerf, Thierry; Canivez, Gary L
2018-06-01
Interpretation of the French Wechsler Intelligence Scale for Children-Fifth Edition (French WISC-V; Wechsler, 2016a) is based on a 5-factor model including Verbal Comprehension (VC), Visual Spatial (VS), Fluid Reasoning (FR), Working Memory (WM), and Processing Speed (PS). Evidence for the French WISC-V factorial structure was established exclusively through confirmatory factor analyses (CFAs). However, as recommended by Carroll (1995); Reise (2012), and Brown (2015), factorial structure should derive from both exploratory factor analysis (EFA) and CFA. The first goal of this study was to examine the factorial structure of the French WISC-V using EFA. The 15 French WISC-V primary and secondary subtest scaled scores intercorrelation matrix was used and factor extraction criteria suggested from 1 to 4 factors. To disentangle the contribution of first- and second-order factors, the Schmid and Leiman (1957) orthogonalization transformation (SLT) was applied. Overall, no EFA evidence for 5 factors was found. Results indicated that the g factor accounted for about 67% of the common variance and that the contributions of the first-order factors were weak (3.6 to 11.9%). CFA was used to test numerous alternative models. Results indicated that bifactor models produced better fit to these data than higher-order models. Consistent with previous studies, findings suggested dominance of the general intelligence factor and that users should thus emphasize the Full Scale IQ (FSIQ) when interpreting the French WISC-V. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Humphreys-Pereira, Danny A; Elling, Axel A
2014-01-01
Root-knot nematodes (Meloidogyne spp.) are among the most important plant pathogens. In this study, the mitochondrial (mt) genomes of the root-knot nematodes, M. chitwoodi and M. incognita were sequenced. PCR analyses suggest that both mt genomes are circular, with an estimated size of 19.7 and 18.6-19.1kb, respectively. The mt genomes each contain a large non-coding region with tandem repeats and the control region. The mt gene arrangement of M. chitwoodi and M. incognita is unlike that of other nematodes. Sequence alignments of the two Meloidogyne mt genomes showed three translocations; two in transfer RNAs and one in cox2. Compared with other nematode mt genomes, the gene arrangement of M. chitwoodi and M. incognita was most similar to Pratylenchus vulnus. Phylogenetic analyses (Maximum Likelihood and Bayesian inference) were conducted using 78 complete mt genomes of diverse nematode species. Analyses based on nucleotides and amino acids of the 12 protein-coding mt genes showed strong support for the monophyly of class Chromadorea, but only amino acid-based analyses supported the monophyly of class Enoplea. The suborder Spirurina was not monophyletic in any of the phylogenetic analyses, contradicting the Clade III model, which groups Ascaridomorpha, Spiruromorpha and Oxyuridomorpha based on the small subunit ribosomal RNA gene. Importantly, comparisons of mt gene arrangement and tree-based methods placed Meloidogyne as sister taxa of Pratylenchus, a migratory plant endoparasitic nematode, and not with the sedentary endoparasitic Heterodera. Thus, comparative analyses of mt genomes suggest that sedentary endoparasitism in Meloidogyne and Heterodera is based on convergent evolution. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Emoto, K.; Saito, T.; Shiomi, K.
2017-12-01
Short-period (<1 s) seismograms are strongly affected by small-scale (<10 km) heterogeneities in the lithosphere. In general, short-period seismograms are analysed using statistical methods that consider the interaction between seismic waves and randomly distributed small-scale heterogeneities. Statistical properties of the random heterogeneities have been estimated by analysing short-period seismograms. However, generally, the small-scale random heterogeneity is not taken into account for the modelling of long-period (>2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, a wavefield composed purely of body or surface waves, or the scattering properties required by general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.
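For reference, the reported heterogeneity spectrum can be evaluated directly; the short sketch below plugs in the estimated ε and a, with an illustrative wavenumber range chosen around the corner at m ≈ 1/a.

```python
import numpy as np

# Power spectral density of the random heterogeneity reported in the abstract:
# P(m) = 8*pi*eps^2*a^3 / (1 + a^2*m^2)^2, with eps = 0.05 and a = 3.1 km.
eps, a = 0.05, 3.1  # dimensionless fluctuation strength, correlation length in km

def P(m):
    """Spectrum as a function of wavenumber m (in 1/km)."""
    return 8.0 * np.pi * eps**2 * a**3 / (1.0 + (a * m)**2)**2

for mi in np.logspace(-2, 1, 7):       # illustrative wavenumbers around the corner 1/a ~ 0.32 km^-1
    print(f"m = {mi:8.3f} 1/km   P(m) = {P(mi):10.4f} km^3")
```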
Furusawa, Chikara; Horinouchi, Takaaki; Hirasawa, Takashi; Shimizu, Hiroshi
2013-01-01
It is widely acknowledged that in order to establish sustainable societies, production processes should shift from petrochemical-based processes to bioprocesses. Because bioconversion technologies, in which biomass resources are converted to valuable materials, are preferable to processes dependent on fossil resources, the former should be further developed. The following two approaches can be adopted to improve cellular properties and obtain high productivity and production yield of target products: (1) optimization of cellular metabolic pathways involved in various bioprocesses and (2) creation of stress-tolerant cells that can be active even under severe stress conditions in the bioprocesses. Recent progress in omics analyses has facilitated the analysis of microorganisms based on bioinformatics data for molecular breeding and bioprocess development. Systems metabolic engineering is a new area of study, and it has been defined as a methodology in which metabolic engineering and systems biology are integrated to upgrade the designability of industrially useful microorganisms. This chapter discusses multi-omics analyses and rational design methods for molecular breeding. The first is an example of the rational design of metabolic networks for target production by flux balance analysis using genome-scale metabolic models. Recent progress in the development of genome-scale metabolic models and the application of these models to the design of desirable metabolic networks is also described in this example. The second is an example of evolution engineering with omics analyses for the creation of stress-tolerant microorganisms. Long-term culture experiments to obtain the desired phenotypes and omics analyses to identify the phenotypic changes are described here.
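As a minimal illustration of the flux balance analysis mentioned above, the sketch below maximizes a "biomass" flux on a made-up three-reaction toy network subject to the steady-state constraint S·v = 0; genome-scale models work the same way, only with thousands of reactions and dedicated toolboxes rather than a bare linear-programming call.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (hypothetical): metabolites A, B; reactions
#   R1: -> A        (uptake)
#   R2: A -> B
#   R3: B ->        (biomass, objective)
# Stoichiometric matrix S (rows: metabolites, columns: reactions).
S = np.array([[1, -1,  0],
              [0,  1, -1]], dtype=float)

bounds = [(0, 10), (0, 10), (0, None)]   # flux bounds; uptake capped at 10
c = np.array([0, 0, -1.0])               # linprog minimizes, so maximize v3 via -v3

# Flux balance analysis: maximize biomass flux subject to steady state S v = 0.
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)          # expected [10, 10, 10] for this toy network
print("maximal biomass flux:", -res.fun)
```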
Cypko, Mario A; Stoehr, Matthaeus; Kozniewski, Marcin; Druzdzel, Marek J; Dietz, Andreas; Berliner, Leonard; Lemke, Heinz U
2017-11-01
Oncological treatment is becoming increasingly complex, and therefore decision making in multidisciplinary teams is becoming the key activity in clinical pathways. The increased complexity is related to the number and variability of possible treatment decisions that may be relevant to a patient. In this paper, we describe validation of a multidisciplinary cancer treatment decision in the clinical domain of head and neck oncology. Probabilistic graphical models and corresponding inference algorithms, in the form of Bayesian networks (BNs), can support complex decision-making processes by providing mathematically reproducible and transparent advice. The quality of BN-based advice depends on the quality of the model. Therefore, it is vital to validate the model before it is applied in practice. For an example BN subnetwork of laryngeal cancer with 303 variables, we evaluated 66 patient records. To validate the model on this dataset, a validation workflow was applied in combination with quantitative and qualitative analyses. In the subsequent analyses, we observed four sources of imprecise predictions: incorrect data, incomplete patient data, outvoting of relevant observations, and an incorrect model. Finally, the four problems were solved by modifying the data and the model. The presented validation effort is related to the model complexity. For simpler models, the validation workflow is the same, although it may require fewer validation methods. The validation success is related to the model's well-founded knowledge base. The remaining laryngeal cancer model may disclose additional sources of imprecise predictions.
Specialty Payment Model Opportunities and Assessment
Mulcahy, Andrew W.; Chan, Chris; Hirshman, Samuel; Huckfeldt, Peter J.; Kofner, Aaron; Liu, Jodi L.; Lovejoy, Susan L.; Popescu, Ioana; Timbie, Justin W.; Hussey, Peter S.
2015-01-01
Abstract Gastroenterology and cardiology services are common and costly among Medicare beneficiaries. Episode-based payment, which aims to create incentives for high-quality, low-cost care, has been identified as a promising alternative payment model. This article describes research related to the design of episode-based payment models for ambulatory gastroenterology and cardiology services for possible testing by the Center for Medicare and Medicaid Innovation at the Centers for Medicare and Medicaid Services (CMS). The authors analyzed Medicare claims data to describe the frequency and characteristics of gastroenterology and cardiology index procedures, the practices that delivered index procedures, and the patients that received index procedures. The results of these analyses can help inform CMS decisions about the definition of episodes in an episode-based payment model; payment adjustments for service setting, multiple procedures, or other factors; and eligibility for the payment model. PMID:28083363
Requirements for energy based constitutive modeling in tire mechanics
NASA Technical Reports Server (NTRS)
Luchini, John R.; Peters, Jim M.; Mars, Will V.
1995-01-01
The history, requirements, and theoretical basis of a new energy based constitutive model for (rubber) material elasticity, hysteresis, and failure are presented. Energy based elasticity is handled by many constitutive models, both in one dimension and in three dimensions. Conversion of mechanical energy to heat can be modeled with viscoelasticity or as structural hysteresis. We are seeking unification of elasticity, hysteresis, and failure mechanisms such as fatigue and wear. An energy state characterization for failure criteria of (rubber) materials may provide this unification and also help explain the interaction of temperature effects with failure mechanisms which are described as the creation or growth of internal crack surfaces. Improved structural modeling of tires with FEM should result from such a unified constitutive theory. The theory will also guide experimental work and should enable better interpretation of the results of computational stress analyses.
Gradient-based model calibration with proxy-model assistance
NASA Astrophysics Data System (ADS)
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where a lack of integrity in finite-difference derivative calculations would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
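The following sketch illustrates the general idea described above, not the PEST implementation itself: a cheap analytical proxy supplies the finite-difference Jacobian for a Gauss-Newton upgrade, while the "expensive" original model is only run to evaluate residuals and to test whether a proposed upgrade actually improves the fit. Both model functions here are invented stand-ins.

```python
import numpy as np

# Hypothetical "expensive" model and a cheap analytical proxy that links the
# parameters to the observations used in calibration.
def expensive_model(p):
    x = np.linspace(0, 1, 20)
    return p[0] * np.exp(-p[1] * x)                          # stands in for a long-running simulator

def proxy_model(p):
    x = np.linspace(0, 1, 20)
    return p[0] * (1 - p[1] * x + 0.5 * (p[1] * x) ** 2)     # cheap series approximation

obs = expensive_model(np.array([2.0, 1.5]))                  # synthetic observations ("true" p = [2.0, 1.5])
p = np.array([1.0, 0.5])                                     # initial parameter estimate

for it in range(10):
    # Jacobian populated by running the cheap proxy (finite differences).
    J = np.empty((obs.size, p.size))
    base = proxy_model(p)
    for j in range(p.size):
        dp = np.zeros_like(p); dp[j] = 1e-4
        J[:, j] = (proxy_model(p + dp) - base) / 1e-4

    r = obs - expensive_model(p)                             # residuals use the original model
    step = np.linalg.lstsq(J, r, rcond=None)[0]              # Gauss-Newton upgrade direction

    # Test the upgrade with the original (expensive) model; accept only if it improves the fit.
    lam = 1.0
    while lam > 1e-3:
        if np.sum((obs - expensive_model(p + lam * step)) ** 2) < np.sum(r ** 2):
            p = p + lam * step
            break
        lam *= 0.5

print("estimate after proxy-assisted iterations:", p)
```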
Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
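The variance decomposition mentioned above can be sketched with a Saltelli-type estimator of first-order Sobol' indices over a quasi-random (Sobol') sample; the cheap analytic function below stands in for a full ABM run, and the inputs and sample size are illustrative.

```python
import numpy as np
from scipy.stats import qmc

# Stand-in for an ABM outcome: a cheap function of three uncertain inputs in [0, 1)
# (hypothetical; in practice each evaluation would be a full model run).
def model(x):
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

d, n = 3, 2 ** 12
base = qmc.Sobol(d=2 * d, scramble=True, seed=1).random(n)
A, B = base[:, :d], base[:, d:]            # two independent quasi-random input matrices

fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

# First-order Sobol indices S_i = V[E(Y|X_i)] / V(Y), Saltelli-type estimator.
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                    # replace column i of A with that of B
    S_i = np.mean(fB * (model(ABi) - fA)) / var
    print(f"input {i + 1}: first-order index ~ {S_i:.2f}")
```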
Ligmann-Zielinska, Arika; Kramer, Daniel B.; Spence Cheruvelil, Kendra; Soranno, Patricia A.
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764
Analysis of Solar Potential of Roofs Based on Digital Terrain Model
NASA Astrophysics Data System (ADS)
Gorički, M.; Poslončec-Petrić, V.; Frangeš, S.; Bačić, Ž.
2017-09-01
One of the basic goals of the smart city concept is to create a high-quality environment that is sustainable in the long term and economically justifiable. The priority and concrete goal today is to promote and provide sustainable sources of energy (SSE). Croatia is rich in solar energy and, as one of the sunniest European countries, it has a huge, insufficiently used solar potential at its disposal. The paper describes the procedure of analysing the solar potential of the pilot area Sveti Križ Začretje by means of a digital surface model (DSM) and the data available from the Meteorological and Hydrological Service of the Republic of Croatia. Although a more detailed analysis would require some additional factors, it is clear that the installation of 19.6 m² of solar panels in each household could cover the annual requirements of that household in the analysed area, the locality Sveti Križ Začretje.
Fatigue reliability of deck structures subjected to correlated crack growth
NASA Astrophysics Data System (ADS)
Feng, G. Q.; Garbatov, Y.; Guedes Soares, C.
2013-12-01
The objective of this work is to analyse fatigue reliability of deck structures subjected to correlated crack growth. The stress intensity factors of the correlated cracks are obtained by finite element analysis, based on which the geometry correction functions are derived. Monte Carlo simulations are applied to predict the statistical descriptors of correlated cracks based on the Paris-Erdogan equation. A probabilistic model of crack growth as a function of time is used to analyse the fatigue reliability of deck structures accounting for the crack propagation correlation. A deck structure is modelled as a series system of stiffened panels, where a stiffened panel is regarded as a parallel system composed of plates and longitudinal stiffeners. It has been proven that the method developed here can be conveniently applied to perform the fatigue reliability assessment of structures subjected to correlated crack growth.
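A stripped-down version of the Monte Carlo step described above is sketched below: two cracks grow according to the Paris-Erdogan law and are correlated because they share a randomly drawn material coefficient, and the structure is treated as a series system that fails when either crack reaches a critical size. All material constants, load values and distributions are illustrative assumptions, not values from the paper, and the finite-element-derived geometry correction is replaced by a constant factor.

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo fatigue sketch based on the Paris-Erdogan law  da/dN = C * (dK)^m,
# with dK = Y * dS * sqrt(pi * a).  All numbers below are illustrative assumptions.
m_paris, dS, Y = 3.0, 150.0, 1.1       # Paris exponent, stress range [MPa], constant geometry factor
a0, a_crit = 1e-3, 20e-3               # initial and critical crack sizes [m]
cycles, step = 200_000, 1_000          # design life and coarse integration step [cycles]

n_sim, failures = 2000, 0
for _ in range(n_sim):
    # Two cracks share the same (random) C, which makes their growth correlated.
    C = np.exp(rng.normal(np.log(1e-11), 0.3))
    a = np.array([a0, a0]) * np.exp(rng.normal(0.0, 0.1, size=2))
    for N in range(0, cycles, step):
        dK = Y * dS * np.sqrt(np.pi * a)
        a = a + C * dK ** m_paris * step      # explicit integration of the Paris law
        if np.any(a >= a_crit):               # series system: first crack to go critical fails it
            failures += 1
            break

print("estimated probability of fatigue failure within the design life:", failures / n_sim)
```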
Salomon, Joshua A
2003-01-01
Background: In survey studies on health-state valuations, ordinal ranking exercises often are used as precursors to other elicitation methods such as the time trade-off (TTO) or standard gamble, but the ranking data have not been used in deriving cardinal valuations. This study reconsiders the role of ordinal ranks in valuing health and introduces a new approach to estimate interval-scaled valuations based on aggregate ranking data. Methods: Analyses were undertaken on data from a previously published general population survey study in the United Kingdom that included rankings and TTO values for hypothetical states described using the EQ-5D classification system. The EQ-5D includes five domains (mobility, self-care, usual activities, pain/discomfort and anxiety/depression) with three possible levels on each. Rank data were analysed using a random utility model, operationalized through conditional logit regression. In the statistical model, probabilities of observed rankings were related to the latent utilities of different health states, modeled as a linear function of EQ-5D domain scores, as in previously reported EQ-5D valuation functions. Predicted valuations based on the conditional logit model were compared to observed TTO values for the 42 states in the study and to predictions based on a model estimated directly from the TTO values. Models were evaluated using the intraclass correlation coefficient (ICC) between predictions and mean observations, and the root mean squared error of predictions at the individual level. Results: Agreement between predicted valuations from the rank model and observed TTO values was very high, with an ICC of 0.97, only marginally lower than for predictions based on the model estimated directly from TTO values (ICC = 0.99). Individual-level errors were also comparable in the two models, with root mean squared errors of 0.503 and 0.496 for the rank-based and TTO-based predictions, respectively. Conclusions: Modeling health-state valuations based on ordinal ranks can provide results that are similar to those obtained from more widely analyzed valuation techniques such as the TTO. The information content in aggregate ranking data is not currently exploited to full advantage. The possibility of estimating cardinal valuations from ordinal ranks could also simplify future data collection dramatically and facilitate wider empirical study of health-state valuations in diverse settings and population groups. PMID:14687419
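The rank-based estimation described above can be illustrated with an exploded (rank-ordered) logit, a conditional-logit formulation of the random utility model: the probability of an observed ranking is a product of conditional logit choices among the states not yet ranked, with latent utilities linear in domain scores. The states, rankings and scores below are invented toy data, and a small ridge penalty is added only to keep the toy likelihood well behaved.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical health states described by three domain scores (0 = best level).
X = np.array([[0., 0., 0.],     # state A
              [1., 0., 1.],     # state B
              [1., 1., 1.],     # state C
              [2., 2., 2.]])    # state D

# Observed rankings (best to worst) as row indices of X, e.g. A > B > C > D.
rankings = [np.array([0, 1, 2, 3]),
            np.array([0, 2, 1, 3]),
            np.array([0, 1, 3, 2])]

def neg_log_likelihood(beta):
    u = X @ beta                                    # latent utilities, linear in domain scores
    ll = 0.0
    for r in rankings:
        for j in range(len(r) - 1):                 # exploded logit: choose among states not yet ranked
            remaining = r[j:]
            ll += u[r[j]] - np.log(np.sum(np.exp(u[remaining])))
    return -ll + 0.01 * np.sum(beta ** 2)           # tiny ridge term stabilises the toy example

res = minimize(neg_log_likelihood, x0=np.zeros(X.shape[1]), method="BFGS")
print("estimated domain weights (more negative = larger utility loss):", res.x.round(2))
```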
Mulcahy, Andrew W; Chan, Chris; Hirshman, Samuel; Huckfeldt, Peter J; Kofner, Aaron; Liu, Jodi L; Lovejoy, Susan L; Popescu, Ioana; Timbie, Justin W; Hussey, Peter S
2015-07-15
Gastroenterology and cardiology services are common and costly among Medicare beneficiaries. Episode-based payment, which aims to create incentives for high-quality, low-cost care, has been identified as a promising alternative payment model. This article describes research related to the design of episode-based payment models for ambulatory gastroenterology and cardiology services for possible testing by the Center for Medicare and Medicaid Innovation at the Centers for Medicare and Medicaid Services (CMS). The authors analyzed Medicare claims data to describe the frequency and characteristics of gastroenterology and cardiology index procedures, the practices that delivered index procedures, and the patients that received index procedures. The results of these analyses can help inform CMS decisions about the definition of episodes in an episode-based payment model; payment adjustments for service setting, multiple procedures, or other factors; and eligibility for the payment model.
Proposals for enhanced health risk assessment and stratification in an integrated care scenario
Dueñas-Espín, Ivan; Vela, Emili; Pauws, Steffen; Bescos, Cristina; Cano, Isaac; Cleries, Montserrat; Contel, Joan Carles; de Manuel Keenoy, Esteban; Garcia-Aymerich, Judith; Gomez-Cabrero, David; Kaye, Rachelle; Lahr, Maarten M H; Lluch-Ariet, Magí; Moharra, Montserrat; Monterde, David; Mora, Joana; Nalin, Marco; Pavlickova, Andrea; Piera, Jordi; Ponce, Sara; Santaeugenia, Sebastià; Schonenberg, Helen; Störk, Stefan; Tegner, Jesper; Velickovski, Filip; Westerteicher, Christoph; Roca, Josep
2016-01-01
Objectives: Population-based health risk assessment and stratification are considered highly relevant for large-scale implementation of integrated care by facilitating services design and case identification. The principal objective of the study was to analyse five health-risk assessment strategies and health indicators used in the five regions participating in the Advancing Care Coordination and Telehealth Deployment (ACT) programme (http://www.act-programme.eu). The second purpose was to elaborate on strategies toward enhanced health risk predictive modelling in the clinical scenario. Settings: The five ACT regions: Scotland (UK), Basque Country (ES), Catalonia (ES), Lombardy (I) and Groningen (NL). Participants: Responsible teams for regional data management in the five ACT regions. Primary and secondary outcome measures: We characterised and compared risk assessment strategies among ACT regions by analysing operational health risk predictive modelling tools for population-based stratification, as well as available health indicators at regional level. The analysis of the risk assessment tool deployed in Catalonia in 2015 (GMAs, Adjusted Morbidity Groups) was used as a basis to propose how population-based analytics could contribute to clinical risk prediction. Results: There was consensus on the need for a population health approach to generate health risk predictive modelling. However, this strategy was fully in place only in two ACT regions: Basque Country and Catalonia. We found marked differences among regions in health risk predictive modelling tools and health indicators, and identified key factors constraining their comparability. The research proposes means to overcome current limitations and the use of population-based health risk prediction for enhanced clinical risk assessment. Conclusions: The results indicate the need for further efforts to improve both comparability and flexibility of current population-based health risk predictive modelling approaches. Applicability and impact of the proposals for enhanced clinical risk assessment require prospective evaluation. PMID:27084274
Naujokaitis-Lewis, Ilona; Curtis, Janelle M R
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along with demographic parameters in sensitivity routines. GRIP 2.0 is an important decision-support tool that can be used to prioritize research, identify habitat-based thresholds and management intervention points to improve probability of species persistence, and evaluate trade-offs of alternative management options.
Curtis, Janelle M.R.
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along with demographic parameters in sensitivity routines. GRIP 2.0 is an important decision-support tool that can be used to prioritize research, identify habitat-based thresholds and management intervention points to improve probability of species persistence, and evaluate trade-offs of alternative management options. PMID:27547529
Finite-element modeling of the human neurocranium under functional anatomical aspects.
Mall, G; Hubig, M; Koebke, J; Steinbuch, R
1997-08-01
Due to its functional significance the human skull plays an important role in biomechanical research. The present work describes a new Finite-Element model of the human neurocranium. The dry skull of a middle-aged woman served as a pattern. The model was developed using only the preprocessor (Mentat) of a commercial FE-system (Marc). Unlike that of other FE models of the human skull mentioned in the literature, the geometry in this model was designed according to functional anatomical findings. Functionally important morphological structures representing loci minoris resistentiae, especially the foramina and fissures of the skull base, were included in the model. The results of two linear static loadcase analyses in the region of the skull base underline the importance of modeling from the functional anatomical point of view.
Review of amateur meteor research
NASA Astrophysics Data System (ADS)
Rendtel, Jürgen
2017-09-01
Significant amounts of meteor astronomical data are provided by amateurs worldwide, using various methods. This review concentrates on optical data. Long-term meteor shower analyses based on consistent data are possible over decades (Orionids, Geminids, κ-Cygnids) and allow combination with modelling results. Small and weak structures related to individual stream filaments of cometary dust have been analysed in both major and minor showers (Quadrantids, September ε-Perseids), providing feedback to meteoroid ejection and stream evolution processes. Meteoroid orbit determination from video meteor networks contributes to the improvement of the IAU meteor data base. Professional-amateur cooperation also concerns observations and detailed analysis of fireball data, including meteorite ground searches.
NASA Astrophysics Data System (ADS)
Nardo, A.; Li, B.; Teunissen, P. J. G.
2016-01-01
Integer Ambiguity Resolution (IAR) is the key to fast and precise GNSS positioning. The proper diagnostic metric for successful IAR is provided by the ambiguity success rate being the probability of correct integer estimation. In this contribution we analyse the performance of different GPS+Galileo models in terms of number of epochs needed to reach a pre-determined success rate, for various ground and space-based applications. The simulation-based controlled model environment enables us to gain insight into the factors contributing to the ambiguity resolution strength of the different GPS+Galileo models. Different scenarios of modernized GPS+Galileo are studied, encompassing the long baseline ground case as well as the medium dynamics case (airplane) and the space-based Low Earth Orbiter (LEO) case. In our analyses of these models the capabilities of partial ambiguity resolution (PAR) are demonstrated and compared to the limitations of full ambiguity resolution (FAR). The results show that PAR is generally a more efficient way than FAR to reduce the time needed to achieve centimetre-level positioning precision. For long single baselines, PAR can achieve time reductions of fifty percent to achieve such precision levels, while for multiple baselines it even becomes more effective, reaching reductions up to eighty percent for four station networks. For a LEO, the rapidly changing observation geometry does not even allow FAR, while PAR is then still possible for both dual- and triple-frequency scenarios. With the triple-frequency GPS+Galileo model the availability of precise positioning improves by fifteen percent with respect to the dual-frequency scenario.
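A common diagnostic in this setting is the integer-bootstrapping success rate, which can be computed from the conditional standard deviations of the ambiguities, i.e. the diagonal of the lower Cholesky factor of the ambiguity variance-covariance matrix. The sketch below uses made-up matrices for a "weak" and a "strong" model to show how the success rate responds to model strength; in practice the bound is evaluated after decorrelating the ambiguities, a step omitted here.

```python
import numpy as np
from scipy.stats import norm

def bootstrapped_success_rate(Q):
    """P_s = prod_i (2*Phi(1/(2*sigma_i|I)) - 1), with the conditional standard deviations
    taken from the diagonal of the lower Cholesky factor of the ambiguity vc-matrix Q."""
    L = np.linalg.cholesky(Q)          # Q = L L^T, L lower triangular
    sigma_cond = np.diag(L)            # std of ambiguity i conditioned on ambiguities 1..i-1
    return np.prod(2.0 * norm.cdf(1.0 / (2.0 * sigma_cond)) - 1.0)

# Illustrative (made-up) ambiguity variance-covariance matrices, in cycles^2.
Q_weak = np.array([[0.40, 0.10, 0.05],
                   [0.10, 0.35, 0.08],
                   [0.05, 0.08, 0.30]])
Q_strong = 0.01 * Q_weak               # e.g. after more epochs / more frequencies

print("success rate, weak model  :", round(bootstrapped_success_rate(Q_weak), 4))
print("success rate, strong model:", round(bootstrapped_success_rate(Q_strong), 4))
```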
NASA Astrophysics Data System (ADS)
Kolkman, M. J.; Kok, M.; van der Veen, A.
The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties on a fundamental cognitive level, which can reveal experiences, perceptions, assumptions, knowledge and subjective beliefs of stakeholders, experts and other actors, and can stimulate communication and learning. This article presents the theoretical framework from which the use of mental model mapping techniques to analyse this type of problem emerges as a promising approach. The framework consists of the problem solving or policy design cycle, the knowledge production or modelling cycle, and the (computer) model as interface between the cycles. Literature attributes difficulties in the decision-making process to communication gaps between decision makers, stakeholders and scientists, and to the construction of knowledge within different paradigm groups that leads to different interpretation of the problem situation. Analysis of the decision-making process literature indicates that choices, which are made in all steps of the problem solving cycle, are based on an individual decision maker’s frame of perception. This frame, in turn, depends on the mental model residing in the mind of the individual. Thus we identify three levels of awareness on which the decision process can be analysed. This research focuses on the third level. Mental models can be elicited using mapping techniques. In this way, analysing an individual’s mental model can shed light on decision-making problems. The steps of the knowledge production cycle are, in the same manner, ultimately driven by the mental models of the scientist in a specific discipline. Remnants of this mental model can be found in the resulting computer model. The characteristics of unstructured problems (complexity, uncertainty and disagreement) can be positioned in the framework, as can the communities of knowledge construction and valuation involved in the solution of these problems (core science, applied science, and professional consultancy, and “post-normal” science). Mental model maps, this research hypothesises, are suitable to analyse the above aspects of the problem. This hypothesis is tested for the case of the Zwolle storm surge barrier. Analysis can aid integration between disciplines, participation of public stakeholders, and can stimulate learning processes. Mental model mapping is recommended to visualise the use of knowledge, to analyse difficulties in the problem solving process, and to aid information transfer and communication. Mental model mapping helps scientists to shape their new, post-normal responsibilities in a manner that complies with integrity when dealing with unstructured problems in complex, multifunctional systems.
Psychopathic Traits in Youth: Is There Evidence for Primary and Secondary Subtypes?
ERIC Educational Resources Information Center
Lee, Zina; Salekin, Randall T.; Iselin, Anne-Marie R.
2010-01-01
The current study employed model-based cluster analysis in a sample of male adolescent offenders (n = 94) to examine subtypes based on psychopathic traits and anxiety. Using the Psychopathy Checklist: Youth Version (PCL:YV; Forth et al. 2003) and the self-report Antisocial Process Screening Device (APSD; Caputo et al. 1999), analyses identified…
ERIC Educational Resources Information Center
Kollias, V.; Mamalougos, N.; Vamvakoussi, X.; Lakkala, M.; Vosniadou, S.
2005-01-01
Fifty-six teachers, from four European countries, were interviewed to ascertain their attitudes to and beliefs about the Collaborative Learning Environments (CLEs) which were designed under the Innovative Technologies for Collaborative Learning Project. Their responses were analysed using categories based on a model from cultural-historical…
ERIC Educational Resources Information Center
Kunnavatana, S. Shanun; Bloom, Sarah E.; Samaha, Andrew L.; Lignugaris/Kraft, Benjamin; Dayton, Elizabeth; Harris, Shannon K.
2013-01-01
Functional behavioral assessments are commonly used in school settings to assess and develop interventions for problem behavior. The trial-based functional analysis is an approach that teachers can use in their classrooms to identify the function of problem behavior. The current study evaluates the effectiveness of a modified pyramidal training…
van der Heijden, A A W A; Feenstra, T L; Hoogenveen, R T; Niessen, L W; de Bruijne, M C; Dekker, J M; Baan, C A; Nijpels, G
2015-12-01
To test a simulation model, the MICADO model, for estimating the long-term effects of interventions in people with and without diabetes. The MICADO model includes micro- and macrovascular diseases in relation to their risk factors. The strengths of this model are its population scope and the possibility to assess parameter uncertainty using probabilistic sensitivity analyses. Outcomes include incidence and prevalence of complications, quality of life, costs and cost-effectiveness. We externally validated MICADO's estimates of micro- and macrovascular complications in a Dutch cohort with diabetes (n = 498,400) by comparing these estimates with national and international empirical data. For the annual number of people undergoing amputations, MICADO's estimate was 592 (95% interquantile range 291-842), which compared well with the registered number of people with diabetes-related amputations in the Netherlands (728). The incidence of end-stage renal disease estimated using the MICADO model was 247 people (95% interquantile range 120-363), which was also similar to the registered incidence in the Netherlands (277 people). MICADO performed well in the validation of macrovascular outcomes of population-based cohorts, while it had more difficulty in reflecting a highly selected trial population. Validation by comparison with independent empirical data showed that the MICADO model simulates the natural course of diabetes and its micro- and macrovascular complications well. As a population-based model, MICADO can be applied for projections as well as scenario analyses to evaluate the long-term (cost-)effectiveness of population-level interventions targeting diabetes and its complications in the Netherlands or similar countries. © 2015 The Authors. Diabetic Medicine © 2015 Diabetes UK.
Probabilistic flood damage modelling at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2014-05-01
Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models including stage-damage functions as well as multi-variate models. On the other hand, the results are compared with official damage data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of damage estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation model BT-FLEMO is that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.
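The bagging-decision-tree idea behind BT-FLEMO can be sketched with scikit-learn: an ensemble of regression trees is fitted to (synthetic, invented) damage-influencing variables, and the spread of the per-tree predictions for a land-use unit serves as a simple expression of predictive uncertainty. This is an illustration of the technique, not the calibrated BT-FLEMO model or its predictor set.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)

# Synthetic training data (hypothetical): water depth [m], inundation duration [h],
# building value [k EUR] -> relative loss in [0, 1].
n = 800
depth, duration, value = rng.uniform(0, 3, n), rng.uniform(1, 96, n), rng.uniform(50, 500, n)
rel_loss = np.clip(0.12 * depth + 0.001 * duration + rng.normal(0, 0.05, n), 0, 1)
X, y = np.column_stack([depth, duration, value]), rel_loss

# Bagging of regression trees: each tree is trained on a bootstrap sample.
model = BaggingRegressor(DecisionTreeRegressor(max_depth=6), n_estimators=100,
                         random_state=0).fit(X, y)

# For a new land-use unit, the per-tree predictions give a distribution of the
# estimated damage rather than a single deterministic value.
unit = np.array([[1.8, 48.0, 250.0]])
per_tree = np.array([tree.predict(unit)[0] for tree in model.estimators_])
print("mean relative loss :", round(per_tree.mean(), 3))
print("5th-95th percentile:", np.round(np.percentile(per_tree, [5, 95]), 3))
```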
Modeling trends from North American Breeding Bird Survey data: a spatially explicit approach
Bled, Florent; Sauer, John R.; Pardieck, Keith L.; Doherty, Paul; Royle, J. Andy
2013-01-01
Population trends, defined as interval-specific proportional changes in population size, are often used to help identify species of conservation interest. Efficient modeling of such trends depends on the consideration of the correlation of population changes with key spatial and environmental covariates. This can provide insights into causal mechanisms and allow spatially explicit summaries at scales that are of interest to management agencies. We expand the hierarchical modeling framework used in the North American Breeding Bird Survey (BBS) by developing a spatially explicit model of temporal trend using a conditional autoregressive (CAR) model. By adopting a formal spatial model for abundance, we produce spatially explicit abundance and trend estimates. Analyses based on large-scale geographic strata such as Bird Conservation Regions (BCR) can suffer from basic imbalances in spatial sampling. Our approach addresses this issue by providing an explicit weighting based on the fundamental sample allocation unit of the BBS. We applied the spatial model to three species from the BBS. Species have been chosen based upon their well-known population change patterns, which allows us to evaluate the quality of our model and the biological meaning of our estimates. We also compare our results with the ones obtained for BCRs using a nonspatial hierarchical model (Sauer and Link 2011). Globally, estimates for mean trends are consistent between the two approaches but spatial estimates provide much more precise trend estimates in regions on the edges of species ranges that were poorly estimated in non-spatial analyses. Incorporating a spatial component in the analysis not only allows us to obtain relevant and biologically meaningful estimates for population trends, but also enables us to provide a flexible framework in order to obtain trend estimates for any area.
Wang, Hui; Jiang, Mingyue; Li, Shujun; Hse, Chung-Yun; Jin, Chunde; Sun, Fangli; Li, Zhuo
2017-09-01
Cinnamaldehyde amino acid Schiff base (CAAS) is a new class of safe, bioactive compounds which could be developed as potential antifungal agents for fungal infections. To design new cinnamaldehyde amino acid Schiff base compounds with high bioactivity, the quantitative structure-activity relationships (QSARs) for CAAS compounds against Aspergillus niger (A. niger) and Penicillium citrinum (P. citrinum) were analysed. The QSAR models (R2 = 0.9346 for A. niger, R2 = 0.9590 for P. citrinum) were constructed and validated. The models indicated that the molecular polarity and the Max atomic orbital electronic population had a significant effect on antifungal activity. Based on the best QSAR models, two new compounds were designed and synthesized. Antifungal activity tests proved that both of them have great bioactivity against the selected fungi.
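A minimal QSAR workflow of the kind described above might look as follows: an ordinary least-squares model relating (hypothetical) values of the two influential descriptors to (hypothetical) activities, with the fitted R² complemented by a leave-one-out q² before the model is used to screen a new candidate. None of the numbers below are taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical descriptor table for a small CAAS series: [molecular polarity,
# max atomic orbital electronic population]; the activity values are made up.
X = np.array([[3.1, 1.92], [2.7, 1.88], [3.6, 1.97], [2.2, 1.85],
              [3.9, 2.01], [2.9, 1.90], [3.3, 1.95], [2.5, 1.87]])
y = np.array([4.10, 3.85, 4.35, 3.60, 4.55, 3.95, 4.20, 3.75])

qsar = LinearRegression().fit(X, y)
print("coefficients:", qsar.coef_.round(3), " intercept:", round(qsar.intercept_, 3))
print("R^2 (fit):", round(qsar.score(X, y), 4))

# Leave-one-out cross-validation gives a less optimistic picture of predictivity (q^2).
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
print("q^2 (LOO):", round(q2, 4))

# Screening a new (hypothetical) candidate before synthesis:
print("predicted activity of candidate:", round(qsar.predict([[3.4, 1.96]])[0], 2))
```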
Wang, Hui; Jiang, Mingyue; Hse, Chung-Yun; Jin, Chunde; Sun, Fangli; Li, Zhuo
2017-01-01
Cinnamaldehyde amino acid Schiff base (CAAS) is a new class of safe, bioactive compounds which could be developed as potential antifungal agents for fungal infections. To design new cinnamaldehyde amino acid Schiff base compounds with high bioactivity, the quantitative structure–activity relationships (QSARs) for CAAS compounds against Aspergillus niger (A. niger) and Penicillium citrinum (P. citrinum) were analysed. The QSAR models (R2 = 0.9346 for A. niger, R2 = 0.9590 for P. citrinum) were constructed and validated. The models indicated that the molecular polarity and the Max atomic orbital electronic population had a significant effect on antifungal activity. Based on the best QSAR models, two new compounds were designed and synthesized. Antifungal activity tests proved that both of them have great bioactivity against the selected fungi. PMID:28989758
Gouda, Hebe N; Critchley, Julia; Powles, John; Capewell, Simon
2012-01-28
Reasons for the widespread declines in coronary heart disease (CHD) mortality in high income countries are controversial. Here we explore how the type of metric chosen for the analyses of these declines affects the answer obtained. The analyses we reviewed were performed using IMPACT, a large Excel-based model of the determinants of temporal change in mortality from CHD. Assessments of the decline in CHD mortality in the USA between 1980 and 2000 served as the central case study. Analyses based on the metric of the number of deaths prevented attributed about half the decline to treatments (including preventive medications) and half to favourable shifts in risk factors. However, when mortality change was expressed in the metric of life-years-gained, the share attributed to risk factor change rose to 65%. This happened because risk factor changes were modelled as slowing disease progression, such that the hypothetical deaths averted resulted in longer average remaining lifetimes gained than the deaths averted by better treatments. This result was robust to a range of plausible assumptions on the relative effect sizes of changes in treatments and risk factors. Time-based metrics (such as life years) are generally preferable because they direct attention to the changes in the natural history of disease that are produced by changes in key health determinants. The life-years attached to each death averted will also weight deaths in a way that better reflects social preferences.
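The effect of the choice of metric can be reproduced with toy arithmetic: the same 50/50 split of deaths averted shifts toward risk-factor change once each averted death is weighted by average remaining life-years. The life-year weights below are assumptions chosen only to reproduce the qualitative pattern, not the IMPACT inputs.

```python
# Toy illustration (made-up numbers): the same deaths-averted split yields a different
# attribution once each averted death is weighted by average remaining life-years.
deaths_averted = {"treatments": 50_000, "risk_factor_change": 50_000}

# Assumption: risk-factor changes slow disease progression, so the deaths they avert
# occur in people with longer average remaining lifetimes than deaths averted by treatment.
years_per_death = {"treatments": 8.0, "risk_factor_change": 15.0}

total_deaths = sum(deaths_averted.values())
life_years = {k: deaths_averted[k] * years_per_death[k] for k in deaths_averted}
total_ly = sum(life_years.values())

for k in deaths_averted:
    print(f"{k:20s} share by deaths averted: {deaths_averted[k] / total_deaths:.0%}"
          f"   share by life-years gained: {life_years[k] / total_ly:.0%}")
```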
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
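In the spirit of the multivariate approach described above (though not the authors' exact method), the sketch below samples the parameters of a tiny, invented two-state MDP, re-solves it by value iteration for each draw, and reports how often the base-case optimal action remains optimal, a simple summary of confidence in the recommended policy.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny MDP (hypothetical): states {0: moderate, 1: severe}, actions {0: watchful waiting, 1: treat}.
def solve_mdp(p_prog_wait, p_prog_treat, treat_penalty, gamma=0.97, iters=200):
    """Value iteration; returns the optimal action in the moderate state."""
    # P[a, s, s'] transition probabilities; R[s, a] one-period rewards (QALY-like).
    P = np.array([[[1 - p_prog_wait,  p_prog_wait], [0.0, 1.0]],
                  [[1 - p_prog_treat, p_prog_treat], [0.0, 1.0]]])
    R = np.array([[1.0, 1.0 - treat_penalty],   # moderate state: waiting vs treating
                  [0.4, 0.4]])                  # severe (absorbing) state: same either way
    V = np.zeros(2)
    for _ in range(iters):                      # enough iterations for a stable policy here
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V = Q.max(axis=1)
    return int(np.argmax(Q[0]))

base_policy = solve_mdp(0.10, 0.04, 0.05)

# Probabilistic (multivariate) sensitivity analysis: sample parameters jointly and count
# how often the base-case recommendation in the moderate state remains optimal.
n, agree = 1000, 0
for _ in range(n):
    policy = solve_mdp(rng.beta(10, 90),        # progression risk without treatment
                       rng.beta(4, 96),         # progression risk with treatment
                       rng.normal(0.05, 0.02))  # per-period disutility of treatment
    agree += int(policy == base_policy)

print("base-case optimal action in the moderate state:", base_policy)
print("confidence in that recommendation:", round(agree / n, 3))
```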
Kim, Hyun Jung; Griffiths, Mansel W; Fazil, Aamir M; Lammerding, Anna M
2009-09-01
Foodborne illness contracted at food service operations is an important public health issue in Korea. In this study, the probabilities for growth of, and enterotoxin production by, Staphylococcus aureus in pork meat-based foods prepared in food service operations were estimated by Monte Carlo simulation. Data on the prevalence and concentration of S. aureus as well as compliance with guidelines for time and temperature controls during food service operations were collected. The growth of S. aureus was initially estimated by using the U.S. Department of Agriculture's Pathogen Modeling Program. A second model based on raw pork meat was derived to compare cell number predictions. The correlation between toxin level and cell number as well as minimum toxin dose obtained from published data was adopted to quantify the probability of staphylococcal intoxication. When data gaps were found, assumptions were made based on guidelines for food service practices. Baseline risk model and scenario analyses were performed to indicate possible outcomes of staphylococcal intoxication under the scenarios generated based on these data gaps. Staphylococcal growth was predicted during holding before and after cooking, and the highest estimated concentration (4.59 log CFU/g for the 99.9th percentile value) of S. aureus was observed in raw pork initially contaminated with S. aureus and held before cooking. The estimated probability for staphylococcal intoxication was very low, using currently available data. However, scenario analyses revealed an increased possibility of staphylococcal intoxication when increased levels of initial contamination in the raw meat and longer holding times both before and after cooking occurred.
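The structure of such a Monte Carlo assessment can be sketched as follows: sample an initial contamination, add growth during holding before cooking, apply a thermal reduction, add growth during holding after cooking, and read off upper percentiles and the probability of exceeding a toxin-associated threshold. Every distribution, rate and threshold below is an illustrative assumption, not a value from the study or from the Pathogen Modeling Program.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Illustrative inputs (log10 CFU/g and log10 CFU/g per hour); not the study's values.
initial = rng.normal(1.0, 0.6, n)                       # initial contamination of raw pork
hold_before = rng.triangular(0.5, 1.5, 4.0, n)          # hours held before cooking
hold_after = rng.triangular(0.5, 2.0, 6.0, n)           # hours held after cooking
growth_rate = rng.uniform(0.2, 0.5, n)                  # growth during temperature abuse
cook_reduction = rng.uniform(3.0, 5.0, n)               # log reduction achieved by cooking

before_cook = initial + growth_rate * hold_before
after_cook = np.maximum(before_cook - cook_reduction, -1.0) + growth_rate * hold_after

toxin_threshold = 5.0   # log10 CFU/g often associated with enterotoxin production (assumption)
print("99.9th percentile before cooking [log CFU/g]:", round(np.percentile(before_cook, 99.9), 2))
print("99.9th percentile after holding  [log CFU/g]:", round(np.percentile(after_cook, 99.9), 2))
print("P(count exceeds toxin threshold):", (after_cook > toxin_threshold).mean())
```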
From brain topography to brain topology: relevance of graph theory to functional neuroscience.
Minati, Ludovico; Varotto, Giulia; D'Incerti, Ludovico; Panzica, Ferruccio; Chan, Dennis
2013-07-10
Although several brain regions show significant specialization, higher functions such as cross-modal information integration, abstract reasoning and conscious awareness are viewed as emerging from interactions across distributed functional networks. Analytical approaches capable of capturing the properties of such networks can therefore enhance our ability to make inferences from functional MRI, electroencephalography and magnetoencephalography data. Graph theory is a branch of mathematics that focuses on the formal modelling of networks and offers a wide range of theoretical tools to quantify specific features of network architecture (topology) that can provide information complementing the anatomical localization of areas responding to given stimuli or tasks (topography). Explicit modelling of the architecture of axonal connections and interactions among areas can furthermore reveal peculiar topological properties that are conserved across diverse biological networks, and highly sensitive to disease states. The field is evolving rapidly, partly fuelled by computational developments that enable the study of connectivity at fine anatomical detail and the simultaneous interactions among multiple regions. Recent publications in this area have shown that graph-based modelling can enhance our ability to draw causal inferences from functional MRI experiments, and support the early detection of disconnection and the modelling of pathology spread in neurodegenerative disease, particularly Alzheimer's disease. Furthermore, neurophysiological studies have shown that network topology has a profound link to epileptogenesis and that connectivity indices derived from graph models aid in modelling the onset and spread of seizures. Graph-based analyses may therefore significantly help understand the bases of a range of neurological conditions. This review is designed to provide an overview of graph-based analyses of brain connectivity and their relevance to disease aimed principally at general neuroscientists and clinicians.
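As a brief illustration of the topological indices discussed in this review, the sketch below thresholds a (here random) connectivity matrix into a graph and computes standard measures with networkx; the matrix and the 0.7 cutoff are arbitrary stand-ins for real fMRI/EEG/MEG connectivity estimates.

```python
# Derive a binary graph from a functional connectivity matrix and compute
# common topological indices (connectivity matrix is random here).
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n_regions = 30
C = rng.uniform(0, 1, size=(n_regions, n_regions))
C = (C + C.T) / 2                      # symmetrise
np.fill_diagonal(C, 0)

A = (C > 0.7).astype(int)              # threshold to a binary adjacency matrix (arbitrary cutoff)
G = nx.from_numpy_array(A)

print("density:", nx.density(G))
print("mean clustering coefficient:", nx.average_clustering(G))
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
print("top 5 hubs by degree:",
      sorted(dict(G.degree()).items(), key=lambda kv: -kv[1])[:5])
```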
NASA Technical Reports Server (NTRS)
Tischer, A. E.
1987-01-01
The failure information propagation model (FIPM) data base was developed to store and manipulate the large amount of information anticipated for the various Space Shuttle Main Engine (SSME) FIPMs. The organization and structure of the FIPM data base is described, including a summary of the data fields and key attributes associated with each FIPM data file. The menu-driven software developed to facilitate and control the entry, modification, and listing of data base records is also discussed. The transfer of the FIPM data base and software to the NASA Marshall Space Flight Center is described. Complete listings of all of the data base definition commands and software procedures are included in the appendixes.
Torruella, Guifré; Derelle, Romain; Paps, Jordi; Lang, B. Franz; Roger, Andrew J.; Shalchian-Tabrizi, Kamran; Ruiz-Trillo, Iñaki
2012-01-01
Many of the eukaryotic phylogenomic analyses published to date were based on alignments of hundreds to thousands of genes. In such analyses, the most realistic evolutionary models currently available are often used to minimize the impact of systematic error. However, controversy remains over whether or not idiosyncratic gene family dynamics (i.e., gene duplications and losses) and incorrect orthology assignments are always appropriately taken into account. In this paper, we present an innovative strategy for overcoming orthology assignment problems. Rather than identifying and eliminating genes with paralogy problems, we have constructed a data set composed exclusively of conserved single-copy protein domains that, unlike most of the commonly used phylogenomic data sets, should be less confounded by orthology misassignments. To evaluate the power of this approach, we performed maximum likelihood and Bayesian analyses to infer the evolutionary relationships within the opisthokonts (which includes Metazoa, Fungi, and related unicellular lineages). We used this approach to test 1) whether Filasterea and Ichthyosporea form a clade, 2) the interrelationships of early-branching metazoans, and 3) the relationships among early-branching fungi. We also assessed the impact of some methods that are known to minimize systematic error, including reducing the distance between the outgroup and ingroup taxa or using the CAT evolutionary model. Overall, our analyses support the Filozoa hypothesis in which Ichthyosporea are the first holozoan lineage to emerge followed by Filasterea, Choanoflagellata, and Metazoa. Blastocladiomycota appears as a lineage separate from Chytridiomycota, although this result is not strongly supported. These results represent independent tests of previous phylogenetic hypotheses, highlighting the importance of sophisticated approaches for orthology assignment in phylogenomic analyses. PMID:21771718
Transactions in domain-specific information systems
NASA Astrophysics Data System (ADS)
Zacek, Jaroslav
2017-07-01
A substantial number of current information system (IS) implementations are based on a transaction approach. In addition, most implementations are domain-specific (e.g. accounting IS, resource planning IS). Therefore, a generic transaction model is needed to build and verify domain-specific IS. The paper proposes a new transaction model for domain-specific ontologies. This model is based on a value-oriented business process modelling technique and is formalized using Petri Net theory. The first part of the paper presents common business processes and analyses related to business process modelling. The second part defines the transactional model delimited by the REA enterprise ontology paradigm and introduces the states of the generic transaction model. The generic model proposal is defined and visualized with a Petri Net modelling tool. The third part shows an application of the generic transaction model. The last part concludes with the results and discusses the practical usability of the generic transaction model.
Yammine, Kaissar
2015-11-01
The open access model, where researchers can publish their work and make it freely available to the whole medical community, is gaining ground over the traditional type of publication. However, fees are to be paid by either the authors or their institutions. The purpose of this paper is to assess the proportion and type of open access evidence-based articles in the form of systematic reviews and meta-analyses in the field of musculoskeletal disorders and orthopedic surgery. PubMed database was searched and the results showed a maximal number of hits for low back pain and total hip arthroplasty. We demonstrated that despite a 10-fold increase in the number of evidence-based publications in the past 10 years, the rate of free systematic reviews in the general biomedical literature did not change for the last two decades. In addition, the average percentage of free open access systematic reviews and meta-analyses for the commonest painful musculoskeletal conditions and orthopedic procedures was 20% and 18%, respectively. Those results were significantly lower than those of the systematic reviews and meta-analyses in the remaining biomedical research. Such findings could indicate a divergence between the efforts engaged at promoting evidence-based principles and those at disseminating evidence-based findings in the field of musculoskeletal disease and trauma. The high processing fee is thought to be a major limitation when considering open access model for publication. © 2015 Chinese Cochrane Center, West China Hospital of Sichuan University and Wiley Publishing Asia Pty Ltd.
Barton, Hugh A; Chiu, Weihsueh A; Setzer, R Woodrow; Andersen, Melvin E; Bailer, A John; Bois, Frédéric Y; Dewoskin, Robert S; Hays, Sean; Johanson, Gunnar; Jones, Nancy; Loizou, George; Macphail, Robert C; Portier, Christopher J; Spendiff, Martin; Tan, Yu-Mei
2007-10-01
Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state-of-the-science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as case studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.
GeNets: a unified web platform for network-based genomic analyses.
Li, Taibo; Kim, April; Rosenbluh, Joseph; Horn, Heiko; Greenfeld, Liraz; An, David; Zimmer, Andrew; Liberzon, Arthur; Bistline, Jon; Natoli, Ted; Li, Yang; Tsherniak, Aviad; Narayan, Rajiv; Subramanian, Aravind; Liefeld, Ted; Wong, Bang; Thompson, Dawn; Calvo, Sarah; Carr, Steve; Boehm, Jesse; Jaffe, Jake; Mesirov, Jill; Hacohen, Nir; Regev, Aviv; Lage, Kasper
2018-06-18
Functional genomics networks are widely used to identify unexpected pathway relationships in large genomic datasets. However, it is challenging to compare the signal-to-noise ratios of different networks and to identify the optimal network with which to interpret a particular genetic dataset. We present GeNets, a platform in which users can train a machine-learning model (Quack) to carry out these comparisons and execute, store, and share analyses of genetic and RNA-sequencing datasets.
NASA Astrophysics Data System (ADS)
Hu, Kun; Zhu, Qi-zhi; Chen, Liang; Shao, Jian-fu; Liu, Jian
2018-06-01
As confining pressure increases, crystalline rocks of moderate porosity usually undergo a transition in failure mode from localized brittle fracture to diffused damage and ductile failure. This transition has been widely reported experimentally for several decades; however, satisfactory modeling is still lacking. The present paper aims at modeling the brittle-ductile transition process of rocks under conventional triaxial compression. Based on quantitative analyses of experimental results, it is found that there is a quite satisfactory linearity between the axial inelastic strain at failure and the confining pressure prescribed. A micromechanics-based frictional damage model is then formulated using an associated plastic flow rule and a strain energy release rate-based damage criterion. The analytical solution to the strong plasticity-damage coupling problem is provided and applied to simulate the nonlinear mechanical behaviors of Tennessee marble, Indiana limestone and Jinping marble, each presenting a brittle-ductile transition in stress-strain curves.
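The reported linear relationship between axial inelastic strain at failure and confining pressure can be illustrated with a simple least-squares fit; the data points below are invented for illustration, not taken from the cited experiments on marble or limestone.

```python
# Least-squares fit of strain at failure versus confining pressure (hypothetical data).
import numpy as np

confining_pressure = np.array([0, 5, 10, 20, 30, 40])        # MPa (hypothetical)
inelastic_strain = np.array([0.2, 0.6, 1.1, 2.0, 2.9, 3.9])  # percent (hypothetical)

slope, intercept = np.polyfit(confining_pressure, inelastic_strain, deg=1)
pred = slope * confining_pressure + intercept
r2 = 1 - np.sum((inelastic_strain - pred) ** 2) / np.sum(
        (inelastic_strain - inelastic_strain.mean()) ** 2)

print(f"strain_at_failure ≈ {slope:.3f} * p_conf + {intercept:.3f}  (R² = {r2:.3f})")
```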
Spatial modeling in ecology: the flexibility of eigenfunction spatial analyses.
Griffith, Daniel A; Peres-Neto, Pedro R
2006-10-01
Recently, analytical approaches based on the eigenfunctions of spatial configuration matrices have been proposed in order to consider explicitly spatial predictors. The present study demonstrates the usefulness of eigenfunctions in spatial modeling applied to ecological problems and shows equivalencies of and differences between the two current implementations of this methodology. The two approaches in this category are the distance-based (DB) eigenvector maps proposed by P. Legendre and his colleagues, and spatial filtering based upon geographic connectivity matrices (i.e., topology-based; CB) developed by D. A. Griffith and his colleagues. In both cases, the goal is to create spatial predictors that can be easily incorporated into conventional regression models. One important advantage of these two approaches over any other spatial approach is that they provide a flexible tool that allows the full range of general and generalized linear modeling theory to be applied to ecological and geographical problems in the presence of nonzero spatial autocorrelation.
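A minimal sketch of the connectivity-based flavour of this methodology: eigenvectors of a doubly-centred spatial connectivity matrix are extracted and supplied as additional predictors in an ordinary regression. The site coordinates, the neighbourhood rule, the number of retained eigenvectors and the response are all assumed for illustration.

```python
# Eigenfunction spatial filtering sketch: Moran-style eigenvectors as spatial predictors.
import numpy as np

rng = np.random.default_rng(3)
n = 50
xy = rng.uniform(0, 100, size=(n, 2))                      # site coordinates (assumed)

# Binary connectivity: sites within 25 distance units are neighbours (assumed rule).
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
W = ((d > 0) & (d < 25)).astype(float)

# Double-centre W and extract its leading eigenvectors.
I = np.eye(n)
H = I - np.ones((n, n)) / n
vals, vecs = np.linalg.eigh(H @ W @ H)
E = vecs[:, np.argsort(vals)[::-1][:5]]                    # keep 5 leading eigenvectors

# Use them alongside an environmental predictor in ordinary least squares.
env = rng.normal(size=n)
y = 2.0 * env + 3.0 * E[:, 0] + rng.normal(scale=0.5, size=n)   # synthetic response
X = np.column_stack([np.ones(n), env, E])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated coefficients (intercept, env, 5 spatial filters):", beta.round(2))
```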
Functional connectivity in replicated urban landscapes in the land snail (Cornu aspersum).
Balbi, Manon; Ernoult, Aude; Poli, Pedro; Madec, Luc; Guiller, Annie; Martin, Marie-Claire; Nabucet, Jean; Beaujouan, Véronique; Petit, Eric J
2018-03-01
Urban areas are highly fragmented and thereby exert strong constraints on individual dispersal. Despite this, some species manage to persist in urban areas, such as the garden snail, Cornu aspersum, which is common in cityscapes despite its low mobility. Using landscape genetic approaches, we combined study area replication and multiscale analysis to determine how landscape composition, configuration and connectivity influence snail dispersal across urban areas. At the overall landscape scale, areas with a high percentage of roads decreased genetic differentiation between populations. At the population scale, genetic differentiation was positively linked with building surface, the proportion of borders where wooded patches and roads appeared side by side and the proportion of borders combining wooded patches and other impervious areas. Analyses based on pairwise genetic distances validated the isolation-by-distance and isolation-by-resistance models for this land snail, with an equal fit to least-cost paths and circuit-theory-based models. Each of the 12 landscapes analysed separately yielded specific relations to environmental features, whereas analyses integrating all replicates highlighted general common effects. Our results suggest that urban transport infrastructures facilitate passive snail dispersal. At a local scale, corresponding to active dispersal, unfavourable habitats (wooded and impervious areas) isolate populations. This work upholds the use of replicated landscapes to increase the generalizability of landscape genetics results and shows how multiscale analyses provide insight into scale-dependent processes. © 2018 John Wiley & Sons Ltd.
Survival Regression Modeling Strategies in CVD Prediction.
Barkhordari, Mahnaz; Padyab, Mojgan; Sardarinia, Mahsa; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza
2016-04-01
A fundamental part of prevention is prediction. Potential predictors are the sine qua non of prediction models. However, whether incorporating novel predictors into prediction models can be directly translated into added predictive value remains an area of dispute. The difference between the predictive power of a predictive model with (enhanced model) and without (baseline model) a certain predictor is generally regarded as an indicator of the predictive value added by that predictor. Indices such as discrimination and calibration have long been used in this regard. Recently, the use of added predictive value has been suggested when comparing the predictive performances of models with and without novel biomarkers. User-friendly statistical software capable of implementing novel statistical procedures is conspicuously lacking. This shortcoming has restricted implementation of such novel model assessment methods. We aimed to construct Stata commands to help researchers obtain the aforementioned statistical indices. We have written Stata commands that are intended to help researchers obtain the following: 1, the Nam-D'Agostino χ2 goodness-of-fit test; 2, cut point-free and cut point-based net reclassification improvement index (NRI), relative and absolute integrated discriminatory improvement index (IDI), and survival-based regression analyses. We applied the commands to real data on women participating in the Tehran lipid and glucose study (TLGS) to examine if information relating to a family history of premature cardiovascular disease (CVD), waist circumference, and fasting plasma glucose can improve the predictive performance of Framingham's general CVD risk algorithm. The command is adpredsurv for survival models. Herein we have described the Stata package "adpredsurv" for calculation of the Nam-D'Agostino χ2 goodness-of-fit test as well as cut point-free and cut point-based NRI, relative and absolute IDI, and survival-based regression analyses. We hope this work encourages the use of novel methods in examining the predictive capacity of the emerging plethora of novel biomarkers.
Graphic-based musculoskeletal model for biomechanical analyses and animation.
Chao, Edmund Y S
2003-04-01
The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.
Precursor model and preschool science learning about shadows formation
NASA Astrophysics Data System (ADS)
Delserieys, Alice; Jégou, Corinne; Boilevin, Jean-Marie; Ravanis, Konstantinos
2018-04-01
This work is based on the idea that young children benefit from the early introduction of scientific concepts. Few studies describe didactic strategies focusing on physics understanding for young children and analyse their effectiveness in standard classroom environments.
Model-based analyses: Promises, pitfalls, and example applications to the study of cognitive control
Mars, Rogier B.; Shea, Nicholas J.; Kolling, Nils; Rushworth, Matthew F. S.
2011-01-01
We discuss a recent approach to investigating cognitive control, which has the potential to deal with some of the challenges inherent in this endeavour. In a model-based approach, the researcher defines a formal, computational model that performs the task at hand and whose performance matches that of a research participant. The internal variables in such a model might then be taken as proxies for latent variables computed in the brain. We discuss the potential advantages of such an approach for the study of the neural underpinnings of cognitive control and its pitfalls, and we make explicit the assumptions underlying the interpretation of data obtained using this approach. PMID:20437297
NASA Astrophysics Data System (ADS)
Zhou, Zongchuan; Dang, Dongsheng; Qi, Caijuan; Tian, Hongliang
2018-02-01
It is of great significance to make accurate forecasting for the power consumption of high energy-consuming industries. A forecasting model for power consumption of high energy-consuming industries based on system dynamics is proposed in this paper. First, several factors that have influence on the development of high energy-consuming industries in recent years are carefully dissected. Next, by analysing the relationship between each factor and power consumption, the system dynamics flow diagram and equations are set up to reflect the relevant relationships among variables. In the end, the validity of the model is verified by forecasting the power consumption of electrolytic aluminium industry in Ningxia according to the proposed model.
A model-based test for treatment effects with probabilistic classifications.
Cavagnaro, Daniel R; Davis-Stober, Clintin P
2018-05-21
Within modern psychology, computational and statistical models play an important role in describing a wide variety of human behavior. Model selection analyses are typically used to classify individuals according to the model(s) that best describe their behavior. These classifications are inherently probabilistic, which presents challenges for performing group-level analyses, such as quantifying the effect of an experimental manipulation. We answer this challenge by presenting a method for quantifying treatment effects in terms of distributional changes in model-based (i.e., probabilistic) classifications across treatment conditions. The method uses hierarchical Bayesian mixture modeling to incorporate classification uncertainty at the individual level into the test for a treatment effect at the group level. We illustrate the method with several worked examples, including a reanalysis of the data from Kellen, Mata, and Davis-Stober (2017), and analyze its performance more generally through simulation studies. Our simulations show that the method is both more powerful and less prone to type-1 errors than Fisher's exact test when classifications are uncertain. In the special case where classifications are deterministic, we find a near-perfect power-law relationship between the Bayes factor, derived from our method, and the p value obtained from Fisher's exact test. We provide code in an online supplement that allows researchers to apply the method to their own data. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Bae, Gihyun; Huh, Hoon; Park, Sungho
This paper deals with a regression model for the lightweight and crashworthiness-enhancement design of automotive parts in a frontal car crash. The ULSAB-AVC model is employed for the crash analysis, and effective parts are selected based on the amount of energy absorbed during the crash. Finite element analyses are carried out for the designated design cases in order to investigate crashworthiness and weight according to the material and thickness of the main energy-absorbing parts. Based on the simulation results, a regression analysis is performed to construct a regression model for the lightweight and crashworthiness-enhancement design of automotive parts. An example of weight reduction of the main energy-absorbing parts demonstrates the validity of the constructed regression model.
One Giant Leap for Categorizers: One Small Step for Categorization Theory
Smith, J. David; Ell, Shawn W.
2015-01-01
We explore humans’ rule-based category learning using analytic approaches that highlight their psychological transitions during learning. These approaches confirm that humans show qualitatively sudden psychological transitions during rule learning. These transitions contribute to the theoretical literature contrasting single vs. multiple category-learning systems, because they seem to reveal a distinctive learning process of explicit rule discovery. A complete psychology of categorization must describe this learning process, too. Yet extensive formal-modeling analyses confirm that a wide range of current (gradient-descent) models cannot reproduce these transitions, including influential rule-based models (e.g., COVIS) and exemplar models (e.g., ALCOVE). It is an important theoretical conclusion that existing models cannot explain humans’ rule-based category learning. The problem these models have is the incremental algorithm by which learning is simulated. Humans descend no gradient in rule-based tasks. Very different formal-modeling systems will be required to explain humans’ psychology in these tasks. An important next step will be to build a new generation of models that can do so. PMID:26332587
Fast-Running Aeroelastic Code Based on Unsteady Linearized Aerodynamic Solver Developed
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Keith, T., Jr.
2003-01-01
The NASA Glenn Research Center has been developing aeroelastic analyses for turbomachines for use by NASA and industry. An aeroelastic analysis consists of a structural dynamic model, an unsteady aerodynamic model, and a procedure to couple the two models. The structural models are well developed. Hence, most of the development for the aeroelastic analysis of turbomachines has involved adapting and using unsteady aerodynamic models. Two methods are used in developing unsteady aerodynamic analysis procedures for the flutter and forced response of turbomachines: (1) the time domain method and (2) the frequency domain method. Codes based on time domain methods require considerable computational time and, hence, cannot be used during the design process. Frequency domain methods eliminate the time dependence by assuming harmonic motion and, hence, require less computational time. Early frequency domain analysis methods neglected the important physics of steady loading on the analyses for simplicity. A fast-running unsteady aerodynamic code, LINFLUX, which includes steady loading and is based on the frequency domain method, has been modified for flutter and response calculations. LINFLUX solves unsteady linearized Euler equations for calculating the unsteady aerodynamic forces on the blades, starting from a steady nonlinear aerodynamic solution. First, we obtained a steady aerodynamic solution for a given flow condition using the nonlinear unsteady aerodynamic code TURBO. A blade vibration analysis was done to determine the frequencies and mode shapes of the vibrating blades, and an interface code was used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor was used to interpolate the mode shapes from the structural dynamic mesh onto the computational dynamics mesh. Then, we used LINFLUX to calculate the unsteady aerodynamic forces for a given mode, frequency, and phase angle. A postprocessor read these unsteady pressures and calculated the generalized aerodynamic forces, eigenvalues, and response amplitudes. The eigenvalues determine the flutter frequency and damping. As a test case, the flutter of a helical fan was calculated with LINFLUX and compared with calculations from TURBO-AE, a nonlinear time domain code, and from ASTROP2, a code based on linear unsteady aerodynamics.
NASA Astrophysics Data System (ADS)
Fettahlıoğlu, Pınar; Aydoğdu, Mustafa
2018-04-01
The purpose of this research is to investigate the effect of using argumentation and problem-based learning approaches on the development of environmentally responsible behaviours among pre-service science teachers. Experimental activities were implemented for 14 weeks for 52 class hours in an environmental education class within a science teaching department. A mixed method was used as a research design; particularly, a special type of Concurrent Nested Strategy was applied. The quantitative portion was based on the one-group pre-test and post-test models, and the qualitative portion was based on the holistic multiple-case study method. The quantitative portion of the research was conducted with 34 third-year pre-service science teachers studying at a state university. The qualitative portion of the study was conducted with six pre-service science teachers selected among the 34 pre-service science teachers based on the pre-test results obtained from an environmentally responsible behaviour scale. t tests for dependent groups were used to analyse quantitative data. Both descriptive and content analyses of the qualitative data were performed. The results of the study showed that the use of the argumentation and problem-based learning approaches significantly contributed to the development of environmentally responsible behaviours among pre-service science teachers.
Mapping interictal epileptic discharges using mutual information between concurrent EEG and fMRI.
Caballero-Gaudes, César; Van de Ville, Dimitri; Grouiller, Frédéric; Thornton, Rachel; Lemieux, Louis; Seeck, Margitta; Lazeyras, François; Vulliemoz, Serge
2013-03-01
The mapping of haemodynamic changes related to interictal epileptic discharges (IED) in simultaneous electroencephalography (EEG) and functional MRI (fMRI) studies is usually carried out by means of EEG-correlated fMRI analyses where the EEG information specifies the model to test on the fMRI signal. The sensitivity and specificity critically depend on the accuracy of EEG detection and the validity of the haemodynamic model. In this study we investigated whether an information theoretic analysis based on the mutual information (MI) between the presence of epileptic activity on EEG and the fMRI data can provide further insights into the haemodynamic changes related to interictal epileptic activity. The important features of MI are that: 1) both recording modalities are treated symmetrically; 2) no a priori model of the haemodynamic response function, or assumption of a linear relationship between the spiking activity and BOLD responses, is required; and 3) no parametric model for the type of noise or its probability distribution is necessary for the computation of MI. Fourteen patients with pharmaco-resistant focal epilepsy underwent EEG-fMRI, and intracranial EEG and/or surgical resection with positive postoperative outcome (seizure freedom or considerable reduction in seizure frequency) was available in 7/14 patients. We used nonparametric statistical assessment of the MI maps based on a four-dimensional wavelet packet resampling method. The results of MI were compared to the statistical parametric maps obtained with two conventional General Linear Model (GLM) analyses based on the informed basis set (canonical HRF and its temporal and dispersion derivatives) and the Finite Impulse Response (FIR) models. The MI results were concordant with the electro-clinically or surgically defined epileptogenic area in 8/14 patients and showed the same degree of concordance as the results obtained with the GLM-based methods in 12 patients (7 concordant and 5 discordant). In one patient, the information theoretic analysis improved the delineation of the irritative zone compared with the GLM-based methods. Our findings suggest that an information theoretic analysis can provide clinically relevant information about the BOLD signal changes associated with the generation and propagation of interictal epileptic discharges. The concordance between the MI, GLM and FIR maps supports the validity of the assumptions adopted in GLM-based analyses of interictal epileptic activity with EEG-fMRI in such a manner that they do not significantly constrain the localization of the epileptogenic zone. Copyright © 2012 Elsevier Inc. All rights reserved.
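A toy sketch of the information-theoretic quantity at the core of this analysis: a plug-in mutual-information estimate between a binary spike indicator and a binned voxel time series. The signals are synthetic and the bin choices arbitrary; the wavelet-based nonparametric inference of the study is not reproduced.

```python
# Plug-in mutual information between a binary IED indicator and a binned BOLD series.
import numpy as np

rng = np.random.default_rng(4)
T = 600                                                     # time points
spikes = (rng.uniform(size=T) < 0.1).astype(int)            # IED presence per scan (synthetic)
hrf_like = np.convolve(spikes, np.exp(-np.arange(10) / 3.0))[:T]   # crude lagged response
bold = hrf_like + rng.normal(scale=0.8, size=T)             # synthetic voxel series

def mutual_information(x_disc, y_disc):
    """Plug-in MI estimate (nats) from the joint histogram of two discrete series."""
    joint = np.histogram2d(x_disc, y_disc, bins=[np.unique(x_disc).size,
                                                 np.unique(y_disc).size])[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

bold_binned = np.digitize(bold, np.quantile(bold, [0.25, 0.5, 0.75]))   # 4 bins
print("MI (nats):", round(float(mutual_information(spikes, bold_binned)), 4))
```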
A Multivariate Model and Analysis of Competitive Strategy in the U.S. Hardwood Lumber Industry
Robert J. Bush; Steven A. Sinclair
1991-01-01
Business-level competitive strategy in the hardwood lumber industry was modeled through the identification of strategic groups among large U.S. hardwood lumber producers. Strategy was operationalized using a measure based on the variables developed by Dess and Davis (1984). Factor and cluster analyses were used to define strategic groups along the dimensions of cost...
ERIC Educational Resources Information Center
Nikolaidou, Georgia N.
2012-01-01
This exploratory work describes and analyses the collaborative interactions that emerge during computer-based music composition in the primary school. The study draws on socio-cultural theories of learning, originated within Vygotsky's theoretical context, and proposes a new model, namely Computer-mediated Praxis and Logos under Synergy (ComPLuS).…
Choosing a Transformation in Analyses of Insect Counts from Contagious Distributions with Low Means
W.D. Pepper; S.J. Zarnoch; G.L. DeBarr; P. de Groot; C.D. Tangren
1997-01-01
Guidelines based on computer simulation are suggested for choosing a transformation of insect counts from negative binomial distributions with low mean counts and high levels of contagion. Typical values and ranges of negative binomial model parameters were determined by fitting the model to data from 19 entomological field studies. Random sampling of negative binomial...
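A small simulation in the spirit of the guidelines above: negative binomial counts with low means and strong contagion are generated, and the variance of two common transformations is compared across means. The dispersion value and means are illustrative, not the values fitted to the 19 field studies.

```python
# Compare variance stabilisation of two transformations on simulated negative binomial counts.
import numpy as np

rng = np.random.default_rng(5)
k = 0.5                                     # dispersion parameter (small k = high contagion), assumed
means = [0.5, 1.0, 2.0, 4.0]
n = 20_000

print(f"{'mean':>6} {'var(raw)':>10} {'var(log(x+1))':>14} {'var(sqrt(x+3/8))':>17}")
for m in means:
    # numpy parameterises NB by (n=k, p); mean = k*(1-p)/p  =>  p = k/(k+m)
    x = rng.negative_binomial(k, k / (k + m), size=n)
    print(f"{m:>6.1f} {x.var():>10.2f} "
          f"{np.log(x + 1).var():>14.3f} {np.sqrt(x + 3/8).var():>17.3f}")
```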
ERIC Educational Resources Information Center
Leow, Christine; Wen, Xiaoli; Korfmacher, Jon
2015-01-01
This article compares regression modeling and propensity score analysis as different types of statistical techniques used in addressing selection bias when estimating the impact of two-year versus one-year Head Start on children's school readiness. The analyses were based on the national Head Start secondary dataset. After controlling for…
ERIC Educational Resources Information Center
Etheridge Woodson, Stephani; Szkupinski Quiroga, Seline; Underiner, Tamara; Farid Karimi, Robert
2017-01-01
Growing from a multi-year and multidisciplinary research and applied arts investigative team based in North America, this essay presents a model of how performative engagements contribute to individual behavioural change in wellness practices. To be even more specific, this essay analyses and theorises the mechanisms involved in the application of…
The Two- and Three-Dimensional Models of the HK-WISC: A Confirmatory Factor Analysis.
ERIC Educational Resources Information Center
Chan, David W.; Lin, Wen-Ying
1996-01-01
Confirmatory analyses on the Hong Kong Wechsler Intelligence Scale for Children (HK-WISC) provided support for composite score interpretation based on the two- and three-dimensional models across age levels. The test sample comprised 1,100 children, ranging in age from 5 to 15 years at all 11 age levels specified by the HK-WISC. (KW)
ERIC Educational Resources Information Center
Bartolucci, Francesco; Solis-Trapala, Ivonne L.
2010-01-01
We demonstrate the use of a multidimensional extension of the latent Markov model to analyse data from studies with repeated binary responses in developmental psychology. In particular, we consider an experiment based on a battery of tests which was administered to pre-school children, at three time periods, in order to measure their inhibitory…
Stage Structure of Moral Development: A Comparison of Alternative Models.
ERIC Educational Resources Information Center
Hau, Kit-Tai
This study evaluated the stage structure of several quasi-simplex and non-simplex models of moral development across two domains in a British and a Chinese sample. Analyses were based on data reported by Sachs (1992): the Chinese sample consisted of 1,005 students from grade 9 to post-college, and the British sample consisted of…
Stochastic GARCH dynamics describing correlations between stocks
NASA Astrophysics Data System (ADS)
Prat-Ortega, G.; Savel'ev, S. E.
2014-09-01
The ARCH and GARCH processes have been successfully used for modelling price dynamics such as stock returns or foreign exchange rates. Analysing the long range correlations between stocks, we propose a model, based on the GARCH process, which is able to describe the main characteristics of the stock price correlations, including the mean, variance, probability density distribution and the noise spectrum.
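For reference, a bare GARCH(1,1) simulation of the kind the proposed model builds upon; the parameter values are illustrative, not fitted to market data.

```python
# Simulate a standard GARCH(1,1) return series (illustrative parameters).
import numpy as np

rng = np.random.default_rng(6)
T = 5000
omega, alpha, beta = 1e-6, 0.08, 0.90        # assumed GARCH(1,1) parameters

sigma2 = np.empty(T)
r = np.empty(T)
sigma2[0] = omega / (1 - alpha - beta)       # unconditional variance
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, T):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

print("annualised volatility (approx.):", float(np.sqrt(252 * r.var())))
print("excess kurtosis of returns:", float(((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3))
```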
Analyzing endocrine system conservation and evolution.
Bonett, Ronald M
2016-08-01
Analyzing variation in rates of evolution can provide important insights into the factors that constrain trait evolution, as well as those that promote diversification. Metazoan endocrine systems exhibit apparent variation in evolutionary rates of their constituent components at multiple levels, yet relatively few studies have quantified these patterns and analyzed them in a phylogenetic context. This may be in part due to historical and current data limitations for many endocrine components and taxonomic groups. However, recent technological advancements such as high-throughput sequencing provide the opportunity to collect large-scale comparative data sets for even non-model species. Such ventures will produce a fertile data landscape for evolutionary analyses of nucleic acid and amino acid based endocrine components. Here I summarize evolutionary rate analyses that can be applied to categorical and continuous endocrine traits, and also those for nucleic acid and protein-based components. I emphasize analyses that could be used to test whether other variables (e.g., ecology, ontogenetic timing of expression, etc.) are related to patterns of rate variation and endocrine component diversification. The application of phylogenetic-based rate analyses to comparative endocrine data will greatly enhance our understanding of the factors that have shaped endocrine system evolution. Copyright © 2016 Elsevier Inc. All rights reserved.
Simulation model for electron irradiated IGZO thin film transistors
NASA Astrophysics Data System (ADS)
Dayananda, G. K.; Shantharama Rai, C.; Jayarama, A.; Kim, Hyun Jae
2018-02-01
An efficient drain-current simulation model for the effect of electron irradiation on the electrical parameters of amorphous In-Ga-Zn-O (IGZO) thin-film transistors is developed. The model is based on device specifications such as gate capacitance, channel length, channel width and flat-band voltage. Electrical parameters of un-irradiated IGZO samples were simulated and compared with the experimentally measured parameters and with those of samples irradiated with a 1 kGy electron dose. The effect of electron irradiation on the IGZO sample was thereby analysed by means of the developed mathematical model.
NASA Technical Reports Server (NTRS)
Fukumori, I.; Raghunath, R.; Fu, L. L.
1996-01-01
The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability, over periods of days to a year, is examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.
Crosswind stability of FSAE race car considering the location of the pressure center
NASA Astrophysics Data System (ADS)
Zhao, Lijun; He, Huimin; Wang, Jianfeng; Li, Yaou; Yang, Na; Liu, Yiqun
2017-09-01
An 8-DOF vehicle dynamic model of an FSAE race car was established, including lateral motion, pitch motion, roll motion, yaw motion and the rotation of the four tires. Models of the aerodynamic lateral force and of the pressure center were set up based on the vehicle speed and crosswind parameters. The simulation model was built in Simulink to analyse crosswind stability under the straight-line condition. Results showed that crosswind strongly influences the yaw velocity and sideslip angle.
Non-LTE spectral analyses of the lately discovered DB-gap white dwarfs from the SDSS
NASA Astrophysics Data System (ADS)
Hügelmeyer, S. D.; Dreizler, S.
2009-06-01
For a long time, no hydrogen-deficient white dwarfs were known with effective temperatures between 30 kK and 45 kK, i.e. hotter than DB white dwarfs but cooler than DO white dwarfs. Therefore, this temperature range was long known as the DB-gap. Only recently, the SDSS provided spectra of several candidate DB-gap stars. First analyses based on model spectra calculated under the assumption of local thermodynamic equilibrium (LTE) confirmed that these stars had 30 kK < Teff < 45 kK (Eisenstein et al. 2006). It has been shown for DO white dwarfs that the relaxation of LTE is necessary to account for non-local effects in the atmosphere caused by the intense radiation field. Therefore, we calculated a non-LTE model grid and re-analysed the aforementioned set of SDSS spectra. Our results confirm the existence of DB-gap white dwarfs.
NASA Astrophysics Data System (ADS)
Farhat, I. A. H.; Gale, E.; Alpha, C.; Isakovic, A. F.
2017-07-01
Optimizing the energy performance of Magnetic Tunnel Junctions (MTJs) is key to embedding Spin Transfer Torque-Random Access Memory (STT-RAM) in low-power circuits. Due to the complex interdependencies of the parameters and variables determining the device operating energy, it is important to analyse the parameters with the most effective control of MTJ power. The impact of the threshold current density, Jco, on the energy and the impact of HK on Jco are studied analytically, following the expressions that stem from the Landau-Lifshitz-Gilbert-Slonczewski (LLGS-STT) model. In addition, the impact of other magnetic material parameters, such as Ms, and geometric parameters, such as tfree and λ, is discussed. A device modelling study was conducted to analyse the impact at the circuit level. A nano-magnetism simulation based on the NMAG package was conducted to analyse the impact of controlling HK on the switching dynamics of the film.
An approach to investigating linkage for bipolar disorder using large Costa Rican pedigrees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freimer, N.B.; Reus, V.I.; Vinogradov, S.
1996-05-31
Despite the evidence that major gene effects exist for bipolar disorder (BP), efforts to map BP loci have so far been unsuccessful. A strategy for mapping BP loci is described, focused on investigation of large pedigrees from a genetically homogeneous population, that of Costa Rica. This approach is based on the use of a conservative definition of the BP phenotype in preparation for whole genome screening with polymorphic markers. Linkage simulation analyses are utilized to indicate the probability of detecting evidence suggestive of linkage, using these pedigrees. These analyses are performed under a series of single locus models, ranging from recessive to nearly dominant, utilizing both lod score and affected pedigree member analyses. Additional calculations demonstrate that with any of the models employed, most of the information for linkage derives from affected rather than unaffected individuals. 26 refs., 2 figs., 5 tabs.
Alderling, Magnus; Theorell, Töres; de la Torre, Bartolomé; Lundberg, Ingvar
2006-01-01
Background: Previous studies of the relationship between job strain and blood or saliva cortisol levels have been small and based on selected occupational groups. Our aim was to examine the association between job strain and saliva cortisol levels in a population-based study in which a number of potential confounders could be adjusted for. Methods: The material derives from a population-based study in Stockholm on mental health and its potential determinants. Two data collections were performed three years apart with more than 8500 subjects responding to a questionnaire in both waves. In this paper our analyses are based on 529 individuals who held a job and participated in both waves as well as in an interview linked to the second wave. They gave saliva samples at awakening, half an hour later, at lunchtime and before going to bed on a weekday in close connection with the interview. Job control and job demands were assessed from the questionnaire in the second wave. Mixed models were used to analyse the association between the demand control model and saliva cortisol. Results: Women in low strain jobs (high control and low demands) had significantly lower cortisol levels half an hour after awakening than women in high strain (low control and high demands), active (high control and high demands) or passive jobs (low control and low demands). There were no significant differences between the groups during other parts of the day and furthermore there was no difference between the job strain, active and passive groups. For men, no differences were found between demand control groups. Conclusion: This population-based study, on a relatively large sample, weakly supports the hypothesis that the demand control model is associated with saliva cortisol concentrations. PMID:17129377
Development of soy lecithin based novel self-assembled emulsion hydrogels.
Singh, Vinay K; Pandey, Preeti M; Agarwal, Tarun; Kumar, Dilip; Banerjee, Indranil; Anis, Arfat; Pal, Kunal
2015-03-01
The current study reports the development and characterization of soy lecithin based novel self-assembled emulsion hydrogels. Sesame oil was used as the representative oil phase. Emulsion gels were formed when the concentration of soy lecithin was >40% w/w. Metronidazole was used as the model drug for the drug release and the antimicrobial tests. Microscopic study showed the apolar dispersed phase in an aqueous continuum phase, suggesting the formation of emulsion hydrogels. FTIR study indicated the formation of intermolecular hydrogen bonding, whereas, the XRD study indicated predominantly amorphous nature of the emulsion gels. Composition dependent mechanical and drug release properties of the emulsion gels were observed. In-depth analyses of the mechanical studies were done using Ostwald-de Waele power-law, Kohlrausch and Weichert models, whereas, the drug release profiles were modeled using Korsmeyer-Peppas and Peppas-Sahlin models. The mechanical analyses indicated viscoelastic nature of the emulsion gels. The release of the drug from the emulsion gels was diffusion mediated. The drug loaded emulsion gels showed good antimicrobial activity. The biocompatibility test using HaCaT cells (human keratinocytes) suggested biocompatibility of the emulsion gels. Copyright © 2015 Elsevier Ltd. All rights reserved.
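The Korsmeyer-Peppas model mentioned above is the power law Mt/Minf = k t^n; the sketch below fits it to an invented release profile with scipy to show how the diffusion exponent n is obtained. The data are illustrative, not those of the study.

```python
# Fit the Korsmeyer-Peppas power law to a drug-release profile (hypothetical data).
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1, 2, 4, 6, 8, 12])          # hours (hypothetical sampling times)
frac_released = np.array([0.12, 0.18, 0.27, 0.38, 0.47, 0.54, 0.65])  # Mt/Minf (hypothetical)

def korsmeyer_peppas(t, k, n):
    return k * t ** n

(k_fit, n_fit), _ = curve_fit(korsmeyer_peppas, t, frac_released, p0=[0.1, 0.5])
print(f"k = {k_fit:.3f}, n = {n_fit:.3f}")
# For thin-film geometry, n near 0.5 is conventionally read as Fickian (diffusion-
# controlled) release, consistent with the diffusion-mediated release reported above.
print("Fickian-like release" if n_fit < 0.6 else "anomalous transport")
```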
Choice of regularization in adjoint tomography based on two-dimensional synthetic tests
NASA Astrophysics Data System (ADS)
Valentová, Lubica; Gallovič, František; Růžek, Bohuslav; de la Puente, Josep; Moczo, Peter
2015-08-01
We present synthetic tests of 2-D adjoint tomography of surface wave traveltimes obtained by the ambient noise cross-correlation analysis across the Czech Republic. The data coverage may be considered perfect for tomography due to the density of the station distribution. Nevertheless, artefacts in the inferred velocity models arising from the data noise may still be observed when weak regularization (Gaussian smoothing of the misfit gradient) or too many iterations are considered. To examine the effect of the regularization and iteration number on the performance of the tomography in more detail we performed extensive synthetic tests. Instead of the typically used (although criticized) checkerboard test, we propose to carry out the tests with two different target models: a simple smooth model and a complex realistic model. The first test reveals the sensitivity of the result to the data noise, while the second helps to analyse the resolving power of the data set. For various noise and Gaussian smoothing levels, we analysed the convergence towards (or divergence from) the target model with increasing number of iterations. Based on the tests we identified the optimal regularization, which we then employed in the inversion of 16 and 20 s Love-wave group traveltimes.
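The regularisation examined here, Gaussian smoothing of the misfit gradient, can be sketched in a few lines; the grid size, smoothing length and step length below are arbitrary choices, and the gradient is a random stand-in for an adjoint-derived one.

```python
# One regularised steepest-descent update: smooth the misfit gradient with a Gaussian kernel.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(7)
nx, nz = 120, 80
model = np.full((nz, nx), 3.0)                          # background group velocity, km/s (assumed)
raw_gradient = rng.normal(scale=0.05, size=(nz, nx))    # stand-in for an adjoint gradient

sigma_grid_points = 6                                   # Gaussian smoothing length (the regularisation)
smooth_gradient = gaussian_filter(raw_gradient, sigma=sigma_grid_points)

step = 0.5
model_updated = model - step * smooth_gradient          # one iteration of the model update

print("gradient rms before/after smoothing:",
      raw_gradient.std().round(4), smooth_gradient.std().round(4))
```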
Probabilistic Design of a Wind Tunnel Model to Match the Response of a Full-Scale Aircraft
NASA Technical Reports Server (NTRS)
Mason, Brian H.; Stroud, W. Jefferson; Krishnamurthy, T.; Spain, Charles V.; Naser, Ahmad S.
2005-01-01
An approach is presented for carrying out the reliability-based design of a plate-like wing that is part of a wind tunnel model. The goal is to design the wind tunnel model to match the stiffness characteristics of the wing box of a flight vehicle while satisfying strength-based risk/reliability requirements that prevent damage to the wind tunnel model and fixtures. The flight vehicle is a modified F/A-18 aircraft. The design problem is solved using reliability-based optimization techniques. The objective function to be minimized is the difference between the displacements of the wind tunnel model and the corresponding displacements of the flight vehicle. The design variables control the thickness distribution of the wind tunnel model. Displacements of the wind tunnel model change with the thickness distribution, while displacements of the flight vehicle are a set of fixed data. The only constraint imposed is that the probability of failure is less than a specified value. Failure is assumed to occur if the stress caused by aerodynamic pressure loading is greater than the specified strength allowable. Two uncertain quantities are considered: the allowable stress and the thickness distribution of the wind tunnel model. Reliability is calculated using Monte Carlo simulation with response surfaces that provide approximate values of stresses. The response surface equations are, in turn, computed from finite element analyses of the wind tunnel model at specified design points. Because the response surface approximations were fit over a small region centered about the current design, the response surfaces were refit periodically as the design variables changed. Coarse-grained parallelism was used to simultaneously perform multiple finite element analyses. Studies carried out in this paper demonstrate that this scheme of using moving response surfaces and coarse-grained computational parallelism reduces the execution time of the Monte Carlo simulation enough to make the design problem tractable. The results of the reliability-based designs performed in this paper show that large decreases in the probability of stress-based failure can be realized with only small sacrifices in the ability of the wind tunnel model to represent the displacements of the full-scale vehicle.
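A compressed sketch of the response-surface Monte Carlo idea: a cheap analytic function stands in for the finite element stress analysis, a quadratic surface is fitted to a few design points, and the probability that stress exceeds a random strength allowable is estimated by simulation. All numbers are hypothetical.

```python
# Response-surface Monte Carlo estimate of a stress-based failure probability (illustrative).
import numpy as np

rng = np.random.default_rng(8)

def fe_stress(thickness):
    """Stand-in for an expensive FE stress analysis (stress falls with thickness)."""
    return 400.0 / thickness + 5.0 * (thickness - 2.0) ** 2

# Design points around the current design and a quadratic response surface fit.
t_design = np.linspace(1.6, 2.4, 7)
s_design = np.array([fe_stress(t) for t in t_design])
surface = np.poly1d(np.polyfit(t_design, s_design, deg=2))

# Monte Carlo with two uncertain quantities: thickness and strength allowable (assumed distributions).
n = 200_000
thickness = rng.normal(2.0, 0.05, size=n)
allowable = rng.normal(230.0, 10.0, size=n)
p_fail = np.mean(surface(thickness) > allowable)
print(f"estimated probability of stress-based failure: {p_fail:.4f}")
```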
Quantitative structure-activity relationship models that stand the test of time.
Davis, Andrew M; Wood, David J
2013-04-01
The pharmaceutical industry is in a period of intense change. While this has many drivers, attrition through the development process continues to be an important pressure. The emerging definitions of "compound quality" that are based on retrospective analyses of developmental attrition have highlighted a new direction for medicinal chemistry and the paradigm of "quality at the point of design". The time has come for retrospective analyses to catalyze prospective action. Quality at the point of design places pressure on the quality of our predictive models. Empirical QSAR models, when built with care, provide true predictive control, but their accuracy and precision can be improved. Here we describe AstraZeneca's experience of automation in QSAR model building and validation, and how an informatics system can provide a step-change in predictive power to project design teams, if they choose to use it.
Raab, Phillip Andrew; Claypoole, Keith Harvey; Hayashi, Kentaro; Baker, Charlene
2012-10-01
Based on the concept of allostatic load, this study proposed and evaluated a model for the relationship between childhood trauma, chronic medical conditions, and intervening variables affecting this relationship in individuals with severe mental illness. Childhood trauma, adult trauma, major depressive disorder symptoms, posttraumatic stress disorder symptoms, health risk factors, and chronic medical conditions were retrospectively assessed using a cross-sectional survey design in a sample of 117 individuals with severe mental illness receiving public mental health services. Path analyses produced a good-fitting model, with significant pathways from childhood to adult trauma and from adult trauma to chronic medical conditions. Multisample path analyses revealed the equivalence of the model across sex. The results support a model for the relationship between childhood and adult trauma and chronic medical conditions, which highlights the pathophysiological toll of cumulative trauma experienced across the life span and the pressing need to prevent retraumatization in this population.
Rotolo, Federico; Paoletti, Xavier; Burzykowski, Tomasz; Buyse, Marc; Michiels, Stefan
2017-01-01
Surrogate endpoints are often used in clinical trials instead of well-established hard endpoints for practical convenience. The meta-analytic approach relies on two measures of surrogacy: one at the individual level and one at the trial level. In the survival data setting, a two-step model based on copulas is commonly used. We present a new approach which employs a bivariate survival model with an individual random effect shared between the two endpoints and correlated treatment-by-trial interactions. We fit this model using auxiliary mixed Poisson models. We study via simulations the operating characteristics of this mixed Poisson approach as compared to the two-step copula approach. We illustrate the application of the methods on two individual patient data meta-analyses in gastric cancer, in the advanced setting (4069 patients from 20 randomized trials) and in the adjuvant setting (3288 patients from 14 randomized trials).
NASA Astrophysics Data System (ADS)
Cich, Matthew J.; Guillaume, Alexandre; Drouin, Brian; Benner, D. Chris
2017-06-01
Multispectrum analysis can be a challenge for a variety of reasons. It can be computationally intensive to fit a proper line shape model, especially for high-resolution experimental data. Band-wide analyses, which include many transitions along with their interactions across many pressures and temperatures, are essential to accurately model, for example, atmospherically relevant systems. Labfit is a fast multispectrum analysis program originally developed by D. Chris Benner with a text-based interface. More recently, at JPL, a graphical user interface was developed with the goal of increasing not only the ease of use but also the number of potential users. The HTP lineshape model has been added to Labfit, keeping it up to date with community standards. Recent analyses using Labfit will be shown to demonstrate its ability to competently handle large experimental datasets, including high-order lineshape effects, that are otherwise unmanageable.
Analysing child mortality in Nigeria with geoadditive discrete-time survival models.
Adebayo, Samson B; Fahrmeir, Ludwig
2005-03-15
Child mortality reflects a country's level of socio-economic development and quality of life. In developing countries, mortality rates are not only influenced by socio-economic, demographic and health variables but they also vary considerably across regions and districts. In this paper, we analysed child mortality in Nigeria with flexible geoadditive discrete-time survival models. This class of models allows us to measure small-area district-specific spatial effects simultaneously with possibly non-linear or time-varying effects of other factors. Inference is fully Bayesian and uses computationally efficient Markov chain Monte Carlo (MCMC) simulation techniques. The application is based on the 1999 Nigeria Demographic and Health Survey. Our method assesses effects at a high level of temporal and spatial resolution not available with traditional parametric models, and the results provide some evidence on how to reduce child mortality by improving socio-economic and public health conditions. Copyright (c) 2004 John Wiley & Sons, Ltd.
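The discrete-time survival backbone of such models can be sketched as a logit model fitted to expanded child-period records; the simulation below uses one assumed covariate and a made-up baseline hazard, and omits the geoadditive spatial and smooth terms (and the Bayesian MCMC inference) of the actual analysis.

```python
# Discrete-time survival as a logit model on child-period data (simulated, illustrative).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n_children, n_periods = 2000, 12
x = rng.binomial(1, 0.5, size=n_children)           # e.g. rural vs urban residence (assumed covariate)

rows = []
for i in range(n_children):
    for t in range(n_periods):
        logit = -4.5 + 0.6 * x[i] - 0.05 * t         # assumed baseline hazard and covariate effect
        died = rng.uniform() < 1 / (1 + np.exp(-logit))
        rows.append((1.0, t, x[i], int(died)))       # intercept, period, covariate, event indicator
        if died:
            break                                    # child leaves the risk set after the event

data = np.array(rows)
X, y = data[:, :3], data[:, 3]
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)   # intercept, period effect, covariate effect (roughly -4.5, -0.05, 0.6)
```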
Estimating statistical power for open-enrollment group treatment trials.
Morgan-Lopez, Antonio A; Saavedra, Lissette M; Hien, Denise A; Fals-Stewart, William
2011-01-01
Modeling turnover in group membership has been identified as a key barrier contributing to a disconnect between the manner in which behavioral treatment is conducted (open-enrollment groups) and the designs of substance abuse treatment trials (closed-enrollment groups, individual therapy). Latent class pattern mixture models (LCPMMs) are emerging tools for modeling data from open-enrollment groups with membership turnover in recently proposed treatment trials. The current article illustrates an approach to conducting power analyses for open-enrollment designs based on Monte Carlo simulation of LCPMMs, using parameters derived from published data from a randomized controlled trial comparing Seeking Safety with a Community Care condition for women presenting with comorbid posttraumatic stress disorder and substance use disorders. The example addresses discrepancies between the analysis framework assumed in power analyses of many recently proposed open-enrollment trials and the proposed use of LCPMMs for data analysis. Copyright © 2011 Elsevier Inc. All rights reserved.
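The LCPMM analysis step itself requires specialised mixture-modelling software, but the Monte Carlo logic of such a power analysis can be sketched generically: repeatedly simulate a trial under an assumed effect size, analyse each replicate, and record the rejection rate. A two-arm t-test stands in for the LCPMM below, and all design parameters are hypothetical.

```python
# Generic Monte Carlo power sketch: the LCPMM analysis step is replaced by a
# simple two-arm t-test; effect size, sample size and alpha are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_per_arm, effect_size, alpha, n_reps = 60, 0.5, 0.05, 2000

rejections = 0
for _ in range(n_reps):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(effect_size, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    rejections += p < alpha          # count replicates in which the effect is detected
print(f"estimated power: {rejections / n_reps:.2f}")
```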
Ghaffarzadegan, Navid; Hawley, Joshua; Desai, Anand
2014-03-01
The US government has been increasingly supporting postdoctoral training in biomedical sciences to develop the domestic research workforce. However, current trends suggest that mostly international researchers benefit from the funding, many of whom might leave the USA after training. In this paper, we describe a model used to analyse the flow of national versus international researchers into and out of postdoctoral training. We calibrate the model to the case of the USA and successfully replicate the data. We use the model to conduct simulation-based analyses of the effects of different policies on the diversity of postdoctoral researchers. Our model shows that capping the duration of postdoctoral careers, a policy proposed previously, favours international postdoctoral researchers. The analysis suggests that the leverage point for growing the domestic research workforce lies in pre-graduate education, and that many policies implemented at the postgraduate level have minimal or unintended effects on diversity.
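The paper's calibrated system dynamics model is not reproduced here; the sketch below only illustrates the stock-and-flow idea with two postdoc stocks, constant inflows, and a duration cap modelled as a uniformly higher exit rate. All rates are hypothetical, and under these symmetric assumptions the steady-state composition is set entirely by the inflow mix, which illustrates (but does not reproduce) the paper's point about pre-graduate leverage.

```python
# Minimal stock-flow sketch of the postdoc pipeline: two stocks (domestic,
# international), constant inflows, and a "duration cap" scenario modelled as
# a uniformly higher exit rate. All rates are hypothetical.
import numpy as np

def simulate(years=30, cap=False):
    dom, intl = 10_000.0, 15_000.0              # initial stocks (hypothetical)
    inflow_dom, inflow_intl = 3_000.0, 6_000.0  # entrants per year (hypothetical)
    exit_rate = 0.40 if cap else 0.25           # a duration cap raises the exit rate
    history = []
    for _ in range(years):
        dom += inflow_dom - exit_rate * dom
        intl += inflow_intl - exit_rate * intl
        history.append((dom, intl))
    return np.array(history)

for name, h in (("baseline", simulate(cap=False)), ("duration cap", simulate(cap=True))):
    dom, intl = h[-1]
    print(f"{name:12s} domestic share after 30 yr: {dom / (dom + intl):.2f}")
```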
NASA Astrophysics Data System (ADS)
Cao, Dongpu; Khajepour, Amir; Song, Xubin
2011-08-01
The flexible-wheel (FW) suspension concept has been regarded as one of the novel technologies for future planetary surface vehicles (PSVs). This study develops generalised models for the fundamental stiffness and damping properties and the power consumption characteristics of the FW suspension, with and without considering wheel-hub dimensions. A compliance rolling resistance (CRR) coefficient is also defined and derived for the FW suspension. Based on the generalised models and two dimensionless measures, suspension properties are analysed for two FW suspension configurations. A sensitivity analysis is performed to investigate the effects of the design parameters and operating conditions on the CRR and power consumption characteristics of the FW suspension. The modelling generalisation permits analyses of the fundamental properties and power consumption characteristics of different FW suspension designs in a uniform and convenient manner, which would serve as a theoretical foundation for the design of FW suspensions for future PSVs.
Chen, Po-Yi; Yang, Chien-Ming; Morin, Charles M
2015-05-01
The purpose of this study is to examine the factor structure of the Insomnia Severity Index (ISI) across samples recruited from different countries: we sought to identify the most appropriate factor model for the ISI and to examine its measurement invariance across these samples. Our analyses included one data set collected from a Taiwanese sample and two data sets obtained from samples in Hong Kong and Canada. The data set collected in Taiwan was analyzed with ordinal exploratory factor analysis (EFA) to identify the most appropriate factor model for the ISI. We then conducted a series of confirmatory factor analyses (CFAs), a special case of structural equation modeling (SEM) concerning the parameters of the measurement model, on the data collected in Canada and Hong Kong. The purposes of these CFAs were to cross-validate the results obtained from the EFA and to examine the cross-cultural measurement invariance of the ISI. The three-factor model outperformed the other models in terms of global fit indices in the Taiwanese sample, and its external validity was supported by the confirmatory factor analyses. Furthermore, the measurement invariance analyses showed that strong invariance holds between the samples from different cultures, providing evidence that ISI results obtained in different cultures are comparable. The factorial validity of the ISI is stable across populations. More importantly, its invariance across cultures suggests that the ISI is a valid measure of the insomnia severity construct across countries. Copyright © 2014 Elsevier B.V. All rights reserved.
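The ordinal EFA and multi-group CFA reported in the paper require dedicated SEM software; as a loose stand-in, the sketch below runs an ordinary maximum-likelihood factor analysis on simulated seven-item responses generated from three hypothetical factors. The loadings are invented, and neither the ordinal treatment nor the invariance tests are shown.

```python
# Loose stand-in for the exploratory step: ordinary (continuous) ML factor
# analysis on simulated 7-item data with three hypothetical latent factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
n = 500
latent = rng.normal(size=(n, 3))          # hypothetical severity/dissatisfaction/impact
loadings = np.array([[0.8, 0.1, 0.0],     # item 1
                     [0.7, 0.2, 0.1],     # item 2
                     [0.8, 0.0, 0.1],     # item 3
                     [0.1, 0.9, 0.1],     # item 4
                     [0.0, 0.2, 0.8],     # item 5
                     [0.1, 0.1, 0.7],     # item 6
                     [0.0, 0.1, 0.8]])    # item 7
items = latent @ loadings.T + rng.normal(scale=0.5, size=(n, 7))

fa = FactorAnalysis(n_components=3, rotation="varimax").fit(items)
print(np.round(fa.components_.T, 2))      # estimated loadings (items x factors)
```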
Oparaji, Uchenna; Sheu, Rong-Jiun; Bankhead, Mark; Austin, Jonathan; Patelli, Edoardo
2017-12-01
Artificial Neural Networks (ANNs) are commonly used in place of expensive models to reduce the computational burden of uncertainty quantification, reliability and sensitivity analyses. An ANN with a selected architecture is trained with the back-propagation algorithm on a few data points representative of the input/output relationship of the underlying model of interest. However, ANNs with different performance might be obtained from the same training data as a result of the random initialization of the weight parameters in each network, leading to uncertainty in selecting the best-performing ANN. On the other hand, using cross-validation to select the ANN with the highest R² value can bias the prediction, because R² cannot determine whether the prediction made by an ANN is biased. Additionally, R² does not indicate whether a model is adequate: it is possible to have a low R² for a good model and a high R² for a bad one. Hence, in this paper, we propose an approach to improve the robustness of predictions made by ANNs. The approach is based on a systematic combination of identically configured trained ANNs, coupling the Bayesian framework with model averaging. Additionally, the uncertainties of the robust prediction derived from the approach are quantified in terms of confidence intervals. To demonstrate the applicability of the proposed approach, two synthetic numerical examples are presented. Finally, the proposed approach is used to perform reliability and sensitivity analyses on a process simulation model of a UK nuclear effluent treatment plant developed by the National Nuclear Laboratory (NNL), treated in this study as a black box and employed with a set of training data as a test case. This model has been extensively validated against plant and experimental data and used to support the UK effluent discharge strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
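A minimal sketch of the general idea, not the authors' implementation: train several identically configured MLPs that differ only in their random initialisation, weight them by validation-set likelihood under a Gaussian error assumption as a crude model-averaging scheme, and report a weighted mean prediction with a spread-based interval. The data, architecture and weighting rule are all illustrative choices.

```python
# Sketch of combining identically configured ANNs by validation-likelihood
# weights (a simple stand-in for Bayesian model averaging); scikit-learn MLPs,
# synthetic data, Gaussian error assumption.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 400)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

nets, log_liks = [], []
for seed in range(10):                     # same architecture, different random inits
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=seed)
    net.fit(X_tr, y_tr)
    resid = y_val - net.predict(X_val)
    sigma2 = resid.var() + 1e-12
    # Gaussian log-likelihood of the validation residuals.
    log_liks.append(-0.5 * resid.size * (np.log(2 * np.pi * sigma2) + 1.0))
    nets.append(net)

log_liks = np.asarray(log_liks)
w = np.exp(log_liks - log_liks.max())
w /= w.sum()                               # model-averaging weights

X_new = np.linspace(-3, 3, 5).reshape(-1, 1)
preds = np.array([net.predict(X_new) for net in nets])   # shape (10, 5)
mean = w @ preds
spread = np.sqrt(w @ (preds - mean) ** 2)
for x, m, s in zip(X_new[:, 0], mean, spread):
    print(f"x={x:+.1f}  averaged prediction={m:+.3f}  +/-{1.96 * s:.3f} (spread)")
```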
McKenna, Stephen P; Ratcliffe, Julie; Meads, David M; Brazier, John E
2008-08-21
Pulmonary hypertension is a severe and incurable disease with poor prognosis. A suite of new disease-specific measures, the Cambridge Pulmonary Hypertension Outcome Review (CAMPHOR), was recently developed for use in this condition. The purpose of this study was to develop and validate a preference-based measure from the CAMPHOR that could be used in cost-utility analyses. Items were selected to cover the major issues addressed by the CAMPHOR QoL scale (activities, travelling, dependence and communication). These were used to create 36 health states that were valued by 249 people representative of the UK adult population, using the time trade-off (TTO) technique. Data from the TTO interviews were analysed using both aggregate- and individual-level modelling. Finally, the original CAMPHOR validation data were used to validate the new preference-based model. The predicted health state values ranged from 0.962 to 0.136. The mean-level model selected for analysing the data had good explanatory power (0.936), did not systematically over- or underestimate the observed mean health state values, and showed no evidence of autocorrelation in the prediction errors. The value of less than 1 reflects a background level of ill health in state 1111, as judged by the respondents. Scores derived from the new measure had excellent test-retest reliability (0.85) and construct validity. The CAMPHOR utility score appears better able to distinguish between WHO functional classes (II and III) than the EQ-5D and SF-6D. The tariff derived in this study can be used to classify an individual into a health state based on their responses to the CAMPHOR. The results of this study widen the evidence base for conducting economic evaluations of interventions designed to improve QoL for patients with PH.
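The valuation data and the exact CAMPHOR dimension structure are not available here, so the sketch below only illustrates the aggregate-level modelling step: ordinary least squares of simulated TTO values on level dummies for four hypothetical dimensions, yielding a decrement-style tariff.

```python
# Sketch of the aggregate TTO modelling step: OLS of simulated health-state
# values on level dummies for four hypothetical dimensions. The states,
# decrements and values below are invented, not the CAMPHOR valuation data.
import numpy as np

rng = np.random.default_rng(6)
n_states, n_dims, n_levels = 36, 4, 3

# Random level (1..3) on each of four hypothetical dimensions for 36 states.
states = rng.integers(1, n_levels + 1, size=(n_states, n_dims))

# Hypothetical "true" decrements for moving from level 1 to levels 2 and 3.
true_decrements = np.array([[0.00, 0.05, 0.12],
                            [0.00, 0.04, 0.10],
                            [0.00, 0.06, 0.15],
                            [0.00, 0.03, 0.08]])
state_decrement = np.array([sum(true_decrements[d, lv - 1] for d, lv in enumerate(row))
                            for row in states])
values = 0.96 - state_decrement + rng.normal(0, 0.02, n_states)   # simulated TTO values

# Design matrix: intercept plus a dummy for each dimension at levels 2 and 3.
columns = [np.ones(n_states)]
for d in range(n_dims):
    for lv in (2, 3):
        columns.append((states[:, d] == lv).astype(float))
X = np.column_stack(columns)

coef, *_ = np.linalg.lstsq(X, values, rcond=None)
print("intercept (best-state value):", round(coef[0], 3))
print("estimated decrements:", np.round(-coef[1:], 3))
```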
Reliability evaluation of microgrid considering incentive-based demand response
NASA Astrophysics Data System (ADS)
Huang, Ting-Cheng; Zhang, Yong-Jun
2017-07-01
Incentive-based demand response (IBDR) can guide customers to adjust their electricity consumption behaviour and actively curtail load. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. This paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, an IBDR dispatch model considering customers' comprehensive assessment and a customer response model are developed. Thirdly, a reliability evaluation method considering IBDR, based on Monte Carlo simulation, is proposed. Finally, the validity of the above models and method is studied through numerical tests on a modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of the microgrid.
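A minimal sketch of the Monte Carlo reliability step, assuming a deliberately simplified system: hourly outages of a grid tie and one DG unit are sampled, IBDR is modelled as voluntary curtailment of part of the load during shortfalls, and expected energy not served (EENS) is compared with and without IBDR. All failure rates, capacities and load figures are hypothetical.

```python
# Simplified Monte Carlo reliability sketch with IBDR modelled as voluntary
# load curtailment during shortfalls. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
years = 50
hours = 8760 * years
p_grid_out, p_dg_out = 0.002, 0.01        # per-hour outage probabilities (hypothetical)
grid_cap, dg_cap, load = 1.0, 0.4, 0.8    # MW (hypothetical)
ibdr_curtailable = 0.2                    # MW of load enrolled in IBDR (hypothetical)

# Sample hourly availability of the grid tie and the DG unit.
grid_up = rng.random(hours) > p_grid_out
dg_up = rng.random(hours) > p_dg_out
supply = grid_cap * grid_up + dg_cap * dg_up
shortfall = np.maximum(load - supply, 0.0)

eens_without = shortfall.sum() / years                                   # MWh per year
eens_with = np.maximum(shortfall - ibdr_curtailable, 0.0).sum() / years  # IBDR sheds load
print(f"EENS without IBDR: {eens_without:.1f} MWh/yr")
print(f"EENS with IBDR:    {eens_with:.1f} MWh/yr")
```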
Rönn, Minttu M; Wolf, Emory E; Chesson, Harrell; Menzies, Nicolas A; Galer, Kara; Gorwitz, Rachel; Gift, Thomas; Hsu, Katherine; Salomon, Joshua A
2017-05-01
Mathematical models of chlamydia transmission can help inform disease control policy decisions when direct empirical evaluation of alternatives is impractical. We reviewed published chlamydia models to understand the range of approaches used for policy analyses and how the studies have responded to developments in the field. We performed a literature review by searching Medline and Google Scholar (up to October 2015) to identify publications describing dynamic chlamydia transmission models used to address public health policy questions. We extracted information on modeling methodology, interventions, and key findings. We identified 47 publications (including two model comparison studies), which reported collectively on 29 distinct mathematical models. Nine models were individual-based, and 20 were deterministic compartmental models. The earliest studies evaluated the benefits of national-level screening programs and predicted potentially large benefits from increased screening. Subsequent trials and further modeling analyses suggested the impact might have been overestimated. Partner notification has been increasingly evaluated in mathematical modeling, whereas behavioral interventions have received relatively limited attention. Our review provides an overview of chlamydia transmission models and gives a perspective on how mathematical modeling has responded to increasing empirical evidence and addressed policy questions related to prevention of chlamydia infection and sequelae.
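The simplest deterministic compartmental structure covered by such reviews can be sketched as an SIS model with a screening-and-treatment rate added to natural clearance; the parameter values below are hypothetical rather than calibrated to chlamydia data, but they show how increased screening lowers the endemic prevalence.

```python
# Deterministic SIS sketch with a screening-and-treatment rate; parameters
# are hypothetical, not calibrated to chlamydia data.
import numpy as np
from scipy.integrate import solve_ivp

beta = 0.20     # transmission rate per year (hypothetical)
gamma = 0.10    # natural clearance rate per year (hypothetical)

def sis(t, y, screening_rate):
    s, i = y
    new_infections = beta * s * i
    clearance = (gamma + screening_rate) * i    # natural clearance + screening/treatment
    return [-new_infections + clearance, new_infections - clearance]

for screening in (0.0, 0.05, 0.10):
    sol = solve_ivp(sis, (0, 100), [0.95, 0.05], args=(screening,))
    print(f"screening rate {screening:.2f}/yr -> prevalence after 100 yr: {sol.y[1, -1]:.3f}")
```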
Use of XML and Java for collaborative petroleum reservoir modeling on the Internet
NASA Astrophysics Data System (ADS)
Victorine, John; Watney, W. Lynn; Bhattacharya, Saibal
2005-11-01
The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling.
Biotinylated lipid bilayer disks as model membranes for biosensor analyses.
Lundquist, Anna; Hansen, Søren B; Nordström, Helena; Danielson, U Helena; Edwards, Katarina
2010-10-15
The aim of this study was to investigate the potential of polyethylene glycol (PEG)-stabilized lipid bilayer disks as model membranes for surface plasmon resonance (SPR)-based biosensor analyses. Nanosized bilayer disks that included 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[biotinyl(polyethylene glycol)(2000)] (DSPE-PEG(2000)-biotin) were prepared and structurally characterized by cryo-transmission electron microscopy (cryo-TEM) imaging. The biotinylated disks were immobilized via streptavidin to three different types of sensor chips (CM3, CM4, and CM5) varying in their degree of carboxymethylation and thickness of the dextran matrix. The bilayer disks were found to interact with and bind stably to the streptavidin-coated sensor surfaces. As a first step toward the use of these bilayer disks as model membranes in SPR-based studies of membrane proteins, initial investigations were carried out with cyclooxygenases 1 and 2 (COX 1 and COX 2). Bilayer disks were preincubated with the respective protein and thereafter allowed to interact with the sensor surface. The signal resulting from the interaction was, in both cases, significantly enhanced as compared with the signal obtained when disks alone were injected over the surface. The results of the study suggest that bilayer disks constitute a new and promising type of model membranes for SPR-based biosensor studies. Copyright 2010 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterndorff, M.J.; O'Brien, P.
ROLF (Retrievable Offshore Loading Facility) has been proposed as an alternative offshore oil export tanker loading system for the North Sea. The system consists of a flexible riser ascending from the seabed in a lazy wave configuration to the bow of a dynamically positioned tanker. In order to supplement and support the numerical analyses performed to design the system, an extensive model test program was carried out in a 3D offshore basin at scale 1:50. A model riser with properties equivalent to those of the oil-filled prototype riser installed in seawater was tested in several combinations of waves and current. During the tests, the forces at the bow of the tanker and at the pipeline end manifold were measured together with the motions of the tanker and the riser. The riser motions were measured by means of a video-based 3D motion monitoring system. Of special importance was the accurate determination of the minimum bending radius of the riser, which was derived from the measured riser motions. The results of the model tests were compared with numerical analyses performed with an MCS proprietary riser analysis program.
Heat Transfer Issues in Finite Element Analysis of Bounding Accidents in PPCS Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pampin, R.; Karditsas, P.J.
2005-05-15
Modelling of temperature excursions in structures of conceptual power plants during hypothetical worst-case accidents has been performed within the European Power Plant Conceptual Study (PPCS). A new 3D finite-element (FE) based tool, coupling the different calculations to the same tokamak geometry, has been used extensively to conduct the neutron transport, activation and thermal analyses for all PPCS plant models. During a total loss of cooling, the usual assumption for the bounding accident, passive removal of the decay heat from activated materials depends on conduction and radiation heat exchange between components. This paper presents and discusses results obtained during the PPCS bounding accident thermal analyses, examining the following issues: (a) radiation heat exchange between the inner surfaces of the tokamak, (b) the presence of air within the cryostat volume and the heat flow arising from the circulation pattern driven by temperature differences between various parts, and (c) the thermal conductivity of pebble beds and its degradation due to neutron irradiation, which affects the heat transfer capability and thermal response of a blanket based on these components.
Potential impact of remote sensing data on sea-state analysis and prediction
NASA Technical Reports Server (NTRS)
Cardone, V. J.
1983-01-01
The severe North Atlantic storm which damaged the ocean liner Queen Elizabeth 2 (QE2) was studied to assess the impact of remotely sensed marine surface wind data obtained by SEASAT-A on sea-state specifications and forecasts. Alternate representations of the surface wind field in the QE2 storm were produced from the SEASAT-enhanced data base and from operational analyses based upon conventional data. The wind fields were used to drive a high-resolution spectral ocean surface wave prediction model. Results show that sea-state analyses would have been vastly improved during the period of storm formation and explosive development had remote sensing wind data been available in real time. A modest improvement in operational 12 to 24 hour wave forecasts would have followed automatically from the improved initial-state specification made possible by the remote sensing data in both numerical weather and sea-state prediction models. Significantly improved 24 to 48 hour wave forecasts require, in addition to remote sensing data, refinements in the numerical and physical aspects of weather prediction models.
PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems
Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota
2016-01-01
PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user-functions to handle the time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs computation on server-side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems. PMID:27174940
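PASMet itself is a web service, but the kind of simulation it performs can be sketched as a small mass-action pathway integrated as ODEs; the two-metabolite chain and rate constants below are hypothetical and are not taken from PASMet.

```python
# Sketch of the kind of simulation described: a two-step mass-action pathway
# A -> B -> out, integrated as ODEs. Rate constants and initial
# concentrations are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.5, 0.2          # hypothetical rate constants (1/min)

def pathway(t, y):
    a, b = y
    return [-k1 * a, k1 * a - k2 * b]

sol = solve_ivp(pathway, (0, 30), [10.0, 0.0], t_eval=np.linspace(0, 30, 7))
for t, a, b in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t={t:5.1f} min  [A]={a:6.3f}  [B]={b:6.3f}")
```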
When Does Model-Based Control Pay Off?
Kool, Wouter; Cushman, Fiery A; Gershman, Samuel J
2016-08-01
Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap but sometimes inaccurate, because action values are retrieved from a look-up table constructed through trial and error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, does not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand.
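A minimal sketch of the contrast, using a toy two-step environment rather than the authors' task: the model-free agent maintains a look-up table updated by temporal-difference learning, while the model-based agent computes action values by planning through a known transition model. Transition and reward probabilities are hypothetical.

```python
# Toy contrast between model-free (look-up table, TD updates) and model-based
# (planning through a known transition model) valuation in a two-step
# environment; all probabilities are hypothetical.
import numpy as np

rng = np.random.default_rng(8)

# Two first-stage actions; each leads to one of two second-stage states with
# "common" (0.7) or "rare" (0.3) probability.
transitions = np.array([[0.7, 0.3],
                        [0.3, 0.7]])
reward_prob = np.array([0.8, 0.2])       # reward probability of each second-stage state

# Model-free agent: table of action values updated by temporal-difference learning.
alpha = 0.1
q_mf = np.zeros(2)
for _ in range(2000):
    a = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(q_mf))  # epsilon-greedy
    state = rng.choice(2, p=transitions[a])
    r = float(rng.random() < reward_prob[state])
    q_mf[a] += alpha * (r - q_mf[a])

# Model-based agent: plan through the (assumed known) transition model.
q_mb = transitions @ reward_prob

print("model-free values :", np.round(q_mf, 2))
print("model-based values:", np.round(q_mb, 2))
```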
SEM Based CARMA Time Series Modeling for Arbitrary N.
Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C
2018-01-01
This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.
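A minimal sketch of the continuous-time idea underlying CARMA state-space models, not of ctsem itself: for a first-order (CAR(1)) process the discrete-time autoregressive weight over an interval dt is the matrix exponential expm(A*dt), so unequal observation intervals simply change that weight. The drift matrix and noise level below are hypothetical, and the process-noise covariance is simplified.

```python
# Sketch of the exact discrete-time solution of a first-order continuous-time
# (CAR(1)) process: x(t+dt) = expm(A*dt) @ x(t) + noise. Drift matrix and
# noise level are hypothetical.
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.5, 0.2],      # hypothetical drift matrix
              [0.3, -0.4]])     # (e.g. mood at work, mood at home)
diffusion_sd = 0.3              # simplified process-noise level

rng = np.random.default_rng(9)
intervals = rng.uniform(0.5, 2.0, 40)      # unequal observation intervals
x = np.array([1.0, -1.0])
for dt in intervals:
    Phi = expm(A * dt)          # interval-specific autoregressive matrix
    # Process noise simplified; the exact covariance integrates expm(A*s) Q expm(A'*s).
    x = Phi @ x + rng.normal(0.0, diffusion_sd * np.sqrt(dt), 2)

print("autoregressive matrix for dt = 1.0:\n", np.round(expm(A * 1.0), 3))
print("simulated final state:", np.round(x, 3))
```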
The Threshold Bias Model: A Mathematical Model for the Nomothetic Approach of Suicide
Folly, Walter Sydney Dutra
2011-01-01
Background Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. Methodology/Principal Findings A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to US suicide rates by age for the years 2001 and 2002. Linear extrapolations of the parameter values obtained for these years were then performed to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. Conclusions/Significance The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health. PMID:21909431
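The abstract does not specify the six-parameter distribution, so the sketch below uses a hypothetical three-parameter threshold-style (logistic) age curve purely to illustrate the workflow: fit the curve to two years of synthetic rates, then linearly extrapolate the fitted parameters to a third year.

```python
# Workflow sketch only: a hypothetical three-parameter age curve (not the
# paper's six-parameter distribution) fitted to synthetic rates for two years,
# with the fitted parameters extrapolated linearly to a third year.
import numpy as np
from scipy.optimize import curve_fit

def age_curve(age, a0, rmax, k):
    """Hypothetical threshold-style (logistic) rise of rates above age a0."""
    return rmax / (1.0 + np.exp(-k * (age - a0)))

ages = np.arange(5, 90, 5, dtype=float)
rng = np.random.default_rng(10)
true_params = {2001: (40.0, 18.0, 0.15), 2002: (40.5, 18.3, 0.15)}

fits = {}
for year, p in true_params.items():
    rates = age_curve(ages, *p) + rng.normal(0, 0.4, ages.size)   # synthetic rates
    fits[year], _ = curve_fit(age_curve, ages, rates, p0=(35.0, 15.0, 0.1))

# Linear extrapolation of each fitted parameter from 2001-2002 to 2003.
p2003 = [2 * b - a for a, b in zip(fits[2001], fits[2002])]
print("extrapolated 2003 parameters (a0, rmax, k):", np.round(p2003, 3))
```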