NASA Astrophysics Data System (ADS)
Joshi, Aditya; Lindsey, Brooks D.; Dayton, Paul A.; Pinton, Gianmarco; Muller, Marie
2017-05-01
Ultrasound contrast agents (UCA), such as microbubbles, enhance the scattering properties of blood, which is otherwise hypoechoic. The multiple scattering interactions of the acoustic field with UCA are poorly understood due to the complexity of the multiple scattering theories and the nonlinear microbubble response. The majority of bubble models describe the behavior of UCA as single, isolated microbubbles suspended in infinite medium. Multiple scattering models such as the independent scattering approximation can approximate phase velocity and attenuation for low scatterer volume fractions. However, all current models and simulation approaches only describe multiple scattering and nonlinear bubble dynamics separately. Here we present an approach that combines two existing models: (1) a full-wave model that describes nonlinear propagation and scattering interactions in a heterogeneous attenuating medium and (2) a Paul-Sarkar model that describes the nonlinear interactions between an acoustic field and microbubbles. These two models were solved numerically and combined with an iterative approach. The convergence of this combined model was explored in silico for 0.5 × 106 microbubbles ml-1, 1% and 2% bubble concentration by volume. The backscattering predicted by our modeling approach was verified experimentally with water tank measurements performed with a 128-element linear array transducer. An excellent agreement in terms of the fundamental and harmonic acoustic fields is shown. Additionally, our model correctly predicts the phase velocity and attenuation measured using through transmission and predicted by the independent scattering approximation.
Seaman, Shaun R; White, Ian R; Carpenter, James R
2015-01-01
Missing covariate data commonly occur in epidemiological and clinical research, and are often dealt with using multiple imputation. Imputation of partially observed covariates is complicated if the substantive model is non-linear (e.g. Cox proportional hazards model), or contains non-linear (e.g. squared) or interaction terms, and standard software implementations of multiple imputation may impute covariates from models that are incompatible with such substantive models. We show how imputation by fully conditional specification, a popular approach for performing multiple imputation, can be modified so that covariates are imputed from models which are compatible with the substantive model. We investigate through simulation the performance of this proposal, and compare it with existing approaches. Simulation results suggest our proposal gives consistent estimates for a range of common substantive models, including models which contain non-linear covariate effects or interactions, provided data are missing at random and the assumed imputation models are correctly specified and mutually compatible. Stata software implementing the approach is freely available. PMID:24525487
NASA Astrophysics Data System (ADS)
Chaudhari, Rajan; Heim, Andrew J.; Li, Zhijun
2015-05-01
Evidenced by the three-rounds of G-protein coupled receptors (GPCR) Dock competitions, improving homology modeling methods of helical transmembrane proteins including the GPCRs, based on templates of low sequence identity, remains an eminent challenge. Current approaches addressing this challenge adopt the philosophy of "modeling first, refinement next". In the present work, we developed an alternative modeling approach through the novel application of available multiple templates. First, conserved inter-residue interactions are derived from each additional template through conservation analysis of each template-target pairwise alignment. Then, these interactions are converted into distance restraints and incorporated in the homology modeling process. This approach was applied to modeling of the human β2 adrenergic receptor using the bovin rhodopsin and the human protease-activated receptor 1 as templates and improved model quality was demonstrated compared to the homology model generated by standard single-template and multiple-template methods. This method of "refined restraints first, modeling next", provides a fast and complementary way to the current modeling approaches. It allows rational identification and implementation of additional conserved distance restraints extracted from multiple templates and/or experimental data, and has the potential to be applicable to modeling of all helical transmembrane proteins.
Genomic-based multiple-trait evaluation in Eucalyptus grandis using dominant DArT markers.
Cappa, Eduardo P; El-Kassaby, Yousry A; Muñoz, Facundo; Garcia, Martín N; Villalba, Pamela V; Klápště, Jaroslav; Marcucci Poltri, Susana N
2018-06-01
We investigated the impact of combining the pedigree- and genomic-based relationship matrices in a multiple-trait individual-tree mixed model (a.k.a., multiple-trait combined approach) on the estimates of heritability and on the genomic correlations between growth and stem straightness in an open-pollinated Eucalyptus grandis population. Additionally, the added advantage of incorporating genomic information on the theoretical accuracies of parents and offspring breeding values was evaluated. Our results suggested that the use of the combined approach for estimating heritabilities and additive genetic correlations in multiple-trait evaluations is advantageous and including genomic information increases the expected accuracy of breeding values. Furthermore, the multiple-trait combined approach was proven to be superior to the single-trait combined approach in predicting breeding values, in particular for low-heritability traits. Finally, our results advocate the use of the combined approach in forest tree progeny testing trials, specifically when a multiple-trait individual-tree mixed model is considered. Copyright © 2018 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Lee, Sue Ann S.
2018-01-01
The goals of the present study were to (1) examine the effects of the multiple opposition phonological approach on improving phoneme production accuracy in children with severe phonological disorders and (2) explore whether the multiple opposition approach is feasible for the telepractice service delivery model. A multiple-baseline,…
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facilities failure. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first approach is to apply sample average approximation to approximate the two stochastic problem via sampling. Then, by applying the multiple cuts Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, performance of multi-cut Benders decomposition is assessed through comparison with the classic version and the computational results reveal the superiority of the multi-cut approach regarding the computational time and number of iterations.
NASA Astrophysics Data System (ADS)
Wagner, Jenny; Liesenborgs, Jori; Tessore, Nicolas
2018-04-01
Context. Local gravitational lensing properties, such as convergence and shear, determined at the positions of multiply imaged background objects, yield valuable information on the smaller-scale lensing matter distribution in the central part of galaxy clusters. Highly distorted multiple images with resolved brightness features like the ones observed in CL0024 allow us to study these local lensing properties and to tighten the constraints on the properties of dark matter on sub-cluster scale. Aim. We investigate to what precision local magnification ratios, J, ratios of convergences, f, and reduced shears, g = (g1, g2), can be determined independently of a lens model for the five resolved multiple images of the source at zs = 1.675 in CL0024. We also determine if a comparison to the respective results obtained by the parametric modelling tool Lenstool and by the non-parametric modelling tool Grale can detect biases in the models. For these lens models, we analyse the influence of the number and location of the constraints from multiple images on the lens properties at the positions of the five multiple images of the source at zs = 1.675. Methods: Our model-independent approach uses a linear mapping between the five resolved multiple images to determine the magnification ratios, ratios of convergences, and reduced shears at their positions. With constraints from up to six multiple image systems, we generate Lenstool and Grale models using the same image positions, cosmological parameters, and number of generated convergence and shear maps to determine the local values of J, f, and g at the same positions across all methods. Results: All approaches show strong agreement on the local values of J, f, and g. We find that Lenstool obtains the tightest confidence bounds even for convergences around one using constraints from six multiple-image systems, while the best Grale model is generated only using constraints from all multiple images with resolved brightness features and adding limited small-scale mass corrections. Yet, confidence bounds as large as the values themselves can occur for convergences close to one in all approaches. Conclusions: Our results agree with previous findings, support the light-traces-mass assumption, and the merger hypothesis for CL0024. Comparing the different approaches can detect model biases. The model-independent approach determines the local lens properties to a comparable precision in less than one second.
Voxelwise multivariate analysis of multimodality magnetic resonance imaging
Naylor, Melissa G.; Cardenas, Valerie A.; Tosun, Duygu; Schuff, Norbert; Weiner, Michael; Schwartzman, Armin
2015-01-01
Most brain magnetic resonance imaging (MRI) studies concentrate on a single MRI contrast or modality, frequently structural MRI. By performing an integrated analysis of several modalities, such as structural, perfusion-weighted, and diffusion-weighted MRI, new insights may be attained to better understand the underlying processes of brain diseases. We compare two voxelwise approaches: (1) fitting multiple univariate models, one for each outcome and then adjusting for multiple comparisons among the outcomes and (2) fitting a multivariate model. In both cases, adjustment for multiple comparisons is performed over all voxels jointly to account for the search over the brain. The multivariate model is able to account for the multiple comparisons over outcomes without assuming independence because the covariance structure between modalities is estimated. Simulations show that the multivariate approach is more powerful when the outcomes are correlated and, even when the outcomes are independent, the multivariate approach is just as powerful or more powerful when at least two outcomes are dependent on predictors in the model. However, multiple univariate regressions with Bonferroni correction remains a desirable alternative in some circumstances. To illustrate the power of each approach, we analyze a case control study of Alzheimer's disease, in which data from three MRI modalities are available. PMID:23408378
An integrative formal model of motivation and decision making: The MGPM*.
Ballard, Timothy; Yeo, Gillian; Loft, Shayne; Vancouver, Jeffrey B; Neal, Andrew
2016-09-01
We develop and test an integrative formal model of motivation and decision making. The model, referred to as the extended multiple-goal pursuit model (MGPM*), is an integration of the multiple-goal pursuit model (Vancouver, Weinhardt, & Schmidt, 2010) and decision field theory (Busemeyer & Townsend, 1993). Simulations of the model generated predictions regarding the effects of goal type (approach vs. avoidance), risk, and time sensitivity on prioritization. We tested these predictions in an experiment in which participants pursued different combinations of approach and avoidance goals under different levels of risk. The empirical results were consistent with the predictions of the MGPM*. Specifically, participants pursuing 1 approach and 1 avoidance goal shifted priority from the approach to the avoidance goal over time. Among participants pursuing 2 approach goals, those with low time sensitivity prioritized the goal with the larger discrepancy, whereas those with high time sensitivity prioritized the goal with the smaller discrepancy. Participants pursuing 2 avoidance goals generally prioritized the goal with the smaller discrepancy. Finally, all of these effects became weaker as the level of risk increased. We used quantitative model comparison to show that the MGPM* explained the data better than the original multiple-goal pursuit model, and that the major extensions from the original model were justified. The MGPM* represents a step forward in the development of a general theory of decision making during multiple-goal pursuit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
ERIC Educational Resources Information Center
Tsai, Yu-Ling; Chang, Ching-Kuch
2009-01-01
This article reports an alternative approach, called the combinatorial model, to learning multiplicative identities, and investigates the effects of implementing results for this alternative approach. Based on realistic mathematics education theory, the new instructional materials or modules of the new approach were developed by the authors. From…
Voxelwise multivariate analysis of multimodality magnetic resonance imaging.
Naylor, Melissa G; Cardenas, Valerie A; Tosun, Duygu; Schuff, Norbert; Weiner, Michael; Schwartzman, Armin
2014-03-01
Most brain magnetic resonance imaging (MRI) studies concentrate on a single MRI contrast or modality, frequently structural MRI. By performing an integrated analysis of several modalities, such as structural, perfusion-weighted, and diffusion-weighted MRI, new insights may be attained to better understand the underlying processes of brain diseases. We compare two voxelwise approaches: (1) fitting multiple univariate models, one for each outcome and then adjusting for multiple comparisons among the outcomes and (2) fitting a multivariate model. In both cases, adjustment for multiple comparisons is performed over all voxels jointly to account for the search over the brain. The multivariate model is able to account for the multiple comparisons over outcomes without assuming independence because the covariance structure between modalities is estimated. Simulations show that the multivariate approach is more powerful when the outcomes are correlated and, even when the outcomes are independent, the multivariate approach is just as powerful or more powerful when at least two outcomes are dependent on predictors in the model. However, multiple univariate regressions with Bonferroni correction remain a desirable alternative in some circumstances. To illustrate the power of each approach, we analyze a case control study of Alzheimer's disease, in which data from three MRI modalities are available. Copyright © 2013 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Wholeben, Brent Edward
A rationale is presented for viewing the decision-making process inherent in determining budget reductions for educational programs as most effectively modeled by a graduated funding approach. The major tenets of the graduated budget reduction approach to educational fiscal policy include the development of multiple alternative reduction plans, or…
Electrification Futures Study Modeling Approach | Energy Analysis | NREL
Electrification Futures Study Modeling Approach Electrification Futures Study Modeling Approach To quantitatively answer the research questions of the Electrification Futures Study, researchers will use multiple accounting for infrastructure inertia through stock turnover. Load Modeling The Electrification Futures Study
Combining information from multiple flood projections in a hierarchical Bayesian framework
NASA Astrophysics Data System (ADS)
Le Vine, Nataliya
2016-04-01
This study demonstrates, in the context of flood frequency analysis, the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach explicitly accommodates shared multimodel discrepancy as well as the probabilistic nature of the flood estimates, and treats the available models as a sample from a hypothetical complete (but unobserved) set of models. The methodology is applied to flood estimates from multiple hydrological projections (the Future Flows Hydrology data set) for 135 catchments in the UK. The advantages of the approach are shown to be: (1) to ensure adequate "baseline" with which to compare future changes; (2) to reduce flood estimate uncertainty; (3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; (4) to diminish the importance of model consistency when model biases are large; and (5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
Tian, Ting; McLachlan, Geoffrey J.; Dieters, Mark J.; Basford, Kaye E.
2015-01-01
It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations for these data arrays, and developed a novel approach in terms of hierarchical clustering. Multiple imputation (MI) was used in four ways, multiple agglomerative hierarchical clustering, normal distribution model, normal regression model, and predictive mean match. The later three models used both Bayesian analysis and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes and assigned real values from the nearest neighbour to the one with missing observations. Different proportions of data entries in six complete datasets were randomly selected to be missing and the MI methods were compared based on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher accuracy of estimation performance than those using non-Bayesian analysis but they were more time-consuming. However, the novel approach of multiple agglomerative hierarchical clustering demonstrated the overall best performances. PMID:26689369
Tian, Ting; McLachlan, Geoffrey J; Dieters, Mark J; Basford, Kaye E
2015-01-01
It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations for these data arrays, and developed a novel approach in terms of hierarchical clustering. Multiple imputation (MI) was used in four ways, multiple agglomerative hierarchical clustering, normal distribution model, normal regression model, and predictive mean match. The later three models used both Bayesian analysis and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes and assigned real values from the nearest neighbour to the one with missing observations. Different proportions of data entries in six complete datasets were randomly selected to be missing and the MI methods were compared based on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher accuracy of estimation performance than those using non-Bayesian analysis but they were more time-consuming. However, the novel approach of multiple agglomerative hierarchical clustering demonstrated the overall best performances.
Leyrat, Clémence; Seaman, Shaun R; White, Ian R; Douglas, Ian; Smeeth, Liam; Kim, Joseph; Resche-Rigon, Matthieu; Carpenter, James R; Williamson, Elizabeth J
2017-01-01
Inverse probability of treatment weighting is a popular propensity score-based approach to estimate marginal treatment effects in observational studies at risk of confounding bias. A major issue when estimating the propensity score is the presence of partially observed covariates. Multiple imputation is a natural approach to handle missing data on covariates: covariates are imputed and a propensity score analysis is performed in each imputed dataset to estimate the treatment effect. The treatment effect estimates from each imputed dataset are then combined to obtain an overall estimate. We call this method MIte. However, an alternative approach has been proposed, in which the propensity scores are combined across the imputed datasets (MIps). Therefore, there are remaining uncertainties about how to implement multiple imputation for propensity score analysis: (a) should we apply Rubin's rules to the inverse probability of treatment weighting treatment effect estimates or to the propensity score estimates themselves? (b) does the outcome have to be included in the imputation model? (c) how should we estimate the variance of the inverse probability of treatment weighting estimator after multiple imputation? We studied the consistency and balancing properties of the MIte and MIps estimators and performed a simulation study to empirically assess their performance for the analysis of a binary outcome. We also compared the performance of these methods to complete case analysis and the missingness pattern approach, which uses a different propensity score model for each pattern of missingness, and a third multiple imputation approach in which the propensity score parameters are combined rather than the propensity scores themselves (MIpar). Under a missing at random mechanism, complete case and missingness pattern analyses were biased in most cases for estimating the marginal treatment effect, whereas multiple imputation approaches were approximately unbiased as long as the outcome was included in the imputation model. Only MIte was unbiased in all the studied scenarios and Rubin's rules provided good variance estimates for MIte. The propensity score estimated in the MIte approach showed good balancing properties. In conclusion, when using multiple imputation in the inverse probability of treatment weighting context, MIte with the outcome included in the imputation model is the preferred approach.
This task addresses a number of issues that arise in multimedia modeling with an emphasis on interactions among the atmosphere and multiple other environmental media. Approaches for working with multiple types of models and the data sets are being developed. Proper software tool...
ERIC Educational Resources Information Center
Bartolucci, Francesco; Pennoni, Fulvia; Vittadini, Giorgio
2016-01-01
We extend to the longitudinal setting a latent class approach that was recently introduced by Lanza, Coffman, and Xu to estimate the causal effect of a treatment. The proposed approach enables an evaluation of multiple treatment effects on subpopulations of individuals from a dynamic perspective, as it relies on a latent Markov (LM) model that is…
Balliu, Brunilda; Tsonaka, Roula; Boehringer, Stefan; Houwing-Duistermaat, Jeanine
2015-03-01
Integrative omics, the joint analysis of outcome and multiple types of omics data, such as genomics, epigenomics, and transcriptomics data, constitute a promising approach for powerful and biologically relevant association studies. These studies often employ a case-control design, and often include nonomics covariates, such as age and gender, that may modify the underlying omics risk factors. An open question is how to best integrate multiple omics and nonomics information to maximize statistical power in case-control studies that ascertain individuals based on the phenotype. Recent work on integrative omics have used prospective approaches, modeling case-control status conditional on omics, and nonomics risk factors. Compared to univariate approaches, jointly analyzing multiple risk factors with a prospective approach increases power in nonascertained cohorts. However, these prospective approaches often lose power in case-control studies. In this article, we propose a novel statistical method for integrating multiple omics and nonomics factors in case-control association studies. Our method is based on a retrospective likelihood function that models the joint distribution of omics and nonomics factors conditional on case-control status. The new method provides accurate control of Type I error rate and has increased efficiency over prospective approaches in both simulated and real data. © 2015 Wiley Periodicals, Inc.
Multilevel joint competing risk models
NASA Astrophysics Data System (ADS)
Karunarathna, G. H. S.; Sooriyarachchi, M. R.
2017-09-01
Joint modeling approaches are often encountered for different outcomes of competing risk time to event and count in many biomedical and epidemiology studies in the presence of cluster effect. Hospital length of stay (LOS) has been the widely used outcome measure in hospital utilization due to the benchmark measurement for measuring multiple terminations such as discharge, transferred, dead and patients who have not completed the event of interest at the follow up period (censored) during hospitalizations. Competing risk models provide a method of addressing such multiple destinations since classical time to event models yield biased results when there are multiple events. In this study, the concept of joint modeling has been applied to the dengue epidemiology in Sri Lanka, 2006-2008 to assess the relationship between different outcomes of LOS and platelet count of dengue patients with the district cluster effect. Two key approaches have been applied to build up the joint scenario. In the first approach, modeling each competing risk separately using the binary logistic model, treating all other events as censored under the multilevel discrete time to event model, while the platelet counts are assumed to follow a lognormal regression model. The second approach is based on the endogeneity effect in the multilevel competing risks and count model. Model parameters were estimated using maximum likelihood based on the Laplace approximation. Moreover, the study reveals that joint modeling approach yield more precise results compared to fitting two separate univariate models, in terms of AIC (Akaike Information Criterion).
NASA Astrophysics Data System (ADS)
Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.
2009-05-01
Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability means that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
Performance of stochastic approaches for forecasting river water quality.
Ahmad, S; Khan, I H; Parida, B P
2001-12-01
This study analysed water quality data collected from the river Ganges in India from 1981 to 1990 for forecasting using stochastic models. Initially the box and whisker plots and Kendall's tau test were used to identify the trends during the study period. For detecting the possible intervention in the data the time series plots and cusum charts were used. The three approaches of stochastic modelling which account for the effect of seasonality in different ways. i.e. multiplicative autoregressive integrated moving average (ARIMA) model. deseasonalised model and Thomas-Fiering model were used to model the observed pattern in water quality. The multiplicative ARIMA model having both nonseasonal and seasonal components were, in general, identified as appropriate models. In the deseasonalised modelling approach, the lower order ARIMA models were found appropriate for the stochastic component. The set of Thomas-Fiering models were formed for each month for all water quality parameters. These models were then used to forecast the future values. The error estimates of forecasts from the three approaches were compared to identify the most suitable approach for the reliable forecast. The deseasonalised modelling approach was recommended for forecasting of water quality parameters of a river.
ERIC Educational Resources Information Center
Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Li, Tatyana; Menold, Natalja
2018-01-01
A latent variable modeling method for studying measurement invariance when evaluating latent constructs with multiple binary or binary scored items with no guessing is outlined. The approach extends the continuous indicator procedure described by Raykov and colleagues, utilizes similarly the false discovery rate approach to multiple testing, and…
Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.
Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A
2016-01-01
Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case and single imputation or substitution, suffer from inefficiency and bias. They make strong parametric assumptions or they consider limit of detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.
Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier
2016-03-01
Quantitative microbial risk assessment (QMRA) models are extensively applied to inform management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk as well as other staphylococcal enterotoxins. The multiplestrain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed to the 95th percentile, the risk of exposure increased up to 160 times. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that the lack of knowledge about variables in the CPM can have on the final QMRA estimates. The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world and to be capable of geographical specificity.
A new spatial multiple discrete-continuous modeling approach to land use change analysis.
DOT National Transportation Integrated Search
2013-09-01
This report formulates a multiple discrete-continuous probit (MDCP) land-use model within a : spatially explicit economic structural framework for land-use change decisions. The spatial : MDCP model is capable of predicting both the type and intensit...
ERIC Educational Resources Information Center
Chiu, Chung-Yi; Lynch, Ruth Torkelson; Chan, Fong; Rose, Lindsey
2012-01-01
The main objective of this study was to evaluate the health action process approach (HAPA) as a motivational model for dietary self-management for people with multiple sclerosis (MS). Quantitative descriptive research design using path analysis was used. Participants were 209 individuals with MS recruited from the National MS Society and a…
Sparse Group Penalized Integrative Analysis of Multiple Cancer Prognosis Datasets
Liu, Jin; Huang, Jian; Xie, Yang; Ma, Shuangge
2014-01-01
SUMMARY In cancer research, high-throughput profiling studies have been extensively conducted, searching for markers associated with prognosis. Because of the “large d, small n” characteristic, results generated from the analysis of a single dataset can be unsatisfactory. Recent studies have shown that integrative analysis, which simultaneously analyzes multiple datasets, can be more effective than single-dataset analysis and classic meta-analysis. In most of existing integrative analysis, the homogeneity model has been assumed, which postulates that different datasets share the same set of markers. Several approaches have been designed to reinforce this assumption. In practice, different datasets may differ in terms of patient selection criteria, profiling techniques, and many other aspects. Such differences may make the homogeneity model too restricted. In this study, we assume the heterogeneity model, under which different datasets are allowed to have different sets of markers. With multiple cancer prognosis datasets, we adopt the AFT (accelerated failure time) model to describe survival. This model may have the lowest computational cost among popular semiparametric survival models. For marker selection, we adopt a sparse group MCP (minimax concave penalty) approach. This approach has an intuitive formulation and can be computed using an effective group coordinate descent algorithm. Simulation study shows that it outperforms the existing approaches under both the homogeneity and heterogeneity models. Data analysis further demonstrates the merit of heterogeneity model and proposed approach. PMID:23938111
Targeted versus statistical approaches to selecting parameters for modelling sediment provenance
NASA Astrophysics Data System (ADS)
Laceby, J. Patrick
2017-04-01
One effective field-based approach to modelling sediment provenance is the source fingerprinting technique. Arguably, one of the most important steps for this approach is selecting the appropriate suite of parameters or fingerprints used to model source contributions. Accordingly, approaches to selecting parameters for sediment source fingerprinting will be reviewed. Thereafter, opportunities and limitations of these approaches and some future research directions will be presented. For properties to be effective tracers of sediment, they must discriminate between sources whilst behaving conservatively. Conservative behavior is characterized by constancy in sediment properties, where the properties of sediment sources remain constant, or at the very least, any variation in these properties should occur in a predictable and measurable way. Therefore, properties selected for sediment source fingerprinting should remain constant through sediment detachment, transportation and deposition processes, or vary in a predictable and measurable way. One approach to select conservative properties for sediment source fingerprinting is to identify targeted tracers, such as caesium-137, that provide specific source information (e.g. surface versus subsurface origins). A second approach is to use statistical tests to select an optimal suite of conservative properties capable of modelling sediment provenance. In general, statistical approaches use a combination of a discrimination (e.g. Kruskal Wallis H-test, Mann-Whitney U-test) and parameter selection statistics (e.g. Discriminant Function Analysis or Principle Component Analysis). The challenge is that modelling sediment provenance is often not straightforward and there is increasing debate in the literature surrounding the most appropriate approach to selecting elements for modelling. Moving forward, it would be beneficial if researchers test their results with multiple modelling approaches, artificial mixtures, and multiple lines of evidence to provide secondary support to their initial modelling results. Indeed, element selection can greatly impact modelling results and having multiple lines of evidence will help provide confidence when modelling sediment provenance.
A. Weiskittel; D. Maguire; R. Monserud
2007-01-01
Hybrid models offer the opportunity to improve future growth projections by combining advantages of both empirical and process-based modeling approaches. Hybrid models have been constructed in several regions and their performance relative to a purely empirical approach has varied. A hybrid model was constructed for intensively managed Douglas-fir plantations in the...
ERIC Educational Resources Information Center
Suh, Youngsuk; Talley, Anna E.
2015-01-01
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
Validity and Realibility of Chemistry Systemic Multiple Choices Questions (CSMCQs)
ERIC Educational Resources Information Center
Priyambodo, Erfan; Marfuatun
2016-01-01
Nowadays, Rasch model analysis is used widely in social research, moreover in educational research. In this research, Rasch model is used to determine the validation and the reliability of systemic multiple choices question in chemistry teaching and learning. There were 30 multiple choices question with systemic approach for high school student…
Multiple-hypothesis multiple-model line tracking
NASA Astrophysics Data System (ADS)
Pace, Donald W.; Owen, Mark W.; Cox, Henry
2000-07-01
Passive sonar signal processing generally includes tracking of narrowband and/or broadband signature components observed on a Lofargram or on a Bearing-Time-Record (BTR) display. Fielded line tracking approaches to date have been recursive and single-hypthesis-oriented Kalman- or alpha-beta filters, with no mechanism for considering tracking alternatives beyond the most recent scan of measurements. While adaptivity is often built into the filter to handle changing track dynamics, these approaches are still extensions of single target tracking solutions to multiple target tracking environment. This paper describes an application of multiple-hypothesis, multiple target tracking technology to the sonar line tracking problem. A Multiple Hypothesis Line Tracker (MHLT) is developed which retains the recursive minimum-mean-square-error tracking behavior of a Kalman Filter in a maximum-a-posteriori delayed-decision multiple hypothesis context. Multiple line track filter states are developed and maintained using the interacting multiple model (IMM) state representation. Further, the data association and assignment problem is enhanced by considering line attribute information (line bandwidth and SNR) in addition to beam/bearing and frequency fit. MHLT results on real sonar data are presented to demonstrate the benefits of the multiple hypothesis approach. The utility of the system in cluttered environments and particularly in crossing line situations is shown.
Exploring the Use of Multiple Analogical Models when Teaching and Learning Chemical Equilibrium
ERIC Educational Resources Information Center
Harrison, Allan G.; De Jong, Onno
2005-01-01
This study describes the multiple analogical models used to introduce and teach Grade 12 chemical equilibrium. We examine the teacher's reasons for using models, explain each model's development during the lessons, and analyze the understandings students derived from the models. A case study approach was used and the data were drawn from the…
Burgette, Lane F; Reiter, Jerome P
2013-06-01
Multinomial outcomes with many levels can be challenging to model. Information typically accrues slowly with increasing sample size, yet the parameter space expands rapidly with additional covariates. Shrinking all regression parameters towards zero, as often done in models of continuous or binary response variables, is unsatisfactory, since setting parameters equal to zero in multinomial models does not necessarily imply "no effect." We propose an approach to modeling multinomial outcomes with many levels based on a Bayesian multinomial probit (MNP) model and a multiple shrinkage prior distribution for the regression parameters. The prior distribution encourages the MNP regression parameters to shrink toward a number of learned locations, thereby substantially reducing the dimension of the parameter space. Using simulated data, we compare the predictive performance of this model against two other recently-proposed methods for big multinomial models. The results suggest that the fully Bayesian, multiple shrinkage approach can outperform these other methods. We apply the multiple shrinkage MNP to simulating replacement values for areal identifiers, e.g., census tract indicators, in order to protect data confidentiality in public use datasets.
A Bayesian model averaging method for improving SMT phrase table
NASA Astrophysics Data System (ADS)
Duan, Nan
2013-03-01
Previous methods on improving translation quality by employing multiple SMT models usually carry out as a second-pass decision procedure on hypotheses from multiple systems using extra features instead of using features in existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates probability feature values for the translation model being used based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built by two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. We conclude our approach can be developed independently and integrated into current SMT pipeline directly. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decodings.
Due to the complexity of the processes contributing to beach bacteria concentrations, many researchers rely on statistical modeling, among which multiple linear regression (MLR) modeling is most widely used. Despite its ease of use and interpretation, there may be time dependence...
Specifying and Refining a Measurement Model for a Computer-Based Interactive Assessment
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
2004-01-01
The challenges of modeling students' performance in computer-based interactive assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance. This article describes a Bayesian approach to modeling and estimating cognitive models…
Mariel, Petr; Hoyos, David; Artabe, Alaitz; Guevara, C Angelo
2018-08-15
Endogeneity is an often neglected issue in empirical applications of discrete choice modelling despite its severe consequences in terms of inconsistent parameter estimation and biased welfare measures. This article analyses the performance of the multiple indicator solution method to deal with endogeneity arising from omitted explanatory variables in discrete choice models for environmental valuation. We also propose and illustrate a factor analysis procedure for the selection of the indicators in practice. Additionally, the performance of this method is compared with the recently proposed hybrid choice modelling framework. In an empirical application we find that the multiple indicator solution method and the hybrid model approach provide similar results in terms of welfare estimates, although the multiple indicator solution method is more parsimonious and notably easier to implement. The empirical results open a path to explore the performance of this method when endogeneity is thought to have a different cause or under a different set of indicators. Copyright © 2018 Elsevier B.V. All rights reserved.
FINDING A COMMON DATA REPRESENTATION AND INTERCHANGE APPROACH FOR MULTIMEDIA MODELS
Within many disciplines, multiple approaches are used to represent and access very similar data (e.g., a time series of values), often due to the lack of commonly accepted standards. When projects must use data from multiple disciplines, the problems quickly compound. Often sig...
2017-02-01
Reports an error in "An integrative formal model of motivation and decision making: The MGPM*" by Timothy Ballard, Gillian Yeo, Shayne Loft, Jeffrey B. Vancouver and Andrew Neal ( Journal of Applied Psychology , 2016[Sep], Vol 101[9], 1240-1265). Equation A3 contained an error. This correct equation is provided in the erratum. (The following abstract of the original article appeared in record 2016-28692-001.) We develop and test an integrative formal model of motivation and decision making. The model, referred to as the extended multiple-goal pursuit model (MGPM*), is an integration of the multiple-goal pursuit model (Vancouver, Weinhardt, & Schmidt, 2010) and decision field theory (Busemeyer & Townsend, 1993). Simulations of the model generated predictions regarding the effects of goal type (approach vs. avoidance), risk, and time sensitivity on prioritization. We tested these predictions in an experiment in which participants pursued different combinations of approach and avoidance goals under different levels of risk. The empirical results were consistent with the predictions of the MGPM*. Specifically, participants pursuing 1 approach and 1 avoidance goal shifted priority from the approach to the avoidance goal over time. Among participants pursuing 2 approach goals, those with low time sensitivity prioritized the goal with the larger discrepancy, whereas those with high time sensitivity prioritized the goal with the smaller discrepancy. Participants pursuing 2 avoidance goals generally prioritized the goal with the smaller discrepancy. Finally, all of these effects became weaker as the level of risk increased. We used quantitative model comparison to show that the MGPM* explained the data better than the original multiple-goal pursuit model, and that the major extensions from the original model were justified. The MGPM* represents a step forward in the development of a general theory of decision making during multiple-goal pursuit. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals
2016-01-01
This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081
Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao
2018-05-01
Flood is a serious challenge that increasingly affects the residents as well as policymakers. Flood vulnerability assessment is becoming gradually relevant in the world. The purpose of this study is to develop an approach to reveal the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and coordinated development degree model (CDDM). The approach is organized into three parts: establishment of index system, assessment of exposure, sensitivity and adaptive capacity, and multiple flood vulnerability assessment. Hydrodynamic model and statistical data are employed for the establishment of index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship of the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. Then the approach is applied to assess the spatiality of flood vulnerability in Hainan's eastern area, China. Based on the results of multiple flood vulnerability, a decision-making process for rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment can evaluate vulnerability more completely, and help decision makers learn more information about making decisions in a more comprehensive way. In summary, this study provides a new way for flood vulnerability assessment and disaster prevention decision. Copyright © 2018 Elsevier Ltd. All rights reserved.
Distribution of model uncertainty across multiple data streams
NASA Astrophysics Data System (ADS)
Wutzler, Thomas
2014-05-01
When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighing the data streams. Without weighing or multiple blocked cost functions, model uncertainty is allocated to the sparse data streams and possible bias in processes that are strongly constraint is exported to processes that are constrained by sparse data streams only. In this study we propose an approach that aims at making model uncertainty a factor of observations uncertainty, that is constant over all data streams. Further we propose an implementation based on Monte-Carlo Markov chain sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified both with very simple models, artificial data and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach is able to help and maybe resolve the problem of bias export to sparse data streams.
ERIC Educational Resources Information Center
Lee, Jimin; Hustad, Katherine C.; Weismer, Gary
2014-01-01
Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…
NASA Astrophysics Data System (ADS)
Leskiw, Donald M.; Zhau, Junmei
2000-06-01
This paper reports on results from an ongoing project to develop methodologies for representing and managing multiple, concurrent levels of detail and enabling high performance computing using parallel arrays within distributed object-based simulation frameworks. At this time we present the methodology for representing and managing multiple, concurrent levels of detail and modeling accuracy by using a representation based on the Kalman approach for estimation. The Kalman System Model equations are used to represent model accuracy, Kalman Measurement Model equations provide transformations between heterogeneous levels of detail, and interoperability among disparate abstractions is provided using a form of the Kalman Update equations.
Hidden Markov models of biological primary sequence information.
Baldi, P; Chauvin, Y; Hunkapiller, T; McClure, M A
1994-01-01
Hidden Markov model (HMM) techniques are used to model families of biological sequences. A smooth and convergent algorithm is introduced to iteratively adapt the transition and emission parameters of the models from the examples in a given family. The HMM approach is applied to three protein families: globins, immunoglobulins, and kinases. In all cases, the models derived capture the important statistical characteristics of the family and can be used for a number of tasks, including multiple alignments, motif detection, and classification. For K sequences of average length N, this approach yields an effective multiple-alignment algorithm which requires O(KN2) operations, linear in the number of sequences. PMID:8302831
Modeling is a useful tool for quantifying ecosystem services and understanding their temporal dynamics. Here we describe a hybrid regional modeling approach for sub-basins of the Calapooia watershed that incorporates both a precipitation-runoff model and an indexed regression mo...
Large-region acoustic source mapping using a movable array and sparse covariance fitting.
Zhao, Shengkui; Tuna, Cagdas; Nguyen, Thi Ngoc Tho; Jones, Douglas L
2017-01-01
Large-region acoustic source mapping is important for city-scale noise monitoring. Approaches using a single-position measurement scheme to scan large regions using small arrays cannot provide clean acoustic source maps, while deploying large arrays spanning the entire region of interest is prohibitively expensive. A multiple-position measurement scheme is applied to scan large regions at multiple spatial positions using a movable array of small size. Based on the multiple-position measurement scheme, a sparse-constrained multiple-position vectorized covariance matrix fitting approach is presented. In the proposed approach, the overall sample covariance matrix of the incoherent virtual array is first estimated using the multiple-position array data and then vectorized using the Khatri-Rao (KR) product. A linear model is then constructed for fitting the vectorized covariance matrix and a sparse-constrained reconstruction algorithm is proposed for recovering source powers from the model. The user parameter settings are discussed. The proposed approach is tested on a 30 m × 40 m region and a 60 m × 40 m region using simulated and measured data. Much cleaner acoustic source maps and lower sound pressure level errors are obtained compared to the beamforming approaches and the previous sparse approach [Zhao, Tuna, Nguyen, and Jones, Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (2016)].
Accounting for multiple sources of uncertainty in impact assessments: The example of the BRACE study
NASA Astrophysics Data System (ADS)
O'Neill, B. C.
2015-12-01
Assessing climate change impacts often requires the use of multiple scenarios, types of models, and data sources, leading to a large number of potential sources of uncertainty. For example, a single study might require a choice of a forcing scenario, climate model, bias correction and/or downscaling method, societal development scenario, model (typically several) for quantifying elements of societal development such as economic and population growth, biophysical model (such as for crop yields or hydrology), and societal impact model (e.g. economic or health model). Some sources of uncertainty are reduced or eliminated by the framing of the question. For example, it may be useful to ask what an impact outcome would be conditional on a given societal development pathway, forcing scenario, or policy. However many sources of uncertainty remain, and it is rare for all or even most of these sources to be accounted for. I use the example of a recent integrated project on the Benefits of Reduced Anthropogenic Climate changE (BRACE) to explore useful approaches to uncertainty across multiple components of an impact assessment. BRACE comprises 23 papers that assess the differences in impacts between two alternative climate futures: those associated with Representative Concentration Pathways (RCPs) 4.5 and 8.5. It quantifies difference in impacts in terms of extreme events, health, agriculture, tropical cyclones, and sea level rise. Methodologically, it includes climate modeling, statistical analysis, integrated assessment modeling, and sector-specific impact modeling. It employs alternative scenarios of both radiative forcing and societal development, but generally uses a single climate model (CESM), partially accounting for climate uncertainty by drawing heavily on large initial condition ensembles. Strengths and weaknesses of the approach to uncertainty in BRACE are assessed. Options under consideration for improving the approach include the use of perturbed physics ensembles of CESM, employing results from multiple climate models, and combining the results from single impact models with statistical representations of uncertainty across multiple models. A key consideration is the relationship between the question being addressed and the uncertainty approach.
A geospatial modelling approach to predict seagrass habitat recovery under multiple stressor regimes
Restoration of estuarine seagrass habitats requires a clear understanding of the modes of action of multiple interacting stressors including nutrients, climate change, coastal land-use change, and habitat modification. We have developed and demonstrated a geospatial modeling a...
Effect of Multiple Scattering on the Compton Recoil Current Generated in an EMP, Revisited
Farmer, William A.; Friedman, Alex
2015-06-18
Multiple scattering has historically been treated in EMP modeling through the obliquity factor. The validity of this approach is examined here. A simplified model problem, which correctly captures cyclotron motion, Doppler shifting due to the electron motion, and multiple scattering is first considered. The simplified problem is solved three ways: the obliquity factor, Monte-Carlo, and Fokker-Planck finite-difference. Because of the Doppler effect, skewness occurs in the distribution. It is demonstrated that the obliquity factor does not correctly capture this skewness, but the Monte-Carlo and Fokker-Planck finite-difference approaches do. Here, the obliquity factor and Fokker-Planck finite-difference approaches are then compared inmore » a fuller treatment, which includes the initial Klein-Nishina distribution of the electrons, and the momentum dependence of both drag and scattering. It is found that, in general, the obliquity factor is adequate for most situations. However, as the gamma energy increases and the Klein-Nishina becomes more peaked in the forward direction, skewness in the distribution causes greater disagreement between the obliquity factor and a more accurate model of multiple scattering.« less
Cognitive Models: The Missing Link to Learning Fraction Multiplication and Division
ERIC Educational Resources Information Center
de Castro, Belinda V.
2008-01-01
This quasi-experimental study aims to streamline cognitive models on fraction multiplication and division that contain the most worthwhile features of other existing models. Its exploratory nature and its approach to proof elicitation can be used to help establish its effectiveness in building students' understanding of fractions as compared to…
Specifying and Refining a Measurement Model for a Simulation-Based Assessment. CSE Report 619.
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
2004-01-01
The challenges of modeling students' performance in simulation-based assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance in a complex assessment. This paper describes a Bayesian approach to modeling and estimating…
Xing, Junliang; Ai, Haizhou; Liu, Liwei; Lao, Shihong
2011-06-01
Multiple object tracking (MOT) is a very challenging task yet of fundamental importance for many practical applications. In this paper, we focus on the problem of tracking multiple players in sports video which is even more difficult due to the abrupt movements of players and their complex interactions. To handle the difficulties in this problem, we present a new MOT algorithm which contributes both in the observation modeling level and in the tracking strategy level. For the observation modeling, we develop a progressive observation modeling process that is able to provide strong tracking observations and greatly facilitate the tracking task. For the tracking strategy, we propose a dual-mode two-way Bayesian inference approach which dynamically switches between an offline general model and an online dedicated model to deal with single isolated object tracking and multiple occluded object tracking integrally by forward filtering and backward smoothing. Extensive experiments on different kinds of sports videos, including football, basketball, as well as hockey, demonstrate the effectiveness and efficiency of the proposed method.
An improved Multimodel Approach for Global Sea Surface Temperature Forecasts
NASA Astrophysics Data System (ADS)
Khan, M. Z. K.; Mehrotra, R.; Sharma, A.
2014-12-01
The concept of ensemble combinations for formulating improved climate forecasts has gained popularity in recent years. However, many climate models share similar physics or modeling processes, which may lead to similar (or strongly correlated) forecasts. Recent approaches for combining forecasts that take into consideration differences in model accuracy over space and time have either ignored the similarity of forecast among the models or followed a pairwise dynamic combination approach. Here we present a basis for combining model predictions, illustrating the improvements that can be achieved if procedures for factoring in inter-model dependence are utilised. The utility of the approach is demonstrated by combining sea surface temperature (SST) forecasts from five climate models over a period of 1960-2005. The variable of interest, the monthly global sea surface temperature anomalies (SSTA) at a 50´50 latitude-longitude grid, is predicted three months in advance to demonstrate the utility of the proposed algorithm. Results indicate that the proposed approach offers consistent and significant improvements for majority of grid points compared to the case where the dependence among the models is ignored. Therefore, the proposed approach of combining multiple models by taking into account the existing interdependence, provides an attractive alternative to obtain improved climate forecast. In addition, an approach to combine seasonal forecasts from multiple climate models with varying periods of availability is also demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shipman, Galen M.
These are the slides for a presentation on programming models in HPC, at the Los Alamos National Laboratory's Parallel Computing Summer School. The following topics are covered: Flynn's Taxonomy of computer architectures; single instruction single data; single instruction multiple data; multiple instruction multiple data; address space organization; definition of Trinity (Intel Xeon-Phi is a MIMD architecture); single program multiple data; multiple program multiple data; ExMatEx workflow overview; definition of a programming model, programming languages, runtime systems; programming model and environments; MPI (Message Passing Interface); OpenMP; Kokkos (Performance Portable Thread-Parallel Programming Model); Kokkos abstractions, patterns, policies, and spaces; RAJA, a systematicmore » approach to node-level portability and tuning; overview of the Legion Programming Model; mapping tasks and data to hardware resources; interoperability: supporting task-level models; Legion S3D execution and performance details; workflow, integration of external resources into the programming model.« less
Merging information from multi-model flood projections in a hierarchical Bayesian framework
NASA Astrophysics Data System (ADS)
Le Vine, Nataliya
2016-04-01
Multi-model ensembles are becoming widely accepted for flood frequency change analysis. The use of multiple models results in large uncertainty around estimates of flood magnitudes, due to both uncertainty in model selection and natural variability of river flow. The challenge is therefore to extract the most meaningful signal from the multi-model predictions, accounting for both model quality and uncertainties in individual model estimates. The study demonstrates the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach facilitates explicit treatment of shared multi-model discrepancy as well as the probabilistic nature of the flood estimates, by treating the available models as a sample from a hypothetical complete (but unobserved) set of models. The advantages of the approach are: 1) to insure an adequate 'baseline' conditions with which to compare future changes; 2) to reduce flood estimate uncertainty; 3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; 4) to adjust multi-model consistency criteria when model biases are large; and 5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
ℓ(p)-Norm multikernel learning approach for stock market price forecasting.
Shao, Xigao; Wu, Kun; Liao, Bifeng
2012-01-01
Linear multiple kernel learning model has been used for predicting financial time series. However, ℓ(1)-norm multiple support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓ(p)-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and the interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily stock closing prices of Shanghai Stock Index in China. Experimental results show that our proposed model performs better than ℓ(1)-norm multiple support vector regression model.
Apportioning Sources of Riverine Nitrogen at Multiple Watershed Scales
NASA Astrophysics Data System (ADS)
Boyer, E. W.; Alexander, R. B.; Sebestyen, S. D.
2005-05-01
Loadings of reactive nitrogen (N) entering terrestrial landscapes have increased in recent decades due to anthropogenic activities associated with food and energy production. In the northeastern USA, this enhanced supply of N has been linked to many environmental concerns in both terrestrial and aquatic ecosystems, such as forest decline, lake and stream acidification, human respiratory problems, and coastal eutrophication. Thus N is a priority pollutant with regard to a whole host of air, land, and water quality issues, highlighting the need for methods to identify and quantify various N sources. Further, understanding precursor sources of N is critical to current and proposed public policies targeted at the reduction of N inputs to the terrestrial landscape and receiving waters. We present results from published and ongoing studies using multiple approaches to fingerprint sources of N in the northeastern USA, at watershed scales ranging from the headwaters to the coastal zone. The approaches include: 1) a mass balance model with a nitrogen-budgeting approach for analyses of large watersheds; 2) a spatially-referenced regression model with an empirical modeling approach for analyses of water quality at regional scales; and 3) a meta-analysis of monitoring data with a chemical tracer approach, utilizing concentrations of multiple elements and isotopic composition of N from water samples collected in the streams and rivers. We discuss the successes and limitations of these various approaches for apportioning contributions of N from multiple sources to receiving waters at regional scales.
Multiple-Instance Regression with Structured Data
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Lane, Terran; Roper, Alex
2008-01-01
We present a multiple-instance regression algorithm that models internal bag structure to identify the items most relevant to the bag labels. Multiple-instance regression (MIR) operates on a set of bags with real-valued labels, each containing a set of unlabeled items, in which the relevance of each item to its bag label is unknown. The goal is to predict the labels of new bags from their contents. Unlike previous MIR methods, MI-ClusterRegress can operate on bags that are structured in that they contain items drawn from a number of distinct (but unknown) distributions. MI-ClusterRegress simultaneously learns a model of the bag's internal structure, the relevance of each item, and a regression model that accurately predicts labels for new bags. We evaluated this approach on the challenging MIR problem of crop yield prediction from remote sensing data. MI-ClusterRegress provided predictions that were more accurate than those obtained with non-multiple-instance approaches or MIR methods that do not model the bag structure.
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
ERIC Educational Resources Information Center
Friedman, Alinda; And Others
1982-01-01
Two experiments tested the limiting case of a multiple resources approach to resource allocation in information processing. Results contradict a single-capacity model, supporting the idea that the hemispheres' resource supplies are independent and have implications for both cerebral specialization and divided attention issues. (Author/PN)
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.
Behavioral Modeling of Adversaries with Multiple Objectives in Counterterrorism.
Mazicioglu, Dogucan; Merrick, Jason R W
2018-05-01
Attacker/defender models have primarily assumed that each decisionmaker optimizes the cost of the damage inflicted and its economic repercussions from their own perspective. Two streams of recent research have sought to extend such models. One stream suggests that it is more realistic to consider attackers with multiple objectives, but this research has not included the adaption of the terrorist with multiple objectives to defender actions. The other stream builds off experimental studies that show that decisionmakers deviate from optimal rational behavior. In this article, we extend attacker/defender models to incorporate multiple objectives that a terrorist might consider in planning an attack. This includes the tradeoffs that a terrorist might consider and their adaption to defender actions. However, we must also consider experimental evidence of deviations from the rationality assumed in the commonly used expected utility model in determining such adaption. Thus, we model the attacker's behavior using multiattribute prospect theory to account for the attacker's multiple objectives and deviations from rationality. We evaluate our approach by considering an attacker with multiple objectives who wishes to smuggle radioactive material into the United States and a defender who has the option to implement a screening process to hinder the attacker. We discuss the problems with implementing such an approach, but argue that research in this area must continue to avoid misrepresenting terrorist behavior in determining optimal defensive actions. © 2017 Society for Risk Analysis.
Shifflett, Benjamin; Huang, Rong; Edland, Steven D
2017-01-01
Genotypic association studies are prone to inflated type I error rates if multiple hypothesis testing is performed, e.g., sequentially testing for recessive, multiplicative, and dominant risk. Alternatives to multiple hypothesis testing include the model independent genotypic χ 2 test, the efficiency robust MAX statistic, which corrects for multiple comparisons but with some loss of power, or a single Armitage test for multiplicative trend, which has optimal power when the multiplicative model holds but with some loss of power when dominant or recessive models underlie the genetic association. We used Monte Carlo simulations to describe the relative performance of these three approaches under a range of scenarios. All three approaches maintained their nominal type I error rates. The genotypic χ 2 and MAX statistics were more powerful when testing a strictly recessive genetic effect or when testing a dominant effect when the allele frequency was high. The Armitage test for multiplicative trend was most powerful for the broad range of scenarios where heterozygote risk is intermediate between recessive and dominant risk. Moreover, all tests had limited power to detect recessive genetic risk unless the sample size was large, and conversely all tests were relatively well powered to detect dominant risk. Taken together, these results suggest the general utility of the multiplicative trend test when the underlying genetic model is unknown.
Multiple imputation of missing covariates for the Cox proportional hazards cure model
Beesley, Lauren J; Bartlett, Jonathan W; Wolf, Gregory T; Taylor, Jeremy M G
2016-01-01
We explore several approaches for imputing partially observed covariates when the outcome of interest is a censored event time and when there is an underlying subset of the population that will never experience the event of interest. We call these subjects “cured,” and we consider the case where the data are modeled using a Cox proportional hazards (CPH) mixture cure model. We study covariate imputation approaches using fully conditional specification (FCS). We derive the exact conditional distribution and suggest a sampling scheme for imputing partially observed covariates in the CPH cure model setting. We also propose several approximations to the exact distribution that are simpler and more convenient to use for imputation. A simulation study demonstrates that the proposed imputation approaches outperform existing imputation approaches for survival data without a cure fraction in terms of bias in estimating CPH cure model parameters. We apply our multiple imputation techniques to a study of patients with head and neck cancer. PMID:27439726
NASA Astrophysics Data System (ADS)
Watanabe, S.; Kim, H.; Utsumi, N.
2017-12-01
This study aims to develop a new approach which projects hydrology under climate change using super ensemble experiments. The use of multiple ensemble is essential for the estimation of extreme, which is a major issue in the impact assessment of climate change. Hence, the super ensemble experiments are recently conducted by some research programs. While it is necessary to use multiple ensemble, the multiple calculations of hydrological simulation for each output of ensemble simulations needs considerable calculation costs. To effectively use the super ensemble experiments, we adopt a strategy to use runoff projected by climate models directly. The general approach of hydrological projection is to conduct hydrological model simulations which include land-surface and river routing process using atmospheric boundary conditions projected by climate models as inputs. This study, on the other hand, simulates only river routing model using runoff projected by climate models. In general, the climate model output is systematically biased so that a preprocessing which corrects such bias is necessary for impact assessments. Various bias correction methods have been proposed, but, to the best of our knowledge, no method has proposed for variables other than surface meteorology. Here, we newly propose a method for utilizing the projected future runoff directly. The developed method estimates and corrects the bias based on the pseudo-observation which is a result of retrospective offline simulation. We show an application of this approach to the super ensemble experiments conducted under the program of Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI). More than 400 ensemble experiments from multiple climate models are available. The results of the validation using historical simulations by HAPPI indicates that the output of this approach can effectively reproduce retrospective runoff variability. Likewise, the bias of runoff from super ensemble climate projections is corrected, and the impact of climate change on hydrologic extremes is assessed in a cost-efficient way.
BAYESIAN METHODS FOR REGIONAL-SCALE EUTROPHICATION MODELS. (R830887)
We demonstrate a Bayesian classification and regression tree (CART) approach to link multiple environmental stressors to biological responses and quantify uncertainty in model predictions. Such an approach can: (1) report prediction uncertainty, (2) be consistent with the amou...
Building Regression Models: The Importance of Graphics.
ERIC Educational Resources Information Center
Dunn, Richard
1989-01-01
Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)
Linking Multiple Databases: Term Project Using "Sentences" DBMS.
ERIC Educational Resources Information Center
King, Ronald S.; Rainwater, Stephen B.
This paper describes a methodology for use in teaching an introductory Database Management System (DBMS) course. Students master basic database concepts through the use of a multiple component project implemented in both relational and associative data models. The associative data model is a new approach for designing multi-user, Web-enabled…
A Qualitative Approach to Portfolios: The Early Assessment for Exceptional Potential Model.
ERIC Educational Resources Information Center
Shaklee, Beverly D.; Viechnicki, Karen J.
1995-01-01
The Early Assessment for Exceptional Potential portfolio assessment model assesses children as exceptional learners, users, generators, and pursuers of knowledge. It is based on use of authentic learning opportunities; interaction of assessment, curriculum, and instruction; multiple criteria derived from multiple sources; and systematic teacher…
ℓ p-Norm Multikernel Learning Approach for Stock Market Price Forecasting
Shao, Xigao; Wu, Kun; Liao, Bifeng
2012-01-01
Linear multiple kernel learning model has been used for predicting financial time series. However, ℓ 1-norm multiple support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓ p-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and the interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily stock closing prices of Shanghai Stock Index in China. Experimental results show that our proposed model performs better than ℓ 1-norm multiple support vector regression model. PMID:23365561
NASA Astrophysics Data System (ADS)
Voorhoeve, Robbert; van der Maas, Annemiek; Oomen, Tom
2018-05-01
Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification of lightly damped mechanical systems with improved speed and accuracy. The proposed method is based on local rational models, which can efficiently handle the lightly-damped resonant dynamics. A key aspect herein is the freedom in the multivariable rational model parametrizations. Several choices for such multivariable rational model parametrizations are proposed and investigated. For systems with many inputs and outputs the required number of model parameters can rapidly increase, adversely affecting the performance of the local modeling approach. Therefore, low-order model structures are investigated. The structure of these low-order parametrizations leads to an undesired directionality in the identification problem. To address this, an iterative local rational modeling algorithm is proposed. As a special case recently developed SISO algorithms are recovered. The proposed approach is successfully demonstrated on simulations and on an active vibration isolation system benchmark, confirming good performance of the method using significantly less parameters compared with alternative approaches.
Godinez, William J; Rohr, Karl
2015-02-01
Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to recalculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2-D and 3-D images as well as to real 2-D and 3-D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2-D and 3-D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012.
Unscaled Bayes factors for multiple hypothesis testing in microarray experiments.
Bertolino, Francesco; Cabras, Stefano; Castellanos, Maria Eugenia; Racugno, Walter
2015-12-01
Multiple hypothesis testing collects a series of techniques usually based on p-values as a summary of the available evidence from many statistical tests. In hypothesis testing, under a Bayesian perspective, the evidence for a specified hypothesis against an alternative, conditionally on data, is given by the Bayes factor. In this study, we approach multiple hypothesis testing based on both Bayes factors and p-values, regarding multiple hypothesis testing as a multiple model selection problem. To obtain the Bayes factors we assume default priors that are typically improper. In this case, the Bayes factor is usually undetermined due to the ratio of prior pseudo-constants. We show that ignoring prior pseudo-constants leads to unscaled Bayes factor which do not invalidate the inferential procedure in multiple hypothesis testing, because they are used within a comparative scheme. In fact, using partial information from the p-values, we are able to approximate the sampling null distribution of the unscaled Bayes factor and use it within Efron's multiple testing procedure. The simulation study suggests that under normal sampling model and even with small sample sizes, our approach provides false positive and false negative proportions that are less than other common multiple hypothesis testing approaches based only on p-values. The proposed procedure is illustrated in two simulation studies, and the advantages of its use are showed in the analysis of two microarray experiments. © The Author(s) 2011.
Li, Miao; Li, Jun; Zhou, Yiyu
2015-12-08
The problem of jointly detecting and tracking multiple targets from the raw observations of an infrared focal plane array is a challenging task, especially for the case with uncertain target dynamics. In this paper a multi-model labeled multi-Bernoulli (MM-LMB) track-before-detect method is proposed within the labeled random finite sets (RFS) framework. The proposed track-before-detect method consists of two parts-MM-LMB filter and MM-LMB smoother. For the MM-LMB filter, original LMB filter is applied to track-before-detect based on target and measurement models, and is integrated with the interacting multiple models (IMM) approach to accommodate the uncertainty of target dynamics. For the MM-LMB smoother, taking advantage of the track labels and posterior model transition probability, the single-model single-target smoother is extended to a multi-model multi-target smoother. A Sequential Monte Carlo approach is also presented to implement the proposed method. Simulation results show the proposed method can effectively achieve tracking continuity for multiple maneuvering targets. In addition, compared with the forward filtering alone, our method is more robust due to its combination of forward filtering and backward smoothing.
Li, Miao; Li, Jun; Zhou, Yiyu
2015-01-01
The problem of jointly detecting and tracking multiple targets from the raw observations of an infrared focal plane array is a challenging task, especially for the case with uncertain target dynamics. In this paper a multi-model labeled multi-Bernoulli (MM-LMB) track-before-detect method is proposed within the labeled random finite sets (RFS) framework. The proposed track-before-detect method consists of two parts—MM-LMB filter and MM-LMB smoother. For the MM-LMB filter, original LMB filter is applied to track-before-detect based on target and measurement models, and is integrated with the interacting multiple models (IMM) approach to accommodate the uncertainty of target dynamics. For the MM-LMB smoother, taking advantage of the track labels and posterior model transition probability, the single-model single-target smoother is extended to a multi-model multi-target smoother. A Sequential Monte Carlo approach is also presented to implement the proposed method. Simulation results show the proposed method can effectively achieve tracking continuity for multiple maneuvering targets. In addition, compared with the forward filtering alone, our method is more robust due to its combination of forward filtering and backward smoothing. PMID:26670234
Ko, Yi-An; Mukherjee, Bhramar; Smith, Jennifer A; Kardia, Sharon L R; Allison, Matthew; Diez Roux, Ana V
2016-11-01
There has been an increased interest in identifying gene-environment interaction (G × E) in the context of multiple environmental exposures. Most G × E studies analyze one exposure at a time, but we are exposed to multiple exposures in reality. Efficient analysis strategies for complex G × E with multiple environmental factors in a single model are still lacking. Using the data from the Multiethnic Study of Atherosclerosis, we illustrate a two-step approach for modeling G × E with multiple environmental factors. First, we utilize common clustering and classification strategies (e.g., k-means, latent class analysis, classification and regression trees, Bayesian clustering using Dirichlet Process) to define subgroups corresponding to distinct environmental exposure profiles. Second, we illustrate the use of an additive main effects and multiplicative interaction model, instead of the conventional saturated interaction model using product terms of factors, to study G × E with the data-driven exposure subgroups defined in the first step. We demonstrate useful analytical approaches to translate multiple environmental exposures into one summary class. These tools not only allow researchers to consider several environmental exposures in G × E analysis but also provide some insight into how genes modify the effect of a comprehensive exposure profile instead of examining effect modification for each exposure in isolation.
A new adaptive multiple modelling approach for non-linear and non-stationary systems
NASA Astrophysics Data System (ADS)
Chen, Hao; Gong, Yu; Hong, Xia
2016-07-01
This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models are monitored based on the most recent data window, and M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-model are adapted via the recursive least square (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window, and apply the sum to one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is the best. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method.We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site.We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species.Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
Unraveling multiple changes in complex climate time series using Bayesian inference
NASA Astrophysics Data System (ADS)
Berner, Nadine; Trauth, Martin H.; Holschneider, Matthias
2016-04-01
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of observations. Unraveling such transitions yields essential information for the understanding of the observed system. The precise detection and basic characterization of underlying changes is therefore of particular importance in environmental sciences. We present a kernel-based Bayesian inference approach to investigate direct as well as indirect climate observations for multiple generic transition events. In order to develop a diagnostic approach designed to capture a variety of natural processes, the basic statistical features of central tendency and dispersion are used to locally approximate a complex time series by a generic transition model. A Bayesian inversion approach is developed to robustly infer on the location and the generic patterns of such a transition. To systematically investigate time series for multiple changes occurring at different temporal scales, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the kernel inference results are composed into a proxy probability to a posterior distribution of multiple transitions. Thus, based on a generic transition model a probability expression is derived that is capable to indicate multiple changes within a complex time series. We discuss the method's performance by investigating direct and indirect climate observations. The approach is applied to environmental time series (about 100 a), from the weather station in Tuscaloosa, Alabama, and confirms documented instrumentation changes. Moreover, the approach is used to investigate a set of complex terrigenous dust records from the ODP sites 659, 721/722 and 967 interpreted as climate indicators of the African region of the Plio-Pleistocene period (about 5 Ma). The detailed inference unravels multiple transitions underlying the indirect climate observations coinciding with established global climate events.
NASA Astrophysics Data System (ADS)
Gouveia, Diego; Baars, Holger; Seifert, Patric; Wandinger, Ulla; Barbosa, Henrique; Barja, Boris; Artaxo, Paulo; Lopes, Fabio; Landulfo, Eduardo; Ansmann, Albert
2018-04-01
Lidar measurements of cirrus clouds are highly influenced by multiple scattering (MS). We therefore developed an iterative approach to correct elastic backscatter lidar signals for multiple scattering to obtain best estimates of single-scattering cloud optical depth and lidar ratio as well as of the ice crystal effective radius. The approach is based on the exploration of the effect of MS on the molecular backscatter signal returned from above cloud top.
Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.
2013-01-01
Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887
Bakas, Spyridon; Zeng, Ke; Sotiras, Aristeidis; Rathore, Saima; Akbari, Hamed; Gaonkar, Bilwaj; Rozycki, Martin; Pati, Sarthak; Davatzikos, Christos
2016-01-01
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.
A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test
ERIC Educational Resources Information Center
Liang, Tie; Wells, Craig S.
2015-01-01
Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…
Internationally-Educated Health Professionals: A Distance Education Multiple Cultures Model
ERIC Educational Resources Information Center
Lum, Lillie
2006-01-01
Purpose: This paper aims to explore issues that must be addressed in post-secondary educational planning and delivery such that social cultural factors within the learning environment are recognized in ways that affirm the learner's cultural traditions. Design/methodology/approach: The adoption of a multiple cultures model of instructional design…
Predicting flight delay based on multiple linear regression
NASA Astrophysics Data System (ADS)
Ding, Yi
2017-08-01
Delay of flight has been regarded as one of the toughest difficulties in aviation control. How to establish an effective model to handle the delay prediction problem is a significant work. To solve the problem that the flight delay is difficult to predict, this study proposes a method to model the arriving flights and a multiple linear regression algorithm to predict delay, comparing with Naive-Bayes and C4.5 approach. Experiments based on a realistic dataset of domestic airports show that the accuracy of the proposed model approximates 80%, which is further improved than the Naive-Bayes and C4.5 approach approaches. The result testing shows that this method is convenient for calculation, and also can predict the flight delays effectively. It can provide decision basis for airport authorities.
Post-Stall Aerodynamic Modeling and Gain-Scheduled Control Design
NASA Technical Reports Server (NTRS)
Wu, Fen; Gopalarathnam, Ashok; Kim, Sungwan
2005-01-01
A multidisciplinary research e.ort that combines aerodynamic modeling and gain-scheduled control design for aircraft flight at post-stall conditions is described. The aerodynamic modeling uses a decambering approach for rapid prediction of post-stall aerodynamic characteristics of multiple-wing con.gurations using known section data. The approach is successful in bringing to light multiple solutions at post-stall angles of attack right during the iteration process. The predictions agree fairly well with experimental results from wind tunnel tests. The control research was focused on actuator saturation and .ight transition between low and high angles of attack regions for near- and post-stall aircraft using advanced LPV control techniques. The new control approaches maintain adequate control capability to handle high angle of attack aircraft control with stability and performance guarantee.
Blanton, Hart; Jaccard, James
2006-01-01
Theories that posit multiplicative relationships between variables are common in psychology. A. G. Greenwald et al. recently presented a theory that explicated relationships between group identification, group attitudes, and self-esteem. Their theory posits a multiplicative relationship between concepts when predicting a criterion variable. Greenwald et al. suggested analytic strategies to test their multiplicative model that researchers might assume are appropriate for testing multiplicative models more generally. The theory and analytic strategies of Greenwald et al. are used as a case study to show the strong measurement assumptions that underlie certain tests of multiplicative models. It is shown that the approach used by Greenwald et al. can lead to declarations of theoretical support when the theory is wrong as well as rejection of the theory when the theory is correct. A simple strategy for testing multiplicative models that makes weaker measurement assumptions than the strategy proposed by Greenwald et al. is suggested and discussed.
NASA Astrophysics Data System (ADS)
Bonelli, Francesco; Tuttafesta, Michele; Colonna, Gianpiero; Cutrone, Luigi; Pascazio, Giuseppe
2017-10-01
This paper describes the most advanced results obtained in the context of fluid dynamic simulations of high-enthalpy flows using detailed state-to-state air kinetics. Thermochemical non-equilibrium, typical of supersonic and hypersonic flows, was modeled by using both the accurate state-to-state approach and the multi-temperature model proposed by Park. The accuracy of the two thermochemical non-equilibrium models was assessed by comparing the results with experimental findings, showing better predictions provided by the state-to-state approach. To overcome the huge computational cost of the state-to-state model, a multiple-nodes GPU implementation, based on an MPI-CUDA approach, was employed and a comprehensive code performance analysis is presented. Both the pure MPI-CPU and the MPI-CUDA implementations exhibit excellent scalability performance. GPUs outperform CPUs computing especially when the state-to-state approach is employed, showing speed-ups, of the single GPU with respect to the single-core CPU, larger than 100 in both the case of one MPI process and multiple MPI process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moges, Edom; Demissie, Yonas; Li, Hong-Yi
2016-04-01
In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integratemore » expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied for two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach has a better performance over the single model for the Guadalupe catchment, where multiple dominant processes are witnessed through diagnostic measures. Whereas, the diagnostics and aggregated performance measures prove that French Broad has a homogeneous catchment response, making the single model adequate to capture the response.« less
Improving Multiple Fault Diagnosability using Possible Conflicts
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2012-01-01
Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
NASA Astrophysics Data System (ADS)
Medina, Tait Runnfeldt
The increasing global reach of survey research provides sociologists with new opportunities to pursue theory building and refinement through comparative analysis. However, comparison across a broad array of diverse contexts introduces methodological complexities related to the development of constructs (i.e., measurement modeling) that if not adequately recognized and properly addressed undermine the quality of research findings and cast doubt on the validity of substantive conclusions. The motivation for this dissertation arises from a concern that the availability of cross-national survey data has outpaced sociologists' ability to appropriately analyze and draw meaningful conclusions from such data. I examine the implicit assumptions and detail the limitations of three commonly used measurement models in cross-national analysis---summative scale, pooled factor model, and multiple-group factor model with measurement invariance. Using the orienting lens of the double tension I argue that a new approach to measurement modeling that incorporates important cross-national differences into the measurement process is needed. Two such measurement models---multiple-group factor model with partial measurement invariance (Byrne, Shavelson and Muthen 1989) and the alignment method (Asparouhov and Muthen 2014; Muthen and Asparouhov 2014)---are discussed in detail and illustrated using a sociologically relevant substantive example. I demonstrate that the former approach is vulnerable to an identification problem that arbitrarily impacts substantive conclusions. I conclude that the alignment method is built on model assumptions that are consistent with theoretical understandings of cross-national comparability and provides an approach to measurement modeling and construct development that is uniquely suited for cross-national research. The dissertation makes three major contributions: First, it provides theoretical justification for a new cross-national measurement model and explicates a link between theoretical conceptions of cross-national comparability and a statistical method. Second, it provides a clear and detailed discussion of model identification in multiple-group confirmatory factor analysis that is missing from the literature. This discussion sets the stage for the introduction of the identification problem within multiple-group confirmatory factor analysis with partial measurement invariance and the alternative approach to model identification employed by the alignment method. Third, it offers the first pedagogical presentation of the alignment method using a sociologically relevant example.
A vector space model approach to identify genetically related diseases.
Sarkar, Indra Neil
2012-01-01
The relationship between diseases and their causative genes can be complex, especially in the case of polygenic diseases. Further exacerbating the challenges in their study is that many genes may be causally related to multiple diseases. This study explored the relationship between diseases through the adaptation of an approach pioneered in the context of information retrieval: vector space models. A vector space model approach was developed that bridges gene disease knowledge inferred across three knowledge bases: Online Mendelian Inheritance in Man, GenBank, and Medline. The approach was then used to identify potentially related diseases for two target diseases: Alzheimer disease and Prader-Willi Syndrome. In the case of both Alzheimer Disease and Prader-Willi Syndrome, a set of plausible diseases were identified that may warrant further exploration. This study furthers seminal work by Swanson, et al. that demonstrated the potential for mining literature for putative correlations. Using a vector space modeling approach, information from both biomedical literature and genomic resources (like GenBank) can be combined towards identification of putative correlations of interest. To this end, the relevance of the predicted diseases of interest in this study using the vector space modeling approach were validated based on supporting literature. The results of this study suggest that a vector space model approach may be a useful means to identify potential relationships between complex diseases, and thereby enable the coordination of gene-based findings across multiple complex diseases.
Wafer hotspot prevention using etch aware OPC correction
NASA Astrophysics Data System (ADS)
Hamouda, Ayman; Power, Dave; Salama, Mohamed; Chen, Ao
2016-03-01
As technology development advances into deep-sub-wavelength nodes, multiple patterning is becoming more essential to achieve the technology shrink requirements. Recently, Optical Proximity Correction (OPC) technology has proposed simultaneous correction of multiple mask-patterns to enable multiple patterning awareness during OPC correction. This is essential to prevent inter-layer hot-spots during the final pattern transfer. In state-of-art literature, multi-layer awareness is achieved using simultaneous resist-contour simulations to predict and correct for hot-spots during mask generation. However, this approach assumes a uniform etch shrink response for all patterns independent of their proximity, which isn't sufficient for the full prevention of inter-exposure hot-spot, for example different color space violations post etch or via coverage/enclosure post etch. In this paper, we explain the need to include the etch component during multiple patterning OPC. We also introduce a novel approach for Etch-aware simultaneous Multiple-patterning OPC, where we calibrate and verify a lumped model that includes the combined resist and etch responses. Adding this extra simulation condition during OPC is suitable for full chip processing from a computation intensity point of view. Also, using this model during OPC to predict and correct inter-exposures hot-spots is similar to previously proposed multiple-patterning OPC, yet our proposed approach more accurately corrects post-etch defects too.
Effort to Accelerate MBSE Adoption and Usage at JSC
NASA Technical Reports Server (NTRS)
Wang, Lui; Izygon, Michel; Okron, Shira; Garner, Larry; Wagner, Howard
2016-01-01
This paper describes the authors' experience in adopting Model Based System Engineering (MBSE) at the NASA/Johnson Space Center (JSC). Since 2009, NASA/JSC has been applying MBSE using the Systems Modeling Language (SysML) to a number of advanced projects. Models integrate views of the system from multiple perspectives, capturing the system design information for multiple stakeholders. This method has allowed engineers to better control changes, improve traceability from requirements to design and manage the numerous interactions between components. As the project progresses, the models become the official source of information and used by multiple stakeholders. Three major types of challenges that hamper the adoption of the MBSE technology are described. These challenges are addressed by a multipronged approach that includes educating the main stakeholders, implementing an organizational infrastructure that supports the adoption effort, defining a set of modeling guidelines to help engineers in their modeling effort, providing a toolset that support the generation of valuable products, and providing a library of reusable models. JSC project case studies are presented to illustrate how the proposed approach has been successfully applied.
NASA Astrophysics Data System (ADS)
Ogawa, Kenta; Konno, Yukiko; Yamamoto, Satoru; Matsunaga, Tsuneo; Tachikawa, Tetsushi; Komoda, Mako; Kashimura, Osamu; Rokugawa, Shuichi
2016-10-01
Hyperspectral Imager Suite (HISUI)[1] is a Japanese future spaceborne hyperspectral instrument being developed by Ministry of Economy, Trade, and Industry (METI) and will be delivered to ISS in 2018. In HISUI project, observation strategy is important especially for hyperspectral sensor, and relationship between the limitations of sensor operation and the planned observation scenarios have to be studied. We have developed concept of multiple algorithms approach. The concept is to use two (or more) algorithm models (Long Strip Model and Score Downfall Model) for selecting observing scenes from complex data acquisition requests with satisfactory of sensor constrains. We have tested the algorithm, and found that the performance of two models depends on remaining data acquisition requests, i.e. distribution score along with orbits. We conclude that the multiple algorithms approach will be make better collection plans for HISUI comparing with single fixed approach.
An Odds Ratio Approach for Detecting DDF under the Nested Logit Modeling Framework
ERIC Educational Resources Information Center
Terzi, Ragip; Suh, Youngsuk
2015-01-01
An odds ratio approach (ORA) under the framework of a nested logit model was proposed for evaluating differential distractor functioning (DDF) in multiple-choice items and was compared with an existing ORA developed under the nominal response model. The performances of the two ORAs for detecting DDF were investigated through an extensive…
NASA Astrophysics Data System (ADS)
Jin, Yongmei
In recent years, theoretical modeling and computational simulation of microstructure evolution and materials property has been attracting much attention. While significant advances have been made, two major challenges remain. One is the integration of multiple physical phenomena for simulation of complex materials behavior, the other is the bridging over multiple length and time scales in materials modeling and simulation. The research presented in this Thesis is focused mainly on tackling the first major challenge. In this Thesis, a unified Phase Field Microelasticity (PFM) approach is developed. This approach is an advanced version of the phase field method that takes into account the exact elasticity of arbitrarily anisotropic, elastically and structurally inhomogeneous systems. The proposed theory and models are applicable to infinite solids, elastic half-space, and finite bodies with arbitrary-shaped free surfaces, which may undergo various concomitant physical processes. The Phase Field Microelasticity approach is employed to formulate the theories and models of martensitic transformation, dislocation dynamics, and crack evolution in single crystal and polycrystalline solids. It is also used to study strain relaxation in heteroepitaxial thin films through misfit dislocation and surface roughening. Magnetic domain evolution in nanocrystalline thin films is also investigated. Numerous simulation studies are performed. Comparison with analytical predictions and experimental observations are presented. Agreement verities the theory and models as realistic simulation tools for computational materials science and engineering. The same Phase Field Microelasticity formalism of individual models of different physical phenomena makes it easy to integrate multiple physical processes into one unified simulation model, where multiple phenomena are treated as various relaxation modes that together act as one common cooperative phenomenon. The model does not impose a priori constraints on possible microstructure evolution paths. This gives the model predicting power, where material system itself "chooses" the optimal path for multiple processes. The advances made in this Thesis present a significant step forward to overcome the first challenge, mesoscale multi-physics modeling and simulation of materials. At the end of this Thesis, the way to tackle the second challenge, bridging over multiple length and time scales in materials modeling and simulation, is discussed based on connection between the mesoscale Phase Field Microelasticity modeling and microscopic atomistic calculation as well as macroscopic continuum theory.
NASA Astrophysics Data System (ADS)
Mariano, Adrian V.; Grossmann, John M.
2010-11-01
Reflectance-domain methods convert hyperspectral data from radiance to reflectance using an atmospheric compensation model. Material detection and identification are performed by comparing the compensated data to target reflectance spectra. We introduce two radiance-domain approaches, Single atmosphere Adaptive Cosine Estimator (SACE) and Multiple atmosphere ACE (MACE) in which the target reflectance spectra are instead converted into sensor-reaching radiance using physics-based models. For SACE, known illumination and atmospheric conditions are incorporated in a single atmospheric model. For MACE the conditions are unknown so the algorithm uses many atmospheric models to cover the range of environmental variability, and it approximates the result using a subspace model. This approach is sometimes called the invariant method, and requires the choice of a subspace dimension for the model. We compare these two radiance-domain approaches to a Reflectance-domain ACE (RACE) approach on a HYDICE image featuring concealed materials. All three algorithms use the ACE detector, and all three techniques are able to detect most of the hidden materials in the imagery. For MACE we observe a strong dependence on the choice of the material subspace dimension. Increasing this value can lead to a decline in performance.
Should multiple imputation be the method of choice for handling missing data in randomized trials?
Sullivan, Thomas R; White, Ian R; Salter, Amy B; Ryan, Philip; Lee, Katherine J
2016-01-01
The use of multiple imputation has increased markedly in recent years, and journal reviewers may expect to see multiple imputation used to handle missing data. However in randomized trials, where treatment group is always observed and independent of baseline covariates, other approaches may be preferable. Using data simulation we evaluated multiple imputation, performed both overall and separately by randomized group, across a range of commonly encountered scenarios. We considered both missing outcome and missing baseline data, with missing outcome data induced under missing at random mechanisms. Provided the analysis model was correctly specified, multiple imputation produced unbiased treatment effect estimates, but alternative unbiased approaches were often more efficient. When the analysis model overlooked an interaction effect involving randomized group, multiple imputation produced biased estimates of the average treatment effect when applied to missing outcome data, unless imputation was performed separately by randomized group. Based on these results, we conclude that multiple imputation should not be seen as the only acceptable way to handle missing data in randomized trials. In settings where multiple imputation is adopted, we recommend that imputation is carried out separately by randomized group. PMID:28034175
Should multiple imputation be the method of choice for handling missing data in randomized trials?
Sullivan, Thomas R; White, Ian R; Salter, Amy B; Ryan, Philip; Lee, Katherine J
2016-01-01
The use of multiple imputation has increased markedly in recent years, and journal reviewers may expect to see multiple imputation used to handle missing data. However in randomized trials, where treatment group is always observed and independent of baseline covariates, other approaches may be preferable. Using data simulation we evaluated multiple imputation, performed both overall and separately by randomized group, across a range of commonly encountered scenarios. We considered both missing outcome and missing baseline data, with missing outcome data induced under missing at random mechanisms. Provided the analysis model was correctly specified, multiple imputation produced unbiased treatment effect estimates, but alternative unbiased approaches were often more efficient. When the analysis model overlooked an interaction effect involving randomized group, multiple imputation produced biased estimates of the average treatment effect when applied to missing outcome data, unless imputation was performed separately by randomized group. Based on these results, we conclude that multiple imputation should not be seen as the only acceptable way to handle missing data in randomized trials. In settings where multiple imputation is adopted, we recommend that imputation is carried out separately by randomized group.
Automatic Prediction of Protein 3D Structures by Probabilistic Multi-template Homology Modeling.
Meier, Armin; Söding, Johannes
2015-10-01
Homology modeling predicts the 3D structure of a query protein based on the sequence alignment with one or more template proteins of known structure. Its great importance for biological research is owed to its speed, simplicity, reliability and wide applicability, covering more than half of the residues in protein sequence space. Although multiple templates have been shown to generally increase model quality over single templates, the information from multiple templates has so far been combined using empirically motivated, heuristic approaches. We present here a rigorous statistical framework for multi-template homology modeling. First, we find that the query proteins' atomic distance restraints can be accurately described by two-component Gaussian mixtures. This insight allowed us to apply the standard laws of probability theory to combine restraints from multiple templates. Second, we derive theoretically optimal weights to correct for the redundancy among related templates. Third, a heuristic template selection strategy is proposed. We improve the average GDT-ha model quality score by 11% over single template modeling and by 6.5% over a conventional multi-template approach on a set of 1000 query proteins. Robustness with respect to wrong constraints is likewise improved. We have integrated our multi-template modeling approach with the popular MODELLER homology modeling software in our free HHpred server http://toolkit.tuebingen.mpg.de/hhpred and also offer open source software for running MODELLER with the new restraints at https://bitbucket.org/soedinglab/hh-suite.
Kambayashi, Atsushi; Blume, Henning; Dressman, Jennifer B
2014-07-01
The objective of this research was to characterize the dissolution profile of a poorly soluble drug, diclofenac, from a commercially available multiple-unit enteric coated dosage form, Diclo-Puren® capsules, and to develop a predictive model for its oral pharmacokinetic profile. The paddle method was used to obtain the dissolution profiles of this dosage form in biorelevant media, with the exposure to simulated gastric conditions being varied in order to simulate the gastric emptying behavior of pellets. A modified Noyes-Whitney theory was subsequently fitted to the dissolution data. A physiologically-based pharmacokinetic (PBPK) model for multiple-unit dosage forms was designed using STELLA® software and coupled with the biorelevant dissolution profiles in order to simulate the plasma concentration profiles of diclofenac from Diclo-Puren® capsule in both the fasted and fed state in humans. Gastric emptying kinetics relevant to multiple-units pellets were incorporated into the PBPK model by setting up a virtual patient population to account for physiological variations in emptying kinetics. Using in vitro biorelevant dissolution coupled with in silico PBPK modeling and simulation it was possible to predict the plasma profile of this multiple-unit formulation of diclofenac after oral administration in both the fasted and fed state. This approach might be useful to predict variability in the plasma profiles for other drugs housed in multiple-unit dosage forms. Copyright © 2014 Elsevier B.V. All rights reserved.
A Participatory Action Research Approach To Evaluating Inclusive School Programs.
ERIC Educational Resources Information Center
Dymond, Stacy K.
2001-01-01
This article proposes a model for evaluating inclusive schools. Key elements of the model are inclusion of stakeholders in the evaluation process through a participatory action research approach, analysis of program processes and outcomes, use of multiple methods and measures, and obtaining perceptions from diverse stakeholder groups. (Contains…
Wang, Shuangquan; Sun, Huiyong; Liu, Hui; Li, Dan; Li, Youyong; Hou, Tingjun
2016-08-01
Blockade of human ether-à-go-go related gene (hERG) channel by compounds may lead to drug-induced QT prolongation, arrhythmia, and Torsades de Pointes (TdP), and therefore reliable prediction of hERG liability in the early stages of drug design is quite important to reduce the risk of cardiotoxicity-related attritions in the later development stages. In this study, pharmacophore modeling and machine learning approaches were combined to construct classification models to distinguish hERG active from inactive compounds based on a diverse data set. First, an optimal ensemble of pharmacophore hypotheses that had good capability to differentiate hERG active from inactive compounds was identified by the recursive partitioning (RP) approach. Then, the naive Bayesian classification (NBC) and support vector machine (SVM) approaches were employed to construct classification models by integrating multiple important pharmacophore hypotheses. The integrated classification models showed improved predictive capability over any single pharmacophore hypothesis, suggesting that the broad binding polyspecificity of hERG can only be well characterized by multiple pharmacophores. The best SVM model achieved the prediction accuracies of 84.7% for the training set and 82.1% for the external test set. Notably, the accuracies for the hERG blockers and nonblockers in the test set reached 83.6% and 78.2%, respectively. Analysis of significant pharmacophores helps to understand the multimechanisms of action of hERG blockers. We believe that the combination of pharmacophore modeling and SVM is a powerful strategy to develop reliable theoretical models for the prediction of potential hERG liability.
Multiple Damage Progression Paths in Model-Based Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Goebel, Kai Frank
2011-01-01
Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in their own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active
Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.
Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen
2015-05-01
Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.
Design-Comparable Effect Sizes in Multiple Baseline Designs: A General Modeling Framework
ERIC Educational Resources Information Center
Pustejovsky, James E.; Hedges, Larry V.; Shadish, William R.
2014-01-01
In single-case research, the multiple baseline design is a widely used approach for evaluating the effects of interventions on individuals. Multiple baseline designs involve repeated measurement of outcomes over time and the controlled introduction of a treatment at different times for different individuals. This article outlines a general…
A framework for multi-criteria assessment of model enhancements
NASA Astrophysics Data System (ADS)
Francke, Till; Foerster, Saskia; Brosinsky, Arlena; Delgado, José; Güntner, Andreas; López-Tarazón, José A.; Bronstert, Axel
2016-04-01
Modellers are often faced with unsatisfactory model performance for a specific setup of a hydrological model. In these cases, the modeller may try to improve the setup by addressing selected causes for the model errors (i.e. data errors, structural errors). This leads to adding certain "model enhancements" (MEs), e.g. climate data based on more monitoring stations, improved calibration data, modifications in process formulations. However, deciding on which MEs to implement remains a matter of expert knowledge, guided by some sensitivity analysis at best. When multiple MEs have been implemented, a resulting improvement in model performance is not easily attributed, especially when considering different aspects of this improvement (e.g. better performance dynamics vs. reduced bias). In this study we present an approach for comparing the effect of multiple MEs in the face of multiple improvement aspects. A stepwise selection approach and structured plots help in addressing the multidimensionality of the problem. The approach is applied to a case study, which employs the meso-scale hydrosedimentological model WASA-SED for a sub-humid catchment. The results suggest that the effect of the MEs is quite diverse, with some MEs (e.g. augmented rainfall data) cause improvements for almost all aspects, while the effect of other MEs is restricted to few aspects or even deteriorate some. These specific results may not be generalizable. However, we suggest that based on studies like this, identifying the most promising MEs to implement may be facilitated.
Merging for Particle-Mesh Complex Particle Kinetic Modeling of the Multiple Plasma Beams
NASA Technical Reports Server (NTRS)
Lipatov, Alexander S.
2011-01-01
We suggest a merging procedure for the Particle-Mesh Complex Particle Kinetic (PMCPK) method in case of inter-penetrating flow (multiple plasma beams). We examine the standard particle-in-cell (PIC) and the PMCPK methods in the case of particle acceleration by shock surfing for a wide range of the control numerical parameters. The plasma dynamics is described by a hybrid (particle-ion-fluid-electron) model. Note that one may need a mesh if modeling with the computation of an electromagnetic field. Our calculations use specified, time-independent electromagnetic fields for the shock, rather than self-consistently generated fields. While a particle-mesh method is a well-verified approach, the CPK method seems to be a good approach for multiscale modeling that includes multiple regions with various particle/fluid plasma behavior. However, the CPK method is still in need of a verification for studying the basic plasma phenomena: particle heating and acceleration by collisionless shocks, magnetic field reconnection, beam dynamics, etc.
Samuel A. Cushman; Jesse S. Lewis; Erin L. Landguth
2014-01-01
There have been few assessments of the performance of alternative resistance surfaces, and little is known about how connectivity modeling approaches differ in their ability to predict organism movements. In this paper, we evaluate the performance of four connectivity modeling approaches applied to two resistance surfaces in predicting the locations of highway...
Jochems, Arthur; Deist, Timo M; van Soest, Johan; Eble, Michael; Bulens, Paul; Coucke, Philippe; Dries, Wim; Lambin, Philippe; Dekker, Andre
2016-12-01
One of the major hurdles in enabling personalized medicine is obtaining sufficient patient data to feed into predictive models. Combining data originating from multiple hospitals is difficult because of ethical, legal, political, and administrative barriers associated with data sharing. In order to avoid these issues, a distributed learning approach can be used. Distributed learning is defined as learning from data without the data leaving the hospital. Clinical data from 287 lung cancer patients, treated with curative intent with chemoradiation (CRT) or radiotherapy (RT) alone were collected from and stored in 5 different medical institutes (123 patients at MAASTRO (Netherlands, Dutch), 24 at Jessa (Belgium, Dutch), 34 at Liege (Belgium, Dutch and French), 48 at Aachen (Germany, German) and 58 at Eindhoven (Netherlands, Dutch)). A Bayesian network model is adapted for distributed learning (watch the animation: http://youtu.be/nQpqMIuHyOk). The model predicts dyspnea, which is a common side effect after radiotherapy treatment of lung cancer. We show that it is possible to use the distributed learning approach to train a Bayesian network model on patient data originating from multiple hospitals without these data leaving the individual hospital. The AUC of the model is 0.61 (95%CI, 0.51-0.70) on a 5-fold cross-validation and ranges from 0.59 to 0.71 on external validation sets. Distributed learning can allow the learning of predictive models on data originating from multiple hospitals while avoiding many of the data sharing barriers. Furthermore, the distributed learning approach can be used to extract and employ knowledge from routine patient data from multiple hospitals while being compliant to the various national and European privacy laws. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd.. All rights reserved.
Whiteaker, Jeffrey R; Zhang, Heidi; Zhao, Lei; Wang, Pei; Kelly-Spratt, Karen S; Ivey, Richard G; Piening, Brian D; Feng, Li-Chia; Kasarda, Erik; Gurley, Kay E; Eng, Jimmy K; Chodosh, Lewis A; Kemp, Christopher J; McIntosh, Martin W; Paulovich, Amanda G
2007-10-01
Despite their potential to impact diagnosis and treatment of cancer, few protein biomarkers are in clinical use. Biomarker discovery is plagued with difficulties ranging from technological (inability to globally interrogate proteomes) to biological (genetic and environmental differences among patients and their tumors). We urgently need paradigms for biomarker discovery. To minimize biological variation and facilitate testing of proteomic approaches, we employed a mouse model of breast cancer. Specifically, we performed LC-MS/MS of tumor and normal mammary tissue from a conditional HER2/Neu-driven mouse model of breast cancer, identifying 6758 peptides representing >700 proteins. We developed a novel statistical approach (SASPECT) for prioritizing proteins differentially represented in LC-MS/MS datasets and identified proteins over- or under-represented in tumors. Using a combination of antibody-based approaches and multiple reaction monitoring-mass spectrometry (MRM-MS), we confirmed the overproduction of multiple proteins at the tissue level, identified fibulin-2 as a plasma biomarker, and extensively characterized osteopontin as a plasma biomarker capable of early disease detection in the mouse. Our results show that a staged pipeline employing shotgun-based comparative proteomics for biomarker discovery and multiple reaction monitoring for confirmation of biomarker candidates is capable of finding novel tissue and plasma biomarkers in a mouse model of breast cancer. Furthermore, the approach can be extended to find biomarkers relevant to human disease.
Lee, Y; Tien, J M
2001-01-01
We present mathematical models that determine the optimal parameters for strategically routing multidestination traffic in an end-to-end network setting. Multidestination traffic refers to a traffic type that can be routed to any one of a multiple number of destinations. A growing number of communication services is based on multidestination routing. In this parameter-driven approach, a multidestination call is routed to one of the candidate destination nodes in accordance with predetermined decision parameters associated with each candidate node. We present three different approaches: (1) a link utilization (LU) approach, (2) a network cost (NC) approach, and (3) a combined parametric (CP) approach. The LU approach provides the solution that would result in an optimally balanced link utilization, whereas the NC approach provides the least expensive way to route traffic to destinations. The CP approach, on the other hand, provides multiple solutions that help leverage link utilization and cost. The LU approach has in fact been implemented by a long distance carrier resulting in a considerable efficiency improvement in its international direct services, as summarized.
Quantitative Predictive Models for Systemic Toxicity (SOT)
Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...
Note on Professor Sizer's Paper.
ERIC Educational Resources Information Center
Balderston, Frederick E.
1979-01-01
Issues suggested by John Sizer's paper, an overview of the assessment of institutional performance, include: the efficient-frontier approach, multiple-criterion decision-making models, performance analysis approached as path analysis, and assessment of academic quality. (JMD)
Multiple Model Adaptive Attitude Control of LEO Satellite with Angular Velocity Constraints
NASA Astrophysics Data System (ADS)
Shahrooei, Abolfazl; Kazemi, Mohammad Hosein
2018-04-01
In this paper, the multiple model adaptive control is utilized to improve the transient response of attitude control system for a rigid spacecraft. An adaptive output feedback control law is proposed for attitude control under angular velocity constraints and its almost global asymptotic stability is proved. The multiple model adaptive control approach is employed to counteract large uncertainty in parameter space of the inertia matrix. The nonlinear dynamics of a low earth orbit satellite is simulated and the proposed control algorithm is implemented. The reported results show the effectiveness of the suggested scheme.
Hydrological modelling in forested systems
This chapter provides a brief overview of forest hydrology modelling approaches for answering important global research and management questions. Many hundreds of hydrological models have been applied globally across multiple decades to represent and predict forest hydrological p...
Multiple perspective vulnerability analysis of the power network
NASA Astrophysics Data System (ADS)
Wang, Shuliang; Zhang, Jianhua; Duan, Na
2018-02-01
To understand the vulnerability of the power network from multiple perspectives, multi-angle and multi-dimensional vulnerability analysis as well as community based vulnerability analysis are proposed in this paper. Taking into account of central China power grid as an example, correlation analysis of different vulnerability models is discussed. Then, vulnerabilities produced by different vulnerability metrics under the given vulnerability models and failure scenarios are analyzed. At last, applying the community detecting approach, critical areas of central China power grid are identified, Vulnerable and robust communities on both topological and functional perspective are acquired and analyzed. The approach introduced in this paper can be used to help decision makers develop optimal protection strategies. It will be also useful to give a multiple vulnerability analysis of the other infrastructure systems.
Kenny, Joseph P.; Janssen, Curtis L.; Gordon, Mark S.; ...
2008-01-01
Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE) has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as several other simulation areas, using the component model advocated by the Common Component Architecture (CCA) Forum. While programming models do indeed enable sound software engineering practices, the selection of programming model is just one building block in a comprehensive approach to large-scale collaborative development which must also addressmore » interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges and highlighting the advantages which the CCA approach offers for collaborative development.« less
Mixture models for detecting differentially expressed genes in microarrays.
Jones, Liat Ben-Tovim; Bean, Richard; McLachlan, Geoffrey J; Zhu, Justin Xi
2006-10-01
An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.
Chen, Chen; Xie, Yuanchang
2016-06-01
Annual Average Daily Traffic (AADT) is often considered as a main covariate for predicting crash frequencies at urban and suburban intersections. A linear functional form is typically assumed for the Safety Performance Function (SPF) to describe the relationship between the natural logarithm of expected crash frequency and covariates derived from AADTs. Such a linearity assumption has been questioned by many researchers. This study applies Generalized Additive Models (GAMs) and Piecewise Linear Negative Binomial (PLNB) regression models to fit intersection crash data. Various covariates derived from minor-and major-approach AADTs are considered. Three different dependent variables are modeled, which are total multiple-vehicle crashes, rear-end crashes, and angle crashes. The modeling results suggest that a nonlinear functional form may be more appropriate. Also, the results show that it is important to take into consideration the joint safety effects of multiple covariates. Additionally, it is found that the ratio of minor to major-approach AADT has a varying impact on intersection safety and deserves further investigations. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cronkite-Ratcliff, C.; Phelps, G. A.; Boucher, A.
2011-12-01
In many geologic settings, the pathways of groundwater flow are controlled by geologic heterogeneities which have complex geometries. Models of these geologic heterogeneities, and consequently, their effects on the simulated pathways of groundwater flow, are characterized by uncertainty. Multiple-point geostatistics, which uses a training image to represent complex geometric descriptions of geologic heterogeneity, provides a stochastic approach to the analysis of geologic uncertainty. Incorporating multiple-point geostatistics into numerical models provides a way to extend this analysis to the effects of geologic uncertainty on the results of flow simulations. We present two case studies to demonstrate the application of multiple-point geostatistics to numerical flow simulation in complex geologic settings with both static and dynamic conditioning data. Both cases involve the development of a training image from a complex geometric description of the geologic environment. Geologic heterogeneity is modeled stochastically by generating multiple equally-probable realizations, all consistent with the training image. Numerical flow simulation for each stochastic realization provides the basis for analyzing the effects of geologic uncertainty on simulated hydraulic response. The first case study is a hypothetical geologic scenario developed using data from the alluvial deposits in Yucca Flat, Nevada. The SNESIM algorithm is used to stochastically model geologic heterogeneity conditioned to the mapped surface geology as well as vertical drill-hole data. Numerical simulation of groundwater flow and contaminant transport through geologic models produces a distribution of hydraulic responses and contaminant concentration results. From this distribution of results, the probability of exceeding a given contaminant concentration threshold can be used as an indicator of uncertainty about the location of the contaminant plume boundary. The second case study considers a characteristic lava-flow aquifer system in Pahute Mesa, Nevada. A 3D training image is developed by using object-based simulation of parametric shapes to represent the key morphologic features of rhyolite lava flows embedded within ash-flow tuffs. In addition to vertical drill-hole data, transient pressure head data from aquifer tests can be used to constrain the stochastic model outcomes. The use of both static and dynamic conditioning data allows the identification of potential geologic structures that control hydraulic response. These case studies demonstrate the flexibility of the multiple-point geostatistics approach for considering multiple types of data and for developing sophisticated models of geologic heterogeneities that can be incorporated into numerical flow simulations.
Multiple-Input Subject-Specific Modeling of Plasma Glucose Concentration for Feedforward Control.
Kotz, Kaylee; Cinar, Ali; Mei, Yong; Roggendorf, Amy; Littlejohn, Elizabeth; Quinn, Laurie; Rollins, Derrick K
2014-11-26
The ability to accurately develop subject-specific, input causation models, for blood glucose concentration (BGC) for large input sets can have a significant impact on tightening control for insulin dependent diabetes. More specifically, for Type 1 diabetics (T1Ds), it can lead to an effective artificial pancreas (i.e., an automatic control system that delivers exogenous insulin) under extreme changes in critical disturbances. These disturbances include food consumption, activity variations, and physiological stress changes. Thus, this paper presents a free-living, outpatient, multiple-input, modeling method for BGC with strong causation attributes that is stable and guards against overfitting to provide an effective modeling approach for feedforward control (FFC). This approach is a Wiener block-oriented methodology, which has unique attributes for meeting critical requirements for effective, long-term, FFC.
Taravat, Alireza; Oppelt, Natascha
2014-01-01
Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach from the combination of adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with the results of a model combining non-adaptive WMM and pulse coupled neural networks. The presented approach overcomes the non-adaptive WMM filter setting parameters by developing an adaptive WMM model which is a step ahead towards a full automatic dark spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective where the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies. PMID:25474376
Abidi, Samina
2017-10-26
Clinical management of comorbidities is a challenge, especially in a clinical decision support setting, as it requires the safe and efficient reconciliation of multiple disease-specific clinical procedures to formulate a comorbid therapeutic plan that is both effective and safe for the patient. In this paper we pursue the integration of multiple disease-specific Clinical Practice Guidelines (CPG) in order to manage co-morbidities within a computerized Clinical Decision Support System (CDSS). We present a CPG integration framework-termed as COMET (Comorbidity Ontological Modeling & ExecuTion) that manifests a knowledge management approach to model, computerize and integrate multiple CPG to yield a comorbid CPG knowledge model that upon execution can provide evidence-based recommendations for handling comorbid patients. COMET exploits semantic web technologies to achieve (a) CPG knowledge synthesis to translate a paper-based CPG to disease-specific clinical pathways (CP) that include specialized co-morbidity management procedures based on input from domain experts; (b) CPG knowledge modeling to computerize the disease-specific CP using a Comorbidity CPG ontology; (c) CPG knowledge integration by aligning multiple ontologically-modeled CP to develop a unified comorbid CPG knowledge model; and (e) CPG knowledge execution using reasoning engines to derive CPG-mediated recommendations for managing patients with comorbidities. We present a web-accessible COMET CDSS that provides family physicians with CPG-mediated comorbidity decision support to manage Atrial Fibrillation and Chronic Heart Failure. We present our qualitative and quantitative analysis of the knowledge content and usability of COMET CDSS.
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Lindgren, Eric A.
2018-04-01
This paper expands on the objective and motivation for NDE-based characterization and includes a discussion of the current approach using model-assisted inversion being pursued within the Air Force Research Laboratory (AFRL). This includes a discussion of the multiple model-based methods that can be used, including physics-based models, deep machine learning, and heuristic approaches. The benefits and drawbacks of each method is reviewed and the potential to integrate multiple methods is discussed. Initial successes are included to highlight the ability to obtain quantitative values of damage. Additional steps remaining to realize this capability with statistical metrics of accuracy are discussed, and how these results can be used to enable probabilistic life management are addressed. The outcome of this initiative will realize the long-term desired capability of NDE methods to provide quantitative characterization to accelerate certification of new materials and enhance life management of engineered systems.
Geiser, Christian; Bishop, Jacob; Lockhart, Ginger; Shiffman, Saul; Grenard, Jerry L.
2013-01-01
Latent state-trait (LST) and latent growth curve (LGC) models are frequently used in the analysis of longitudinal data. Although it is well-known that standard single-indicator LGC models can be analyzed within either the structural equation modeling (SEM) or multilevel (ML; hierarchical linear modeling) frameworks, few researchers realize that LST and multivariate LGC models, which use multiple indicators at each time point, can also be specified as ML models. In the present paper, we demonstrate that using the ML-SEM rather than the SL-SEM framework to estimate the parameters of these models can be practical when the study involves (1) a large number of time points, (2) individually-varying times of observation, (3) unequally spaced time intervals, and/or (4) incomplete data. Despite the practical advantages of the ML-SEM approach under these circumstances, there are also some limitations that researchers should consider. We present an application to an ecological momentary assessment study (N = 158 youths with an average of 23.49 observations of positive mood per person) using the software Mplus (Muthén and Muthén, 1998–2012) and discuss advantages and disadvantages of using the ML-SEM approach to estimate the parameters of LST and multiple-indicator LGC models. PMID:24416023
Validation and calibration of structural models that combine information from multiple sources.
Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A
2017-02-01
Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
NASA Astrophysics Data System (ADS)
Kikuchi, C.; Ferre, P. A.; Vrugt, J. A.
2011-12-01
Hydrologic models are developed, tested, and refined based on the ability of those models to explain available hydrologic data. The optimization of model performance based upon mismatch between model outputs and real world observations has been extensively studied. However, identification of plausible models is sensitive not only to the models themselves - including model structure and model parameters - but also to the location, timing, type, and number of observations used in model calibration. Therefore, careful selection of hydrologic observations has the potential to significantly improve the performance of hydrologic models. In this research, we seek to reduce prediction uncertainty through optimization of the data collection process. A new tool - multiple model analysis with discriminatory data collection (MMA-DDC) - was developed to address this challenge. In this approach, multiple hydrologic models are developed and treated as competing hypotheses. Potential new data are then evaluated on their ability to discriminate between competing hypotheses. MMA-DDC is well-suited for use in recursive mode, in which new observations are continuously used in the optimization of subsequent observations. This new approach was applied to a synthetic solute transport experiment, in which ranges of parameter values constitute the multiple hydrologic models, and model predictions are calculated using likelihood-weighted model averaging. MMA-DDC was used to determine the optimal location, timing, number, and type of new observations. From comparison with an exhaustive search of all possible observation sequences, we find that MMA-DDC consistently selects observations which lead to the highest reduction in model prediction uncertainty. We conclude that using MMA-DDC to evaluate potential observations may significantly improve the performance of hydrologic models while reducing the cost associated with collecting new data.
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Huang, Guo H.
2011-12-01
Groundwater pollution has gathered more and more attention in the past decades. Conducting an assessment of groundwater contamination risk is desired to provide sound bases for supporting risk-based management decisions. Therefore, the objective of this study is to develop an integrated fuzzy stochastic approach to evaluate risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered to be uncertain with known fuzzy membership functions, and intrinsic permeability is considered to be an interval number with unknown distribution information. A factorial design is conducted to evaluate interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic and second-order stochastic parameters in health risk assessment. The developed approach haw been applied to the management of a real-world petroleum-contaminated site within a western Canada context. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide supports in identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainties associated with simulation and risk assessment efforts.
Courses of action for effects based operations using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Haider, Sajjad; Levis, Alexander H.
2006-05-01
This paper presents an Evolutionary Algorithms (EAs) based approach to identify effective courses of action (COAs) in Effects Based Operations. The approach uses Timed Influence Nets (TINs) as the underlying mathematical model to capture a dynamic uncertain situation. TINs provide a concise graph-theoretic probabilistic approach to specify the cause and effect relationships that exist among the variables of interest (actions, desired effects, and other uncertain events) in a problem domain. The purpose of building these TIN models is to identify and analyze several alternative courses of action. The current practice is to use trial and error based techniques which are not only labor intensive but also produce sub-optimal results and are not capable of modeling constraints among actionable events. The EA based approach presented in this paper is aimed to overcome these limitations. The approach generates multiple COAs that are close enough in terms of achieving the desired effect. The purpose of generating multiple COAs is to give several alternatives to a decision maker. Moreover, the alternate COAs could be generalized based on the relationships that exist among the actions and their execution timings. The approach also allows a system analyst to capture certain types of constraints among actionable events.
NASA Astrophysics Data System (ADS)
Trucu, Dumitru
2016-09-01
In this comprehensive review concerning the modelling of human behaviours in crowd dynamics [3], the authors explore a wide range of mathematical approaches spanning over multiple scales that are suitable to describe emerging crowd behaviours in extreme situations. Focused on deciphering the key aspects leading to emerging crowd patterns evolutions in challenging times such as those requiring an evacuation on a complex venue, the authors address this complex dynamics at both microscale (individual level), mesoscale (probability distributions of interacting individuals), and macroscale (population level), ultimately aiming to gain valuable understanding and knowledge that would inform decision making in managing crisis situations.
Extending Data Worth Analyses to Select Multiple Observations Targeting Multiple Forecasts.
Vilhelmsen, Troels N; Ferré, Ty P A
2018-05-01
Hydrological models are often set up to provide specific forecasts of interest. Owing to the inherent uncertainty in data used to derive model structure and used to constrain parameter variations, the model forecasts will be uncertain. Additional data collection is often performed to minimize this forecast uncertainty. Given our common financial restrictions, it is critical that we identify data with maximal information content with respect to forecast of interest. In practice, this often devolves to qualitative decisions based on expert opinion. However, there is no assurance that this will lead to optimal design, especially for complex hydrogeological problems. Specifically, these complexities include considerations of multiple forecasts, shared information among potential observations, information content of existing data, and the assumptions and simplifications underlying model construction. In the present study, we extend previous data worth analyses to include: simultaneous selection of multiple new measurements and consideration of multiple forecasts of interest. We show how the suggested approach can be used to optimize data collection. This can be used in a manner that suggests specific measurement sets or that produces probability maps indicating areas likely to be informative for specific forecasts. Moreover, we provide examples documenting that sequential measurement election approaches often lead to suboptimal designs and that estimates of data covariance should be included when selecting future measurement sets. © 2017, National Ground Water Association.
Garner, Joseph P.; Thogerson, Collette M.; Dufour, Brett D.; Würbel, Hanno; Murray, James D.; Mench, Joy A.
2011-01-01
The NIMH's new strategic plan, with its emphasis on the “4P's” (Prediction, Preemption, Personalization, & Populations) and biomarker-based medicine requires a radical shift in animal modeling methodology. In particular 4P's models will be non-determinant (i.e. disease severity will depend on secondary environmental and genetic factors); and validated by reverse-translation of animal homologues to human biomarkers. A powerful consequence of the biomarker approach is that different closely-related disorders have a unique fingerprint of biomarkers. Animals can be validated as a highly-specific model of a single disorder by matching this `fingerprint'; or as a model of a symptom seen in multiple disorders by matching common biomarkers. Here we illustrate this approach with two Abnormal Repetitive Behaviors (ARBs) in mice: stereotypies; and barbering (hair pulling). We developed animal versions of the neuropsychological biomarkers that distinguish human ARBs, and tested the fingerprint of the different mouse ARBs. As predicted, the two mouse ARBs were associated with different biomarkers. Both barbering and stereotypy could be discounted as models of OCD (even though they are widely used as such), due to the absence of limbic biomarkers which are characteristic of OCD and hence are necessary for a valid model. Conversely barbering matched the fingerprint of trichotillomania (i.e. selective deficits in set-shifting), suggesting it may be a highly specific model of this disorder. In contrast stereotypies were correlated only with a biomarker (deficits in response shifting) correlated with stereotypies in multiple disorders, suggesting that animal stereotypies model stereotypies in multiple disorders. PMID:21219937
Wang, Fugui; Mladenoff, David J; Forrester, Jodi A; Blanco, Juan A; Schelle, Robert M; Peckham, Scott D; Keough, Cindy; Lucash, Melissa S; Gower, Stith T
The effects of forest management on soil carbon (C) and nitrogen (N) dynamics vary by harvest type and species. We simulated long-term effects of bole-only harvesting of aspen (Populus tremuloides) on stand productivity and interaction of CN cycles with a multiple model approach. Five models, Biome-BGC, CENTURY, FORECAST, LANDIS-II with Century-based soil dynamics, and PnET-CN, were run for 350 yr with seven harvesting events on nutrient-poor, sandy soils representing northwestern Wisconsin, United States. Twenty CN state and flux variables were summarized from the models' outputs and statistically analyzed using ordination and variance analysis methods. The multiple models' averages suggest that bole-only harvest would not significantly affect long-term site productivity of aspen, though declines in soil organic matter and soil N were significant. Along with direct N removal by harvesting, extensive leaching after harvesting before canopy closure was another major cause of N depletion. These five models were notably different in output values of the 20 variables examined, although there were some similarities for certain variables. PnET-CN produced unique results for every variable, and CENTURY showed fewer outliers and similar temporal patterns to the mean of all models. In general, we demonstrated that when there are no site-specific data for fine-scale calibration and evaluation of a single model, the multiple model approach may be a more robust approach for long-term simulations. In addition, multimodeling may also improve the calibration and evaluation of an individual model.
Multiple commodities in statistical microeconomics: Model and market
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Yu, Miao; Du, Xin
2016-11-01
A statistical generalization of microeconomics has been made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed and it was shown that market data provides strong support for the statistical microeconomic description of commodity prices. The case of multiple commodities is studied and a parsimonious generalization of the single commodity model is made for the multiple commodities case. Market data shows that the generalization can accurately model the simultaneous correlation functions of up to four commodities. To accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, and which is independent to the mainstream formulation of microeconomics.
Bennetts, Victor Hernandez; Schaffernicht, Erik; Pomareda, Victor; Lilienthal, Achim J; Marco, Santiago; Trincavelli, Marco
2014-09-17
In this paper, we address the task of gas distribution modeling in scenarios where multiple heterogeneous compounds are present. Gas distribution modeling is particularly useful in emission monitoring applications where spatial representations of the gaseous patches can be used to identify emission hot spots. In realistic environments, the presence of multiple chemicals is expected and therefore, gas discrimination has to be incorporated in the modeling process. The approach presented in this work addresses the task of gas distribution modeling by combining different non selective gas sensors. Gas discrimination is addressed with an open sampling system, composed by an array of metal oxide sensors and a probabilistic algorithm tailored to uncontrolled environments. For each of the identified compounds, the mapping algorithm generates a calibrated gas distribution model using the classification uncertainty and the concentration readings acquired with a photo ionization detector. The meta parameters of the proposed modeling algorithm are automatically learned from the data. The approach was validated with a gas sensitive robot patrolling outdoor and indoor scenarios, where two different chemicals were released simultaneously. The experimental results show that the generated multi compound maps can be used to accurately predict the location of emitting gas sources.
Inferring Ice Thickness from a Glacier Dynamics Model and Multiple Surface Datasets.
NASA Astrophysics Data System (ADS)
Guan, Y.; Haran, M.; Pollard, D.
2017-12-01
The future behavior of the West Antarctic Ice Sheet (WAIS) may have a major impact on future climate. For instance, ice sheet melt may contribute significantly to global sea level rise. Understanding the current state of WAIS is therefore of great interest. WAIS is drained by fast-flowing glaciers which are major contributors to ice loss. Hence, understanding the stability and dynamics of glaciers is critical for predicting the future of the ice sheet. Glacier dynamics are driven by the interplay between the topography, temperature and basal conditions beneath the ice. A glacier dynamics model describes the interactions between these processes. We develop a hierarchical Bayesian model that integrates multiple ice sheet surface data sets with a glacier dynamics model. Our approach allows us to (1) infer important parameters describing the glacier dynamics, (2) learn about ice sheet thickness, and (3) account for errors in the observations and the model. Because we have relatively dense and accurate ice thickness data from the Thwaites Glacier in West Antarctica, we use these data to validate the proposed approach. The long-term goal of this work is to have a general model that may be used to study multiple glaciers in the Antarctic.
Accounting for heterogeneity in meta-analysis using a multiplicative model-an empirical study.
Mawdsley, David; Higgins, Julian P T; Sutton, Alex J; Abrams, Keith R
2017-03-01
In meta-analysis, the random-effects model is often used to account for heterogeneity. The model assumes that heterogeneity has an additive effect on the variance of effect sizes. An alternative model, which assumes multiplicative heterogeneity, has been little used in the medical statistics community, but is widely used by particle physicists. In this paper, we compare the two models using a random sample of 448 meta-analyses drawn from the Cochrane Database of Systematic Reviews. In general, differences in goodness of fit are modest. The multiplicative model tends to give results that are closer to the null, with a narrower confidence interval. Both approaches make different assumptions about the outcome of the meta-analysis. In our opinion, the selection of the more appropriate model will often be guided by whether the multiplicative model's assumption of a single effect size is plausible. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Accuracy and performance of 3D mask models in optical projection lithography
NASA Astrophysics Data System (ADS)
Agudelo, Viviana; Evanschitzky, Peter; Erdmann, Andreas; Fühner, Tim; Shao, Feng; Limmer, Steffen; Fey, Dietmar
2011-04-01
Different mask models have been compared: rigorous electromagnetic field (EMF) modeling, rigorous EMF modeling with decomposition techniques and the thin mask approach (Kirchhoff approach) to simulate optical diffraction from different mask patterns in projection systems for lithography. In addition, each rigorous model was tested for two different formulations for partially coherent imaging: The Hopkins assumption and rigorous simulation of mask diffraction orders for multiple illumination angles. The aim of this work is to closely approximate results of the rigorous EMF method by the thin mask model enhanced with pupil filtering techniques. The validity of this approach for different feature sizes, shapes and illumination conditions is investigated.
A regenerative approach to the treatment of multiple sclerosis.
Deshmukh, Vishal A; Tardif, Virginie; Lyssiotis, Costas A; Green, Chelsea C; Kerman, Bilal; Kim, Hyung Joon; Padmanabhan, Krishnan; Swoboda, Jonathan G; Ahmad, Insha; Kondo, Toru; Gage, Fred H; Theofilopoulos, Argyrios N; Lawson, Brian R; Schultz, Peter G; Lairson, Luke L
2013-10-17
Progressive phases of multiple sclerosis are associated with inhibited differentiation of the progenitor cell population that generates the mature oligodendrocytes required for remyelination and disease remission. To identify selective inducers of oligodendrocyte differentiation, we performed an image-based screen for myelin basic protein (MBP) expression using primary rat optic-nerve-derived progenitor cells. Here we show that among the most effective compounds identifed was benztropine, which significantly decreases clinical severity in the experimental autoimmune encephalomyelitis (EAE) model of relapsing-remitting multiple sclerosis when administered alone or in combination with approved immunosuppressive treatments for multiple sclerosis. Evidence from a cuprizone-induced model of demyelination, in vitro and in vivo T-cell assays and EAE adoptive transfer experiments indicated that the observed efficacy of this drug results directly from an enhancement of remyelination rather than immune suppression. Pharmacological studies indicate that benztropine functions by a mechanism that involves direct antagonism of M1 and/or M3 muscarinic receptors. These studies should facilitate the development of effective new therapies for the treatment of multiple sclerosis that complement established immunosuppressive approaches.
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George
2016-04-01
Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic streamflow predictions. In particular, residual errors of hydrological predictions are often heteroscedastic, with large errors associated with high runoff events. Although multiple approaches exist for representing this heteroscedasticity, few if any studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating a range of approaches for representing heteroscedasticity in residual errors. These approaches include the 'direct' weighted least squares approach and 'transformational' approaches, such as logarithmic, Box-Cox (with and without fitting the transformation parameter), logsinh and the inverse transformation. The study reports (1) theoretical comparison of heteroscedasticity approaches, (2) empirical evaluation of heteroscedasticity approaches using a range of multiple catchments / hydrological models / performance metrics and (3) interpretation of empirical results using theory to provide practical guidance on the selection of heteroscedasticity approaches. Importantly, for hydrological practitioners, the results will simplify the choice of approaches to represent heteroscedasticity. This will enhance their ability to provide hydrological probabilistic predictions with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality).
Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.
Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen
2017-11-01
A new method was developed and implemented into an Excel Visual Basic for Applications (VBAs) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continues the development of methods and algorithms for the generation of MRC, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R 2 , while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRC using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0 written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within the MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
ERIC Educational Resources Information Center
Chiu, Chung-Yi; Fitzgerald, Sandra D.; Strand, David M.; Muller, Veronica; Brooks, Jessica; Chan, Fong
2012-01-01
The main objective of this study was to determine whether motivational and volitional variables identified in the health action process approach (HAPA) model can be used to successfully differentiate people with multiple sclerosis (MS) in different stages of change for exercise and physical activity. Ex-post-facto design using multiple…
Zhu; Dale
2000-10-01
/ Regional resource use planning relies on key regional stakeholder groups using and having equitable access to appropriate social, economic, and environmental information and assessment tools. Decision support systems (DSS) can improve stakeholder access to such information and analysis tools. Regional resource use planning, however, is a complex process involving multiple issues, multiple assessment criteria, multiple stakeholders, and multiple values. There is a need for an approach to DSS development that can assist in understanding and modeling complex problem situations in regional resource use so that areas where DSSs could provide effective support can be identified, and the user requirements can be well established. This paper presents an approach based on the soft systems methodology for identifying DSS opportunities for regional resource use planning, taking the Central Highlands Region of Queensland, Australia, as a case study.
Kim, Eun Sook; Cao, Chunhua
2015-01-01
Considering that group comparisons are common in social science, we examined two latent group mean testing methods when groups of interest were either at the between or within level of multilevel data: multiple-group multilevel confirmatory factor analysis (MG ML CFA) and multilevel multiple-indicators multiple-causes modeling (ML MIMIC). The performance of these methods were investigated through three Monte Carlo studies. In Studies 1 and 2, either factor variances or residual variances were manipulated to be heterogeneous between groups. In Study 3, which focused on within-level multiple-group analysis, six different model specifications were considered depending on how to model the intra-class group correlation (i.e., correlation between random effect factors for groups within cluster). The results of simulations generally supported the adequacy of MG ML CFA and ML MIMIC for multiple-group analysis with multilevel data. The two methods did not show any notable difference in the latent group mean testing across three studies. Finally, a demonstration with real data and guidelines in selecting an appropriate approach to multilevel multiple-group analysis are provided.
Multiple Input Design for Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene
2003-01-01
A method for designing multiple inputs for real-time dynamic system identification in the frequency domain was developed and demonstrated. The designed inputs are mutually orthogonal in both the time and frequency domains, with reduced peak factors to provide good information content for relatively small amplitude excursions. The inputs are designed for selected frequency ranges, and therefore do not require a priori models. The experiment design approach was applied to identify linear dynamic models for the F-15 ACTIVE aircraft, which has multiple control effectors.
NASA Astrophysics Data System (ADS)
Saeed, R. A.; Galybin, A. N.; Popov, V.
2013-01-01
This paper discusses condition monitoring and fault diagnosis in Francis turbine based on integration of numerical modelling with several different artificial intelligence (AI) techniques. In this study, a numerical approach for fluid-structure (turbine runner) analysis is presented. The results of numerical analysis provide frequency response functions (FRFs) data sets along x-, y- and z-directions under different operating load and different position and size of faults in the structure. To extract features and reduce the dimensionality of the obtained FRF data, the principal component analysis (PCA) has been applied. Subsequently, the extracted features are formulated and fed into multiple artificial neural networks (ANN) and multiple adaptive neuro-fuzzy inference systems (ANFIS) in order to identify the size and position of the damage in the runner and estimate the turbine operating conditions. The results demonstrated the effectiveness of this approach and provide satisfactory accuracy even when the input data are corrupted with certain level of noise.
Rortais, Agnès; Arnold, Gérard; Dorne, Jean-Lou; More, Simon J; Sperandio, Giorgio; Streissl, Franz; Szentes, Csaba; Verdonck, Frank
2017-06-01
Current approaches to risk assessment in bees do not take into account co-exposures from multiple stressors. The European Food Safety Authority (EFSA) is deploying resources and efforts to move towards a holistic risk assessment approach of multiple stressors in bees. This paper describes the general principles of pesticide risk assessment in bees, including recent developments at EFSA dealing with risk assessment of single and multiple pesticide residues and biological hazards. The EFSA Guidance Document on the risk assessment of plant protection products in bees highlights the need for the inclusion of an uncertainty analysis, other routes of exposures and multiple stressors such as chemical mixtures and biological agents. The EFSA risk assessment on the survival, spread and establishment of the small hive beetle, Aethina tumida, an invasive alien species, is provided with potential insights for other bee pests such as the Asian hornet, Vespa velutina. Furthermore, data gaps are identified at each step of the risk assessment, and recommendations are made for future research that could be supported under the framework of Horizon 2020. Finally, the recent work conducted at EFSA is presented, under the overarching MUST-B project ("EU efforts towards the development of a holistic approach for the risk assessment on MUltiple STressors in Bees") comprising a toolbox for harmonised data collection under field conditions and a mechanistic model to assess effects from pesticides and other stressors such as biological agents and beekeeping management practices, at the colony level and in a spatially complex landscape. Future perspectives at EFSA include the development of a data model to collate high quality data to calibrate and validate the model to be used as a regulatory tool. Finally, the evidence collected within the framework of MUST-B will support EFSA's activities on the development of a holistic approach to the risk assessment of multiple stressors in bees. In conclusion, EFSA calls for collaborative action at the EU level to establish a common and open access database to serve multiple purposes and different stakeholders. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Neves, Marco A. C.; Simões, Sérgio; Sá e Melo, M. Luisa
2010-12-01
CXCR4 is a G-protein coupled receptor for CXCL12 that plays an important role in human immunodeficiency virus infection, cancer growth and metastasization, immune cell trafficking and WHIM syndrome. In the absence of an X-ray crystal structure, theoretical modeling of the CXCR4 receptor remains an important tool for structure-function analysis and to guide the discovery of new antagonists with potential clinical use. In this study, the combination of experimental data and molecular modeling approaches allowed the development of optimized ligand-receptor models useful for elucidation of the molecular determinants of small molecule binding and functional antagonism. The ligand-guided homology modeling approach used in this study explicitly re-shaped the CXCR4 binding pocket in order to improve discrimination between known CXCR4 antagonists and random decoys. Refinement based on multiple test-sets with small compounds from single chemotypes provided the best early enrichment performance. These results provide an important tool for structure-based drug design and virtual ligand screening of new CXCR4 antagonists.
Detection of epistatic effects with logic regression and a classical linear regression model.
Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata
2014-02-01
To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, usually methods as the Cockerham's model are applied. Within this framework, interactions are understood as the part of the joined effect of several genes which cannot be explained as the sum of their additive effects. However, if a change in the phenotype (as disease) is caused by Boolean combinations of genotypes of several QTLs, this Cockerham's approach is often not capable to identify them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though with the logic regression approach a larger number of models has to be considered (requiring more stringent multiple testing correction) the efficient representation of higher order logic interactions in logic regression models leads to a significant increase of power to detect such interactions as compared to a Cockerham's approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with simulation study and real data analysis.
Multiscale Modeling in the Clinic: Drug Design and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, Colleen E.; An, Gary; Cannon, William R.
A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions tomore » guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.« less
NASA Astrophysics Data System (ADS)
Erfanian, A.; Fomenko, L.; Wang, G.
2016-12-01
Multi-model ensemble (MME) average is considered the most reliable for simulating both present-day and future climates. It has been a primary reference for making conclusions in major coordinated studies i.e. IPCC Assessment Reports and CORDEX. The biases of individual models cancel out each other in MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement however comes with tremendous computational cost, which is especially inhibiting for regional climate modeling as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling that achieves a similar level of bias reduction at a fraction of cost compared with the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs to conduct a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combination of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions with the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
The Role of Multimodel Combination in Improving Streamflow Prediction
NASA Astrophysics Data System (ADS)
Arumugam, S.; Li, W.
2008-12-01
Model errors are the inevitable part in any prediction exercise. One approach that is currently gaining attention to reduce model errors is by optimally combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights could be derived for each model so that the developed multimodel predictions will result in improved predictability. In this study, we present a new approach to combine multiple hydrological models by evaluating their predictability contingent on the predictor state. We combine two hydrological models, 'abcd' model and Variable Infiltration Capacity (VIC) model, with each model's parameter being estimated by two different objective functions to develop multimodel streamflow predictions. The performance of multimodel predictions is compared with individual model predictions using correlation, root mean square error and Nash-Sutcliffe coefficient. To quantify precisely under what conditions the multimodel predictions result in improved predictions, we evaluate the proposed algorithm by testing it against streamflow generated from a known model ('abcd' model or VIC model) with errors being homoscedastic or heteroscedastic. Results from the study show that streamflow simulated from individual models performed better than multimodels under almost no model error. Under increased model error, the multimodel consistently performed better than the single model prediction in terms of all performance measures. The study also evaluates the proposed algorithm for streamflow predictions in two humid river basins from NC as well as in two arid basins from Arizona. Through detailed validation in these four sites, the study shows that multimodel approach better predicts the observed streamflow in comparison to the single model predictions.
NASA Astrophysics Data System (ADS)
Werner, K.; Liu, F. M.; Ostapchenko, S.; Pierog, T.
2004-11-01
After discussing conceptual problems with the conventional string model, we present a new approach, based on a theoretically consistent multiple scattering formalism. First results for proton-proton scattering at 158 GeV are discussed.
Modeling Rabbit Responses to Single and Multiple Aerosol ...
Journal Article Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death through specifying dose-response functions and the time between exposure and the time-to-death (TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
Methodological issues underlying multiple decrement life table analysis.
Mode, C J; Avery, R C; Littman, G S; Potter, R G
1977-02-01
In this paper, the actuarial method of multiple decrement life table analysis of censored, longitudinal data is examined. The discussion is organized in terms of the first segment of usage of an intrauterine device. Weaknesses of the actuarial approach are pointed out, and an alternative approach, based on the classical model of competing risks, is proposed. Finally, the actuarial and the alternative method of analyzing censored data are compared, using data from the Taichung Medical Study on Intrauterine Devices.
An Experimental Approach to Mathematical Modeling in Biology
ERIC Educational Resources Information Center
Ledder, Glenn
2008-01-01
The simplest age-structured population models update a population vector via multiplication by a matrix. These linear models offer an opportunity to introduce mathematical modeling to students of limited mathematical sophistication and background. We begin with a detailed discussion of mathematical modeling, particularly in a biological context.…
A Bayesian trans-dimensional approach for the fusion of multiple geophysical datasets
NASA Astrophysics Data System (ADS)
JafarGandomi, Arash; Binley, Andrew
2013-09-01
We propose a Bayesian fusion approach to integrate multiple geophysical datasets with different coverage and sensitivity. The fusion strategy is based on the capability of various geophysical methods to provide enough resolution to identify either subsurface material parameters or subsurface structure, or both. We focus on electrical resistivity as the target material parameter and electrical resistivity tomography (ERT), electromagnetic induction (EMI), and ground penetrating radar (GPR) as the set of geophysical methods. However, extending the approach to different sets of geophysical parameters and methods is straightforward. Different geophysical datasets are entered into a trans-dimensional Markov chain Monte Carlo (McMC) search-based joint inversion algorithm. The trans-dimensional property of the McMC algorithm allows dynamic parameterisation of the model space, which in turn helps to avoid bias of the post-inversion results towards a particular model. Given that we are attempting to develop an approach that has practical potential, we discretize the subsurface into an array of one-dimensional earth-models. Accordingly, the ERT data that are collected by using two-dimensional acquisition geometry are re-casted to a set of equivalent vertical electric soundings. Different data are inverted either individually or jointly to estimate one-dimensional subsurface models at discrete locations. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical datasets. Information from multiple methods is brought together via introducing joint likelihood function and/or constraining the prior information. A Bayesian maximum entropy approach is used for spatial fusion of spatially dispersed estimated one-dimensional models and mapping of the target parameter. We illustrate the approach with a synthetic dataset and then apply it to a field dataset. We show that the proposed fusion strategy is successful not only in enhancing the subsurface information but also as a survey design tool to identify the appropriate combination of the geophysical tools and show whether application of an individual method for further investigation of a specific site is beneficial.
A Statistical Multimodel Ensemble Approach to Improving Long-Range Forecasting in Pakistan
2012-03-01
Impact of global warming on monsoon variability in Pakistan. J. Anim. Pl. Sci., 21, no. 1, 107–110. Gillies, S., T. Murphree, and D. Meyer, 2012...are generated by multiple regression models that relate globally distributed oceanic and atmospheric predictors to local predictands. The...generated by multiple regression models that relate globally distributed oceanic and atmospheric predictors to local predictands. The predictands are
A Mixtures-of-Trees Framework for Multi-Label Classification
Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos
2015-01-01
We propose a new probabilistic approach for multi-label classification that aims to represent the class posterior distribution P(Y|X). Our approach uses a mixture of tree-structured Bayesian networks, which can leverage the computational advantages of conditional tree-structured models and the abilities of mixtures to compensate for tree-structured restrictions. We develop algorithms for learning the model from data and for performing multi-label predictions using the learned model. Experiments on multiple datasets demonstrate that our approach outperforms several state-of-the-art multi-label classification methods. PMID:25927011
Multielevation calibration of frequency-domain electromagnetic data
Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.
2014-01-01
Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and is, therefore, well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.
Categorical Variables in Multiple Regression: Some Cautions.
ERIC Educational Resources Information Center
O'Grady, Kevin E.; Medoff, Deborah R.
1988-01-01
Limitations of dummy coding and nonsense coding as methods of coding categorical variables for use as predictors in multiple regression analysis are discussed. The combination of these approaches often yields estimates and tests of significance that are not intended by researchers for inclusion in their models. (SLD)
Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models
Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong
2015-01-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.
Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong
2015-05-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.
An Integrated Approach to Damage Accommodation in Flight Control
NASA Technical Reports Server (NTRS)
Boskovic, Jovan D.; Knoebel, Nathan; Mehra, Raman K.; Gregory, Irene
2008-01-01
In this paper we present an integrated approach to in-flight damage accommodation in flight control. The approach is based on Multiple Models, Switching and Tuning (MMST), and consists of three steps: In the first step the main objective is to acquire a realistic aircraft damage model. Modeling of in-flight damage is a highly complex problem since there is a large number of issues that need to be addressed. One of the most important one is that there is strong coupling between structural dynamics, aerodynamics, and flight control. These effects cannot be studied separately due to this coupling. Once a realistic damage model is available, in the second step a large number of models corresponding to different damage cases are generated. One possibility is to generate many linear models and interpolate between them to cover a large portion of the flight envelope. Once these models have been generated, we will implement a recently developed-Model Set Reduction (MSR) technique. The technique is based on parameterizing damage in terms of uncertain parameters, and uses concepts from robust control theory to arrive at a small number of "centered" models such that the controllers corresponding to these models assure desired stability and robustness properties over a subset in the parametric space. By devising a suitable model placement strategy, the entire parametric set is covered with a relatively small number of models and controllers. The third step consists of designing a Multiple Models, Switching and Tuning (MMST) strategy for estimating the current operating regime (damage case) of the aircraft, and switching to the corresponding controller to achieve effective damage accommodation and the desired performance. In the paper present a comprehensive approach to damage accommodation using Model Set Design,MMST, and Variable Structure compensation for coupling nonlinearities. The approach was evaluated on a model of F/A-18 aircraft dynamics under control effector damage, augmented by nonlinear cross-coupling terms and a structural dynamics model. The proposed approach achieved excellent performance under severe damage effects.
Fuzzy adaptive interacting multiple model nonlinear filter for integrated navigation sensor fusion.
Tseng, Chien-Hao; Chang, Chih-Wen; Jwo, Dah-Jing
2011-01-01
In this paper, the application of the fuzzy interacting multiple model unscented Kalman filter (FUZZY-IMMUKF) approach to integrated navigation processing for the maneuvering vehicle is presented. The unscented Kalman filter (UKF) employs a set of sigma points through deterministic sampling, such that a linearization process is not necessary, and therefore the errors caused by linearization as in the traditional extended Kalman filter (EKF) can be avoided. The nonlinear filters naturally suffer, to some extent, the same problem as the EKF for which the uncertainty of the process noise and measurement noise will degrade the performance. As a structural adaptation (model switching) mechanism, the interacting multiple model (IMM), which describes a set of switching models, can be utilized for determining the adequate value of process noise covariance. The fuzzy logic adaptive system (FLAS) is employed to determine the lower and upper bounds of the system noise through the fuzzy inference system (FIS). The resulting sensor fusion strategy can efficiently deal with the nonlinear problem for the vehicle navigation. The proposed FUZZY-IMMUKF algorithm shows remarkable improvement in the navigation estimation accuracy as compared to the relatively conventional approaches such as the UKF and IMMUKF.
Tomographic PIV: particles versus blobs
NASA Astrophysics Data System (ADS)
Champagnat, Frédéric; Cornic, Philippe; Cheminet, Adam; Leclaire, Benjamin; Le Besnerais, Guy; Plyer, Aurélien
2014-08-01
We present an alternative approach to tomographic particle image velocimetry (tomo-PIV) that seeks to recover nearly single voxel particles rather than blobs of extended size. The baseline of our approach is a particle-based representation of image data. An appropriate discretization of this representation yields an original linear forward model with a weight matrix built with specific samples of the system’s point spread function (PSF). Such an approach requires only a few voxels to explain the image appearance, therefore it favors much more sparsely reconstructed volumes than classic tomo-PIV. The proposed forward model is general and flexible and can be embedded in a classical multiplicative algebraic reconstruction technique (MART) or a simultaneous multiplicative algebraic reconstruction technique (SMART) inversion procedure. We show, using synthetic PIV images and by way of a large exploration of the generating conditions and a variety of performance metrics, that the model leads to better results than the classical tomo-PIV approach, in particular in the case of seeding densities greater than 0.06 particles per pixel and of PSFs characterized by a standard deviation larger than 0.8 pixels.
Matching purpose with practice: revolutionising nurse education with mita.
Denny, Margaret; Weber, Ellen F; Wells, John; Stokes, Olga Redmond; Lane, Paula; Denieffe, Suzanne
2008-01-01
Multiple intelligences have only recently entered the teaching dialogue in nurse education and research. It is argued that despite the rhetoric of a student centred approach nurse education remains wedded to conventional teaching approaches that fail to engage with the individual and unwittingly silence the student's voice. This paper will examine the concept of multiple intelligences (MI) and outline Gardner's contention that the brain functions using eight intelligences which can be employed to improve learning at an individual level. It will then outline the use of MI using a five phase model, developed by Weber, known as a multiple intelligence teaching approach (MITA). It is contended that MITA has great potential in nurse education, particularly in terms of reinforcing learning beyond the educational domain and into the individual's professional development and clinical practice.
Feature and Region Selection for Visual Learning.
Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando
2016-03-01
Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoWs) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ(2) and intersection kernel; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube illustrate the benefits of our approach.
New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation
NASA Astrophysics Data System (ADS)
Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W.
2015-02-01
In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry.
Application of model predictive control for optimal operation of wind turbines
NASA Astrophysics Data System (ADS)
Yuan, Yuan; Cao, Pei; Tang, J.
2017-04-01
For large-scale wind turbines, reducing maintenance cost is a major challenge. Model predictive control (MPC) is a promising approach to deal with multiple conflicting objectives using the weighed sum approach. In this research, model predictive control method is applied to wind turbine to find an optimal balance between multiple objectives, such as the energy capture, loads on turbine components, and the pitch actuator usage. The actuator constraints are integrated into the objective function at the control design stage. The analysis is carried out in both the partial load region and full load region, and the performances are compared with those of a baseline gain scheduling PID controller. The application of this strategy achieves enhanced balance of component loads, the average power and actuator usages in partial load region.
Comparing multiple imputation methods for systematically missing subject-level data.
Kline, David; Andridge, Rebecca; Kaizar, Eloise
2017-06-01
When conducting research synthesis, the collection of studies that will be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur on the observation-level (time-varying) or the subject-level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogenous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd. Copyright © 2015 John Wiley & Sons, Ltd.
Multiple point statistical simulation using uncertain (soft) conditional data
NASA Astrophysics Data System (ADS)
Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou
2018-05-01
Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 80s. In the last two decades, state of the art simulation methods have changed from being based on covariance-based 2-point statistics to multiple-point statistics (MPS), that allow simulation of more realistic Earth-structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This pose the problem of integration of these different sources of information, such that decisions related to reservoir models can be taken on an as informed base as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential simulation based MPS simulation methods conditional to uncertain (soft) data, as a computational efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information, and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first make use of a preferential simulation path, where more informed model parameters are visited preferentially to less informed ones. The second approach involves using non co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provides a computationally attractive approach for integration of information about a reservoir model.
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Using constraints and their value for optimization of large ODE systems
Domijan, Mirela; Rand, David A.
2015-01-01
We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300
Hydrological modelling in forested systems | Science ...
This chapter provides a brief overview of forest hydrology modelling approaches for answering important global research and management questions. Many hundreds of hydrological models have been applied globally across multiple decades to represent and predict forest hydrological processes. The focus of this chapter is on process-based models and approaches, specifically 'forest hydrology models'; that is, physically based simulation tools that quantify compartments of the forest hydrological cycle. Physically based models can be considered those that describe the conservation of mass, momentum and/or energy. The purpose of this chapter is to provide a brief overview of forest hydrology modeling approaches for answering important global research and management questions. The focus of this chapter is on process-based models and approaches, specifically “forest hydrology models”, i.e., physically-based simulation tools that quantify compartments of the forest hydrological cycle.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Li, Jun
2002-09-01
In this paper a class of stochastic multiple-objective programming problems with one quadratic, several linear objective functions and linear constraints has been introduced. The former model is transformed into a deterministic multiple-objective nonlinear programming model by means of the introduction of random variables' expectation. The reference direction approach is used to deal with linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using the weighted sums. The quadratic problem is transformed into a linear (parametric) complementary problem, the basic formula for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference direction and weighted sums. Varying the parameter vector on the right-hand side of the model, the DM can freely search the efficient frontier with the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.
NASA Astrophysics Data System (ADS)
Lehmann, Rüdiger; Lösler, Michael
2017-12-01
Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the usage of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: A synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well-established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criterions like AIC.
Decision-analytic modeling studies: An overview for clinicians using multiple myeloma as an example.
Rochau, U; Jahn, B; Qerimi, V; Burger, E A; Kurzthaler, C; Kluibenschaedl, M; Willenbacher, E; Gastl, G; Willenbacher, W; Siebert, U
2015-05-01
The purpose of this study was to provide a clinician-friendly overview of decision-analytic models evaluating different treatment strategies for multiple myeloma (MM). We performed a systematic literature search to identify studies evaluating MM treatment strategies using mathematical decision-analytic models. We included studies that were published as full-text articles in English, and assessed relevant clinical endpoints, and summarized methodological characteristics (e.g., modeling approaches, simulation techniques, health outcomes, perspectives). Eleven decision-analytic modeling studies met our inclusion criteria. Five different modeling approaches were adopted: decision-tree modeling, Markov state-transition modeling, discrete event simulation, partitioned-survival analysis and area-under-the-curve modeling. Health outcomes included survival, number-needed-to-treat, life expectancy, and quality-adjusted life years. Evaluated treatment strategies included novel agent-based combination therapies, stem cell transplantation and supportive measures. Overall, our review provides a comprehensive summary of modeling studies assessing treatment of MM and highlights decision-analytic modeling as an important tool for health policy decision making. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Advanced Multiple Processor Configuration Study. Final Report.
ERIC Educational Resources Information Center
Clymer, S. J.
This summary of a study on multiple processor configurations includes the objectives, background, approach, and results of research undertaken to provide the Air Force with a generalized model of computer processor combinations for use in the evaluation of proposed flight training simulator computational designs. An analysis of a real-time flight…
Representation and presentation of requirements knowledge
NASA Technical Reports Server (NTRS)
Johnson, W. L.; Feather, Martin S.; Harris, David R.
1992-01-01
An approach to representation and presentation of knowledge used in the ARIES, an experimental requirements/specification environment, is described. The approach applies the notion of a representation architecture to the domain of software engineering and incorporates a strong coupling to a transformation system. It is characterized by a single highly expressive underlying representation, interfaced simultaneously to multiple presentations, each with notations of differing degrees of expressivity. This enables analysts to use multiple languages for describing systems and have these descriptions yield a single consistent model of the system.
Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Hou, Zhangshuan; Meng, Da
2016-07-17
In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches are mainly focusing on maintaining the inter-dependency between multiple geographically related areas. These approaches are applied onto cross-correlated load time series as well as their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.
Garner, Joseph P; Thogerson, Collette M; Dufour, Brett D; Würbel, Hanno; Murray, James D; Mench, Joy A
2011-06-01
The NIMH's new strategic plan, with its emphasis on the "4P's" (Prediction, Pre-emption, Personalization, and Populations) and biomarker-based medicine requires a radical shift in animal modeling methodology. In particular 4P's models will be non-determinant (i.e. disease severity will depend on secondary environmental and genetic factors); and validated by reverse-translation of animal homologues to human biomarkers. A powerful consequence of the biomarker approach is that different closely related disorders have a unique fingerprint of biomarkers. Animals can be validated as a highly specific model of a single disorder by matching this 'fingerprint'; or as a model of a symptom seen in multiple disorders by matching common biomarkers. Here we illustrate this approach with two Abnormal Repetitive Behaviors (ARBs) in mice: stereotypies and barbering (hair pulling). We developed animal versions of the neuropsychological biomarkers that distinguish human ARBs, and tested the fingerprint of the different mouse ARBs. As predicted, the two mouse ARBs were associated with different biomarkers. Both barbering and stereotypy could be discounted as models of OCD (even though they are widely used as such), due to the absence of limbic biomarkers which are characteristic of OCD and hence are necessary for a valid model. Conversely barbering matched the fingerprint of trichotillomania (i.e. selective deficits in set-shifting), suggesting it may be a highly specific model of this disorder. In contrast stereotypies were correlated only with a biomarker (deficits in response shifting) correlated with stereotypies in multiple disorders, suggesting that animal stereotypies model stereotypies in multiple disorders. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Thornton, P. E.; Nacp Site Synthesis Participants
2010-12-01
The North American Carbon Program (NACP) synthesis effort includes an extensive intercomparison of modeled and observed ecosystem states and fluxes preformed with multiple models across multiple sites. The participating models span a range of complexity and intended application, while the participating sites cover a broad range of natural and managed ecosystems in North America, from the subtropics to arctic tundra, and coastal to interior climates. A unique characteristic of this collaborative effort is that multiple independent observations are available at all sites: fluxes are measured with the eddy covariance technique, and standard biometric and field sampling methods provide estimates of standing stock and annual production in multiple categories. In addition, multiple modeling approaches are employed to make predictions at each site, varying, for example, in the use of diagnostic vs. prognostic leaf area index. Given multiple independent observational constraints and multiple classes of model, we evaluate the internal consistency of observations at each site, and use this information to extend previously derived estimates of uncertainty in the flux observations. Model results are then compared with all available observations and models are ranked according to their consistency with each type of observation (high frequency flux measurement, carbon stock, annual production). We demonstrate a range of internal consistency across the sites, and show that some models which perform well against one observational metric perform poorly against others. We use this analysis to construct a hypothesis for combining eddy covariance, biometrics, and other standard physiological and ecological measurements which, as data collection proceeded over several years, would present an increasingly challenging target for next generation models.
NASA Astrophysics Data System (ADS)
Lakra, Suchita; Mandal, Sanjoy
2017-06-01
A quadruple micro-optical ring resonator (QMORR) with multiple output bus waveguides is mathematically modeled and analyzed by making use of the delay-line signal processing approach in Z-domain and Mason's gain formula. The performances of QMORR with two output bus waveguides with vertical coupling are analyzed. This proposed structure is capable of providing wider free spectral response from both the output buses with appreciable cross talk. Thus, this configuration could provide increased capacity to insert a large number of communication channels. The simulated frequency response characteristic and its dispersion and group delay characteristics are graphically presented using the MATLAB environment.
A multi-scale approach to designing therapeutics for tuberculosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje
Approximately one third of the world’s population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impact our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems and over multiple length and time scales into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oralmore » and inhaled antibiotics, and (c) the effect of vaccination.« less
A multi-scale approach to designing therapeutics for tuberculosis
Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje; ...
2015-04-20
Approximately one third of the world’s population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impact our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems and over multiple length and time scales into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oralmore » and inhaled antibiotics, and (c) the effect of vaccination.« less
Control structural interaction testbed: A model for multiple flexible body verification
NASA Technical Reports Server (NTRS)
Chory, M. A.; Cohen, A. L.; Manning, R. A.; Narigon, M. L.; Spector, V. A.
1993-01-01
Conventional end-to-end ground tests for verification of control system performance become increasingly complicated with the development of large, multiple flexible body spacecraft structures. The expense of accurately reproducing the on-orbit dynamic environment and the attendant difficulties in reducing and accounting for ground test effects limits the value of these tests. TRW has developed a building block approach whereby a combination of analysis, simulation, and test has replaced end-to-end performance verification by ground test. Tests are performed at the component, subsystem, and system level on engineering testbeds. These tests are aimed at authenticating models to be used in end-to-end performance verification simulations: component and subassembly engineering tests and analyses establish models and critical parameters, unit level engineering and acceptance tests refine models, and subsystem level tests confirm the models' overall behavior. The Precision Control of Agile Spacecraft (PCAS) project has developed a control structural interaction testbed with a multibody flexible structure to investigate new methods of precision control. This testbed is a model for TRW's approach to verifying control system performance. This approach has several advantages: (1) no allocation for test measurement errors is required, increasing flight hardware design allocations; (2) the approach permits greater latitude in investigating off-nominal conditions and parametric sensitivities; and (3) the simulation approach is cost effective, because the investment is in understanding the root behavior of the flight hardware and not in the ground test equipment and environment.
NASA Technical Reports Server (NTRS)
Peddle, Derek R.; Huemmrich, K. Fred; Hall, Forrest G.; Masek, Jeffrey G.; Soenen, Scott A.; Jackson, Chris D.
2011-01-01
Canopy reflectance model inversion using look-up table approaches provides powerful and flexible options for deriving improved forest biophysical structural information (BSI) compared with traditional statistical empirical methods. The BIOPHYS algorithm is an improved, physically-based inversion approach for deriving BSI for independent use and validation and for monitoring, inventory and quantifying forest disturbance as well as input to ecosystem, climate and carbon models. Based on the multiple-forward mode (MFM) inversion approach, BIOPHYS results were summarized from different studies (Minnesota/NASA COVER; Virginia/LEDAPS; Saskatchewan/BOREAS), sensors (airborne MMR; Landsat; MODIS) and models (GeoSail; GOMS). Applications output included forest density, height, crown dimension, branch and green leaf area, canopy cover, disturbance estimates based on multi-temporal chronosequences, and structural change following recovery from forest fires over the last century. Good correspondences with validation field data were obtained. Integrated analyses of multiple solar and view angle imagery further improved retrievals compared with single pass data. Quantifying ecosystem dynamics such as the area and percent of forest disturbance, early regrowth and succession provide essential inputs to process-driven models of carbon flux. BIOPHYS is well suited for large-area, multi-temporal applications involving multiple image sets and mosaics for assessing vegetation disturbance and quantifying biophysical structural dynamics and change. It is also suitable for integration with forest inventory, monitoring, updating, and other programs.
Learning of Rule Ensembles for Multiple Attribute Ranking Problems
NASA Astrophysics Data System (ADS)
Dembczyński, Krzysztof; Kotłowski, Wojciech; Słowiński, Roman; Szeląg, Marcin
In this paper, we consider the multiple attribute ranking problem from a Machine Learning perspective. We propose two approaches to statistical learning of an ensemble of decision rules from decision examples provided by the Decision Maker in terms of pairwise comparisons of some objects. The first approach consists in learning a preference function defining a binary preference relation for a pair of objects. The result of application of this function on all pairs of objects to be ranked is then exploited using the Net Flow Score procedure, giving a linear ranking of objects. The second approach consists in learning a utility function for single objects. The utility function also gives a linear ranking of objects. In both approaches, the learning is based on the boosting technique. The presented approaches to Preference Learning share good properties of the decision rule preference model and have good performance in the massive-data learning problems. As Preference Learning and Multiple Attribute Decision Aiding share many concepts and methodological issues, in the introduction, we review some aspects bridging these two fields. To illustrate the two approaches proposed in this paper, we solve with them a toy example concerning the ranking of a set of cars evaluated by multiple attributes. Then, we perform a large data experiment on real data sets. The first data set concerns credit rating. Since recent research in the field of Preference Learning is motivated by the increasing role of modeling preferences in recommender systems and information retrieval, we chose two other massive data sets from this area - one comes from movie recommender system MovieLens, and the other concerns ranking of text documents from 20 Newsgroups data set.
Linear mixed-effects modeling approach to FMRI group analysis
Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.
2013-01-01
Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure is violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even at the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in details. PMID:23376789
Linear mixed-effects modeling approach to FMRI group analysis.
Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W
2013-06-01
Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure is violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even at the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in details. Published by Elsevier Inc.
Predicting Trophic Interactions and Habitat Utilization in the California Current Ecosystem
2013-09-30
in the California Current Ecosystem Jerome Fiechter UC Santa Cruz Institute of Marine Sciences 1156 High Street Santa Cruz, CA 95064 phone... Ecosystem (CCLME), the long-term goal of our modeling approach is to better understand and characterize biological “hotspots” (i.e., the aggregation of...multiple marine organisms over multiple trophic levels) off the U.S. west coast and in other regions where similar fully-coupled ecosystem models may
Le Bihan, Nicolas; Margerin, Ludovic
2009-07-01
In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
Autonomous detection of crowd anomalies in multiple-camera surveillance feeds
NASA Astrophysics Data System (ADS)
Nordlöf, Jonas; Andersson, Maria
2016-10-01
A novel approach for autonomous detection of anomalies in crowded environments is presented in this paper. The proposed models uses a Gaussian mixture probability hypothesis density (GM-PHD) filter as feature extractor in conjunction with different Gaussian mixture hidden Markov models (GM-HMMs). Results, based on both simulated and recorded data, indicate that this method can track and detect anomalies on-line in individual crowds through multiple camera feeds in a crowded environment.
Spectral decompositions of multiple time series: a Bayesian non-parametric approach.
Macaro, Christian; Prado, Raquel
2014-01-01
We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated to the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented.We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.
This article describes a new path for compliance with ASHRAE Standard 90.1-2016. The new approach will lead to increased flexibility for designers, multiple uses for the same building energy models, increased recognition of energy saving design strategies, and lower energy modeling costs.
Multiple imputation for handling missing outcome data when estimating the relative risk.
Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B
2017-09-06
Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. However fully conditional specification is not without its shortcomings, and so further research is needed to identify optimal approaches for relative risk estimation within the multiple imputation framework.
A management and optimisation model for water supply planning in water deficit areas
NASA Astrophysics Data System (ADS)
Molinos-Senante, María; Hernández-Sancho, Francesc; Mocholí-Arce, Manuel; Sala-Garrido, Ramón
2014-07-01
The integrated water resources management approach has proven to be a suitable option for efficient, equitable and sustainable water management. In water-poor regions experiencing acute and/or chronic shortages, optimisation techniques are a useful tool for supporting the decision process of water allocation. In order to maximise the value of water use, an optimisation model was developed which involves multiple supply sources (conventional and non-conventional) and multiple users. Penalties, representing monetary losses in the event of an unfulfilled water demand, have been incorporated into the objective function. This model represents a novel approach which considers water distribution efficiency and the physical connections between water supply and demand points. Subsequent empirical testing using data from a Spanish Mediterranean river basin demonstrated the usefulness of the global optimisation model to solve existing water imbalances at the river basin level.
A systematic study of multiple minerals precipitation modelling in wastewater treatment.
Kazadi Mbamba, Christian; Tait, Stephan; Flores-Alsina, Xavier; Batstone, Damien J
2015-11-15
Mineral solids precipitation is important in wastewater treatment. However approaches to minerals precipitation modelling are varied, often empirical, and mostly focused on single precipitate classes. A common approach, applicable to multi-species precipitates, is needed to integrate into existing wastewater treatment models. The present study systematically tested a semi-mechanistic modelling approach, using various experimental platforms with multiple minerals precipitation. Experiments included dynamic titration with addition of sodium hydroxide to synthetic wastewater, and aeration to progressively increase pH and induce precipitation in real piggery digestate and sewage sludge digestate. The model approach consisted of an equilibrium part for aqueous phase reactions and a kinetic part for minerals precipitation. The model was fitted to dissolved calcium, magnesium, total inorganic carbon and phosphate. Results indicated that precipitation was dominated by the mineral struvite, forming together with varied and minor amounts of calcium phosphate and calcium carbonate. The model approach was noted to have the advantage of requiring a minimal number of fitted parameters, so the model was readily identifiable. Kinetic rate coefficients, which were statistically fitted, were generally in the range 0.35-11.6 h(-1) with confidence intervals of 10-80% relative. Confidence regions for the kinetic rate coefficients were often asymmetric with model-data residuals increasing more gradually with larger coefficient values. This suggests that a large kinetic coefficient could be used when actual measured data is lacking for a particular precipitate-matrix combination. Correlation between the kinetic rate coefficients of different minerals was low, indicating that parameter values for individual minerals could be independently fitted (keeping all other model parameters constant). Implementation was therefore relatively flexible, and would be readily expandable to include other minerals. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Taravat, A.; Del Frate, F.
2013-09-01
As a major aspect of marine pollution, oil release into the sea has serious biological and environmental impacts. Among remote sensing systems (which is a tool that offers a non-destructive investigation method), synthetic aperture radar (SAR) can provide valuable synoptic information about the position and size of the oil spill due to its wide area coverage and day/night, and all-weather capabilities. In this paper we present a new automated method for oil-spill monitoring. A new approach is based on the combination of Weibull Multiplicative Model and machine learning techniques to differentiate between dark spots and the background. First, the filter created based on Weibull Multiplicative Model is applied to each sub-image. Second, the sub-image is segmented by two different neural networks techniques (Pulsed Coupled Neural Networks and Multilayer Perceptron Neural Networks). As the last step, a very simple filtering process is used to eliminate the false targets. The proposed approaches were tested on 20 ENVISAT and ERS2 images which contained dark spots. The same parameters were used in all tests. For the overall dataset, the average accuracies of 94.05 % and 95.20 % were obtained for PCNN and MLP methods, respectively. The average computational time for dark-spot detection with a 256 × 256 image in about 4 s for PCNN segmentation using IDL software which is the fastest one in this field at present. Our experimental results demonstrate that the proposed approach is very fast, robust and effective. The proposed approach can be applied to the future spaceborne SAR images.
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris
2018-03-01
Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
Bridging paradigms: hybrid mechanistic-discriminative predictive models.
Doyle, Orla M; Tsaneva-Atansaova, Krasimira; Harte, James; Tiffin, Paul A; Tino, Peter; Díaz-Zuccarini, Vanessa
2013-03-01
Many disease processes are extremely complex and characterized by multiple stochastic processes interacting simultaneously. Current analytical approaches have included mechanistic models and machine learning (ML), which are often treated as orthogonal viewpoints. However, to facilitate truly personalized medicine, new perspectives may be required. This paper reviews the use of both mechanistic models and ML in healthcare as well as emerging hybrid methods, which are an exciting and promising approach for biologically based, yet data-driven advanced intelligent systems.
2002-08-01
the measurement noise, as well as the physical model of the forward scattered electric field. The Bayesian algorithms for the Uncertain Permittivity...received at multiple sensors. In this research project a tissue- model -based signal-detection theory approach for the detection of mammary tumors in the...oriented information processors. In this research project a tissue- model - based signal detection theory approach for the detection of mammary tumors in the
McFarquhar, Martyn; McKie, Shane; Emsley, Richard; Suckling, John; Elliott, Rebecca; Williams, Stephen
2016-01-01
Repeated measurements and multimodal data are common in neuroimaging research. Despite this, conventional approaches to group level analysis ignore these repeated measurements in favour of multiple between-subject models using contrasts of interest. This approach has a number of drawbacks as certain designs and comparisons of interest are either not possible or complex to implement. Unfortunately, even when attempting to analyse group level data within a repeated-measures framework, the methods implemented in popular software packages make potentially unrealistic assumptions about the covariance structure across the brain. In this paper, we describe how this issue can be addressed in a simple and efficient manner using the multivariate form of the familiar general linear model (GLM), as implemented in a new MATLAB toolbox. This multivariate framework is discussed, paying particular attention to methods of inference by permutation. Comparisons with existing approaches and software packages for dependent group-level neuroimaging data are made. We also demonstrate how this method is easily adapted for dependency at the group level when multiple modalities of imaging are collected from the same individuals. Follow-up of these multimodal models using linear discriminant functions (LDA) is also discussed, with applications to future studies wishing to integrate multiple scanning techniques into investigating populations of interest. PMID:26921716
NASA Astrophysics Data System (ADS)
Holt, Jason; Icarus Allen, J.; Anderson, Thomas R.; Brewin, Robert; Butenschön, Momme; Harle, James; Huse, Geir; Lehodey, Patrick; Lindemann, Christian; Memery, Laurent; Salihoglu, Baris; Senina, Inna; Yool, Andrew
2014-12-01
It has long been recognised that there are strong interactions and feedbacks between climate, upper ocean biogeochemistry and marine food webs, and also that food web structure and phytoplankton community distribution are important determinants of variability in carbon production and export from the euphotic zone. Numerical models provide a vital tool to explore these interactions, given their capability to investigate multiple connected components of the system and the sensitivity to multiple drivers, including potential future conditions. A major driver for ecosystem model development is the demand for quantitative tools to support ecosystem-based management initiatives. The purpose of this paper is to review approaches to the modelling of marine ecosystems with a focus on the North Atlantic Ocean and its adjacent shelf seas, and to highlight the challenges they face and suggest ways forward. We consider the state of the art in simulating oceans and shelf sea physics, planktonic and higher trophic level ecosystems, and look towards building an integrative approach with these existing tools. We note how the different approaches have evolved historically and that many of the previous obstacles to harmonisation may no longer be present. We illustrate this with examples from the on-going and planned modelling effort in the Integrative Modelling Work Package of the EURO-BASIN programme.
NASA Astrophysics Data System (ADS)
Danner, Travis W.
Developing technology systems requires all manner of investment---engineering talent, prototypes, test facilities, and more. Even for simple design problems the investment can be substantial; for complex technology systems, the development costs can be staggering. The profitability of a corporation in a technology-driven industry is crucially dependent on maximizing the effectiveness of research and development investment. Decision-makers charged with allocation of this investment are forced to choose between the further evolution of existing technologies and the pursuit of revolutionary technologies. At risk on the one hand is excessive investment in an evolutionary technology which has only limited availability for further improvement. On the other hand, the pursuit of a revolutionary technology may mean abandoning momentum and the potential for substantial evolutionary improvement resulting from the years of accumulated knowledge. The informed answer to this question, evolutionary or revolutionary, requires knowledge of the expected rate of improvement and the potential a technology offers for further improvement. This research is dedicated to formulating the assessment and forecasting tools necessary to acquire this knowledge. The same physical laws and principles that enable the development and improvement of specific technologies also limit the ultimate capability of those technologies. Researchers have long used this concept as the foundation for modeling technological advancement through extrapolation by analogy to biological growth models. These models are employed to depict technology development as it asymptotically approaches limits established by the fundamental principles on which the technological approach is based. This has proven an effective and accurate approach to modeling and forecasting simple single-attribute technologies. With increased system complexity and the introduction of multiple system objectives, however, the usefulness of this modeling technique begins to diminish. With the introduction of multiple objectives, researchers often abandon technology growth models for scoring models and technology frontiers. While both approaches possess advantages over current growth models for the assessment of multi-objective technologies, each lacks a necessary dimension for comprehensive technology assessment. By collapsing multiple system metrics into a single, non-intuitive technology measure, scoring models provide a succinct framework for multi-objective technology assessment and forecasting. Yet, with no consideration of physical limits, scoring models provide no insight as to the feasibility of a particular combination of system capabilities. They only indicate that a given combination of system capabilities yields a particular score. Conversely, technology frontiers are constructed with the distinct objective of providing insight into the feasibility of system capability combinations. Yet again, upper limits to overall system performance are ignored. Furthermore, the data required to forecast subsequent technology frontiers is often inhibitive. In an attempt to reincorporate the fundamental nature of technology advancement as bound by physical principles, researchers have sought to normalize multi-objective systems whereby the variability of a single system objective is eliminated as a result of changes in the remaining objectives. This drastically limits the applicability of the resulting technology model because it is only applicable for a single setting of all other system attributes. Attempts to maintain the interaction between the growth curves of each technical objective of a complex system have thus far been limited to qualitative and subjective consideration. This research proposes the formulation of multidimensional growth models as an approach to simulating the advancement of multi-objective technologies towards their upper limits. Multidimensional growth models were formulated by noticing and exploiting the correlation between technology growth models and technology frontiers. Both are frontiers in actuality. The technology growth curve is a frontier between capability levels of a single attribute and time, while a technology frontier is a frontier between the capability levels of two or more attributes. Multidimensional growth models are formulated by exploiting the mathematical significance of this correlation. The result is a model that can capture both the interaction between multiple system attributes and their expected rates of improvement over time. The fundamental nature of technology development is maintained, and interdependent growth curves are generated for each system metric with minimal data requirements. Being founded on the basic nature of technology advancement, relative to physical limits, the availability for further improvement can be determined for a single metric relative to other system measures of merit. A by-product of this modeling approach is a single n-dimensional technology frontier linking all n system attributes with time. This provides an environment capable of forecasting future system capability in the form of advancing technology frontiers. The ability of a multidimensional growth model to capture the expected improvement of a specific technological approach is dependent on accurately identifying the physical limitations to each pertinent attribute. This research investigates two potential approaches to identifying those physical limits, a physics-based approach and a regression-based approach. The regression-based approach has found limited acceptance among forecasters, although it does show potential for estimating upper limits with a specified degree of uncertainty. Forecasters have long favored physics-based approaches for establishing the upper limit to unidimensional growth models. The task of accurately identifying upper limits has become increasingly difficult with the extension of growth models into multiple dimensions. A lone researcher may be able to identify the physical limitation to a single attribute of a simple system; however, as system complexity and the number of attributes increases, the attention of researchers from multiple fields of study is required. Thus, limit identification is itself an area of research and development requiring some level of investment. Whether estimated by physics or regression-based approaches, predicted limits will always have some degree of uncertainty. This research takes the approach of quantifying the impact of that uncertainty on model forecasts rather than heavily endorsing a single technique to limit identification. In addition to formulating the multidimensional growth model, this research provides a systematic procedure for applying that model to specific technology architectures. Researchers and decision-makers are able to investigate the potential for additional improvement within that technology architecture and to estimate the expected cost of each incremental improvement relative to the cost of past improvements. In this manner, multidimensional growth models provide the necessary information to set reasonable program goals for the further evolution of a particular technological approach or to establish the need for revolutionary approaches in light of the constraining limits of conventional approaches.
Edwards, J R; Scully, J A; Brtek, M D
2000-12-01
Research into the changing nature of work requires comprehensive models of work design. One such model is the interdisciplinary framework (M. A. Campion, 1988), which integrates 4 work-design approaches (motivational, mechanistic, biological, perceptual-motor) and links each approach to specific outcomes. Unfortunately, studies of this framework have used methods that disregard measurement error, overlook dimensions within each work-design approach, and treat each approach and outcome separately. This study reanalyzes data from M. A. Campion (1988), using structural equation models that incorporate measurement error, specify multiple dimensions for each work-design approach, and examine the work-design approaches and outcomes jointly. Results show that previous studies underestimate relationships between work-design approaches and outcomes and that dimensions within each approach exhibit relationships with outcomes that differ in magnitude and direction.
Bayesian modelling of lung function data from multiple-breath washout tests.
Mahar, Robert K; Carlin, John B; Ranganathan, Sarath; Ponsonby, Anne-Louise; Vuillermin, Peter; Vukcevic, Damjan
2018-05-30
Paediatric respiratory researchers have widely adopted the multiple-breath washout (MBW) test because it allows assessment of lung function in unsedated infants and is well suited to longitudinal studies of lung development and disease. However, a substantial proportion of MBW tests in infants fail current acceptability criteria. We hypothesised that a model-based approach to analysing the data, in place of traditional simple empirical summaries, would enable more efficient use of these tests. We therefore developed a novel statistical model for infant MBW data and applied it to 1197 tests from 432 individuals from a large birth cohort study. We focus on Bayesian estimation of the lung clearance index, the most commonly used summary of lung function from MBW tests. Our results show that the model provides an excellent fit to the data and shed further light on statistical properties of the standard empirical approach. Furthermore, the modelling approach enables the lung clearance index to be estimated by using tests with different degrees of completeness, something not possible with the standard approach. Our model therefore allows previously unused data to be used rather than discarded, as well as routine use of shorter tests without significant loss of precision. Beyond our specific application, our work illustrates a number of important aspects of Bayesian modelling in practice, such as the importance of hierarchical specifications to account for repeated measurements and the value of model checking via posterior predictive distributions. Copyright © 2018 John Wiley & Sons, Ltd.
The SAGE Model of Social Psychological Research.
Power, Séamus A; Velez, Gabriel; Qadafi, Ahmad; Tennant, Joseph
2018-05-01
We propose a SAGE model for social psychological research. Encapsulated in our acronym is a proposal to have a synthetic approach to social psychological research, in which qualitative methods are augmentative to quantitative ones, qualitative methods can be generative of new experimental hypotheses, and qualitative methods can capture experiences that evade experimental reductionism. We remind social psychological researchers that psychology was founded in multiple methods of investigation at multiple levels of analysis. We discuss historical examples and our own research as contemporary examples of how a SAGE model can operate in part or as an integrated whole. The implications of our model are discussed.
NASA Astrophysics Data System (ADS)
Kar, Somnath; Choudhury, Subikash; Muhuri, Sanjib; Ghosh, Premomoy
2017-01-01
Satisfactory description of data by hydrodynamics-motivated models, as has been reported recently by experimental collaborations at the LHC, confirm "collectivity" in high-multiplicity proton-proton (p p ) collisions. Notwithstanding this, a detailed study of high-multiplicity p p data in other approaches or models is essential for better understanding of the specific phenomenon. In this study, the focus is on a pQCD-inspired multiparton interaction (MPI) model, including a color reconnection (CR) scheme as implemented in the Monte Carlo code, PYTHIA8 tune 4C. The MPI with the color reconnection reproduces the dependence of the mean transverse momentum ⟨pT⟩ on the charged particle multiplicity Nch in p p collisions at the LHC, providing an alternate explanation to the signature of "hydrodynamic collectivity" in p p data. It is, therefore, worth exploring how this model responds to other related features of high-multiplicity p p events. This comparative study with recent experimental results demonstrates the limitations of the model in explaining some of the prominent features of the final-state charged particles up to the intermediate-pT (pT<2.0 GeV /c ) range in high-multiplicity p p events.
How models can support ecosystem-based management of coral reefs
NASA Astrophysics Data System (ADS)
Weijerman, Mariska; Fulton, Elizabeth A.; Janssen, Annette B. G.; Kuiper, Jan J.; Leemans, Rik; Robson, Barbara J.; van de Leemput, Ingrid A.; Mooij, Wolf M.
2015-11-01
Despite the importance of coral reef ecosystems to the social and economic welfare of coastal communities, the condition of these marine ecosystems have generally degraded over the past decades. With an increased knowledge of coral reef ecosystem processes and a rise in computer power, dynamic models are useful tools in assessing the synergistic effects of local and global stressors on ecosystem functions. We review representative approaches for dynamically modeling coral reef ecosystems and categorize them as minimal, intermediate and complex models. The categorization was based on the leading principle for model development and their level of realism and process detail. This review aims to improve the knowledge of concurrent approaches in coral reef ecosystem modeling and highlights the importance of choosing an appropriate approach based on the type of question(s) to be answered. We contend that minimal and intermediate models are generally valuable tools to assess the response of key states to main stressors and, hence, contribute to understanding ecological surprises. As has been shown in freshwater resources management, insight into these conceptual relations profoundly influences how natural resource managers perceive their systems and how they manage ecosystem recovery. We argue that adaptive resource management requires integrated thinking and decision support, which demands a diversity of modeling approaches. Integration can be achieved through complimentary use of models or through integrated models that systemically combine all relevant aspects in one model. Such whole-of-system models can be useful tools for quantitatively evaluating scenarios. These models allow an assessment of the interactive effects of multiple stressors on various, potentially conflicting, management objectives. All models simplify reality and, as such, have their weaknesses. While minimal models lack multidimensionality, system models are likely difficult to interpret as they require many efforts to decipher the numerous interactions and feedback loops. Given the breadth of questions to be tackled when dealing with coral reefs, the best practice approach uses multiple model types and thus benefits from the strength of different models types.
Detection of Uniform and Nonuniform Differential Item Functioning by Item-Focused Trees
ERIC Educational Resources Information Center
Berger, Moritz; Tutz, Gerhard
2016-01-01
Detection of differential item functioning (DIF) by use of the logistic modeling approach has a long tradition. One big advantage of the approach is that it can be used to investigate nonuniform (NUDIF) as well as uniform DIF (UDIF). The classical approach allows one to detect DIF by distinguishing between multiple groups. We propose an…
Aerostructural interaction in a collaborative MDO environment
NASA Astrophysics Data System (ADS)
Ciampa, Pier Davide; Nagel, Björn
2014-10-01
The work presents an approach for aircraft design and optimization, developed to account for fluid-structure interactions in MDO applications. The approach makes use of a collaborative distributed design environment, and focuses on the influence of multiple physics based aerostructural models, on the overall aircraft synthesis and optimization. The approach is tested for the design of large transportation aircraft.
Siddique, Juned; Harel, Ofer; Crespi, Catherine M.; Hedeker, Donald
2014-01-01
The true missing data mechanism is never known in practice. We present a method for generating multiple imputations for binary variables that formally incorporates missing data mechanism uncertainty. Imputations are generated from a distribution of imputation models rather than a single model, with the distribution reflecting subjective notions of missing data mechanism uncertainty. Parameter estimates and standard errors are obtained using rules for nested multiple imputation. Using simulation, we investigate the impact of missing data mechanism uncertainty on post-imputation inferences and show that incorporating this uncertainty can increase the coverage of parameter estimates. We apply our method to a longitudinal smoking cessation trial where nonignorably missing data were a concern. Our method provides a simple approach for formalizing subjective notions regarding nonresponse and can be implemented using existing imputation software. PMID:24634315
Gravitational decoupling and the Picard-Lefschetz approach
NASA Astrophysics Data System (ADS)
Brown, Jon; Cole, Alex; Shiu, Gary; Cottrell, William
2018-01-01
In this work, we consider tunneling between nonmetastable states in gravitational theories. Such processes arise in various contexts, e.g., in inflationary scenarios where the inflaton potential involves multiple fields or multiple branches. They are also relevant for bubble wall nucleation in some cosmological settings. However, we show that the transition amplitudes computed using the Euclidean method generally do not approach the corresponding field theory limit as Mp→∞ . This implies that in the Euclidean framework, there is no systematic expansion in powers of GN for such processes. Such considerations also carry over directly to no-boundary scenarios involving Hawking-Turok instantons. In this note, we illustrate this failure of decoupling in the Euclidean approach with a simple model of axion monodromy and then argue that the situation can be remedied with a Lorentzian prescription such as the Picard-Lefschetz theory. As a proof of concept, we illustrate with a simple model how tunneling transition amplitudes can be calculated using the Picard-Lefschetz approach.
Thorlund, Kristian; Thabane, Lehana; Mills, Edward J
2013-01-11
Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus, ads valuable precision when data is sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pair wise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogenous variance approach. MTC models using a homogenous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.
NASA Astrophysics Data System (ADS)
Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran
2017-08-01
Physics is a lessons that related to students' daily experience. Therefore, before the students studying in class formally, actually they have already have a visualization and prior knowledge about natural phenomenon and could wide it themselves. The learning process in class should be aimed to detect, process, construct, and use students' mental model. So, students' mental model agree with and builds in the right concept. The previous study held in MAN 1 Muna informs that in learning process the teacher did not pay attention students' mental model. As a consequence, the learning process has not tried to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as a effect of problem solving based learning model with multiple representations approach. This study is pre experimental design with one group pre post. It is conducted in XI IPA MAN 1 Muna 2016/2017. Data collection uses problem solving test concept the kinetic theory of gasses and interview to get students' MMA. The result of this study is clarification students' MMA which is categorized in 3 category; High Mental Modelling Ability (H-MMA) for 7
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, Brian; Scherzinger, William
2017-01-19
Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, andmore » compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, Brian T.; Scherzinger, William M.
2017-01-19
A new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and comparedmore » to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. As a result through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.« less
NASA Astrophysics Data System (ADS)
Wolff, J.; Jankov, I.; Beck, J.; Carson, L.; Frimel, J.; Harrold, M.; Jiang, H.
2016-12-01
It is well known that global and regional numerical weather prediction ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system for addressing the deficiencies in ensemble modeling is the use of stochastic physics to represent model-related uncertainty. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), Stochastic Perturbation of Physics Tendencies (SPPT), or some combination of all three. The focus of this study is to assess the model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) when using stochastic approaches. For this purpose, the test utilized a single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model, with ensemble members produced by employing stochastic methods. Parameter perturbations were employed in the Rapid Update Cycle (RUC) land surface model and Mellor-Yamada-Nakanishi-Niino (MYNN) planetary boundary layer scheme. Results will be presented in terms of bias, error, spread, skill, accuracy, reliability, and sharpness using the Model Evaluation Tools (MET) verification package. Due to the high level of complexity of running a frequently updating (hourly), high spatial resolution (3 km), large domain (CONUS) ensemble system, extensive high performance computing (HPC) resources were needed to meet this objective. Supercomputing resources were provided through the National Center for Atmospheric Research (NCAR) Strategic Capability (NSC) project support, allowing for a more extensive set of tests over multiple seasons, consequently leading to more robust results. Through the use of these stochastic innovations and powerful supercomputing at NCAR, further insights and advancements in ensemble forecasting at convection-permitting scales will be possible.
Integrative prescreening in analysis of multiple cancer genomic studies
2012-01-01
Background In high throughput cancer genomic studies, results from the analysis of single datasets often suffer from a lack of reproducibility because of small sample sizes. Integrative analysis can effectively pool and analyze multiple datasets and provides a cost effective way to improve reproducibility. In integrative analysis, simultaneously analyzing all genes profiled may incur high computational cost. A computationally affordable remedy is prescreening, which fits marginal models, can be conducted in a parallel manner, and has low computational cost. Results An integrative prescreening approach is developed for the analysis of multiple cancer genomic datasets. Simulation shows that the proposed integrative prescreening has better performance than alternatives, particularly including prescreening with individual datasets, an intensity approach and meta-analysis. We also analyze multiple microarray gene profiling studies on liver and pancreatic cancers using the proposed approach. Conclusions The proposed integrative prescreening provides an effective way to reduce the dimensionality in cancer genomic studies. It can be coupled with existing analysis methods to identify cancer markers. PMID:22799431
Akkermans, Simen; Noriega Fernandez, Estefanía; Logist, Filip; Van Impe, Jan F
2017-01-02
Efficient modelling of the microbial growth rate can be performed by combining the effects of individual conditions in a multiplicative way, known as the gamma concept. However, several studies have illustrated that interactions between different effects should be taken into account at stressing environmental conditions to achieve a more accurate description of the growth rate. In this research, a novel approach for modeling the interactions between the effects of environmental conditions on the microbial growth rate is introduced. As a case study, the effect of temperature and pH on the growth rate of Escherichia coli K12 is modeled, based on a set of computer controlled bioreactor experiments performed under static environmental conditions. The models compared in this case study are the gamma model, the model of Augustin and Carlier (2000), the model of Le Marc et al. (2002) and the novel multiplicative interaction model, developed in this paper. This novel model enables the separate identification of interactions between the effects of two (or more) environmental conditions. The comparison of these models focuses on the accuracy, interpretability and compatibility with efficient modeling approaches. Moreover, for the separate effects of temperature and pH, new cardinal parameter model structures are proposed. The novel interaction model contributes to a generic modeling approach, resulting in predictive models that are (i) accurate, (ii) easily identifiable with a limited work load, (iii) modular, and (iv) biologically interpretable. Copyright © 2016. Published by Elsevier B.V.
Comparing Multiple Discrepancies Theory to Affective Models of Subjective Wellbeing
ERIC Educational Resources Information Center
Blore, Jed D.; Stokes, Mark A.; Mellor, David; Firth, Lucy; Cummins, Robert A.
2011-01-01
The Subjective Wellbeing (SWB) literature is replete with competing theories detailing the mechanisms underlying the construction and maintenance of SWB. The current study aimed to compare and contrast two of these approaches: multiple discrepancies theory (MDT) and an affective-cognitive theory of SWB. MDT posits SWB to be the result of perceived…
ERIC Educational Resources Information Center
Medeiros Vieira, Leandro Mauricio; Ferasso, Marcos; Schröeder, Christine da Silva
2014-01-01
This theoretical essay is a learning approach reflexion on Howard Gardner's Theory of Multiple Intelligences and the possibilities provided by the education model known as open and distance learning. Open and distance learning can revolutionize traditional pedagogical practice, meeting the needs of those who have different forms of cognitive…
ERIC Educational Resources Information Center
Campbell, S. Duke; Greenberg, Barry
The development of a predictive equation capable of explaining a significant percentage of enrollment variability at Florida International University is described. A model utilizing trend analysis and a multiple regression approach to enrollment forecasting was adapted to investigate enrollment dynamics at the university. Four independent…
ERIC Educational Resources Information Center
Toro, Maritsa
2011-01-01
The statistical assessment of dimensionality provides evidence of the underlying constructs measured by a survey or test instrument. This study focuses on educational measurement, specifically tests comprised of items described as multidimensional. That is, items that require examinee proficiency in multiple content areas and/or multiple cognitive…
Comparison of Methods to Trace Multiple Subskills: Is LR-DBN Best?
ERIC Educational Resources Information Center
Xu, Yanbo; Mostow, Jack
2012-01-01
A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…
NASA Astrophysics Data System (ADS)
Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.
2013-12-01
A new software application called MAD# has been coupled with the HTCondor high throughput computing system to aid scientists and educators with the characterization of spatial random fields and enable understanding the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open source desktop software application used to characterize spatial random fields using direct and indirect information through Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information with a target spatial random field via a forward simulation model. MAD# executes inverse process running the forward model multiple times to transfer information from indirect information to the target variable. MAD# uses two parallelization profiles according to computational resources available: one computer with multiple cores and multiple computers - multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers for submits serial or parallel jobs using scheduling policies, resources monitoring, job queuing mechanism. This poster will show how MAD# reduces the time execution of the characterization of random fields using these two parallel approaches in different case studies. A test of the approach was conducted using 1D problem with 400 cells to characterize saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one computer approach (eight cores) were evaluated 100,000 simulations in 12 hours (10 million - 1200 hours approximately). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a processing time of 60 hours non-continuous in five days. HTCondor reduced the processing time for uncertainty characterization by a factor of 20 (1200 hours reduced to 60 hours.)
Barzegar, Rahim; Moghaddam, Asghar Asghari; Deo, Ravinesh; Fijani, Elham; Tziritis, Evangelos
2018-04-15
Constructing accurate and reliable groundwater risk maps provide scientifically prudent and strategic measures for the protection and management of groundwater. The objectives of this paper are to design and validate machine learning based-risk maps using ensemble-based modelling with an integrative approach. We employ the extreme learning machines (ELM), multivariate regression splines (MARS), M5 Tree and support vector regression (SVR) applied in multiple aquifer systems (e.g. unconfined, semi-confined and confined) in the Marand plain, North West Iran, to encapsulate the merits of individual learning algorithms in a final committee-based ANN model. The DRASTIC Vulnerability Index (VI) ranged from 56.7 to 128.1, categorized with no risk, low and moderate vulnerability thresholds. The correlation coefficient (r) and Willmott's Index (d) between NO 3 concentrations and VI were 0.64 and 0.314, respectively. To introduce improvements in the original DRASTIC method, the vulnerability indices were adjusted by NO 3 concentrations, termed as the groundwater contamination risk (GCR). Seven DRASTIC parameters utilized as the model inputs and GCR values utilized as the outputs of individual machine learning models were served in the fully optimized committee-based ANN-predictive model. The correlation indicators demonstrated that the ELM and SVR models outperformed the MARS and M5 Tree models, by virtue of a larger d and r value. Subsequently, the r and d metrics for the ANN-committee based multi-model in the testing phase were 0.8889 and 0.7913, respectively; revealing the superiority of the integrated (or ensemble) machine learning models when compared with the original DRASTIC approach. The newly designed multi-model ensemble-based approach can be considered as a pragmatic step for mapping groundwater contamination risks of multiple aquifer systems with multi-model techniques, yielding the high accuracy of the ANN committee-based model. Copyright © 2017 Elsevier B.V. All rights reserved.
Tracking of multiple targets using online learning for reference model adaptation.
Pernkopf, Franz
2008-12-01
Recently, much work has been done in multiple object tracking on the one hand and on reference model adaptation for a single-object tracker on the other side. In this paper, we do both tracking of multiple objects (faces of people) in a meeting scenario and online learning to incrementally update the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Many methods unlike our approach assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our implemented tracker based on particle filters and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference model learning during tracking of indoor and outdoor scenes which supports a more robust tracking. Therefore, we report the average of the standard deviation of the trajectories over numerous tracking runs depending on the learning rate.
Filtering Meteoroid Flights Using Multiple Unscented Kalman Filters
NASA Astrophysics Data System (ADS)
Sansom, E. K.; Bland, P. A.; Rutten, M. G.; Paxman, J.; Towner, M. C.
2016-11-01
Estimator algorithms are immensely versatile and powerful tools that can be applied to any problem where a dynamic system can be modeled by a set of equations and where observations are available. A well designed estimator enables system states to be optimally predicted and errors to be rigorously quantified. Unscented Kalman filters (UKFs) and interactive multiple models can be found in methods from satellite tracking to self-driving cars. The luminous trajectory of the Bunburra Rockhole fireball was observed by the Desert Fireball Network in mid-2007. The recorded data set is used in this paper to examine the application of these two techniques as a viable approach to characterizing fireball dynamics. The nonlinear, single-body system of equations, used to model meteoroid entry through the atmosphere, is challenged by gross fragmentation events that may occur. The incorporation of the UKF within an interactive multiple model smoother provides a likely solution for when fragmentation events may occur as well as providing a statistical analysis of the state uncertainties. In addition to these benefits, another advantage of this approach is its automatability for use within an image processing pipeline to facilitate large fireball data analyses and meteorite recoveries.
NASA Technical Reports Server (NTRS)
Parker, L. Neergaard; Zank, G. P.
2013-01-01
Successful forecasting of energetic particle events in space weather models require algorithms for correctly predicting the spectrum of ions accelerated from a background population of charged particles. We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles outside the box. We adiabatically decompress the accelerated particle distribution between each shock by either the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000) where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (E(sub max)) appropriate for quasi-parallel and quasi-perpendicular shocks and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).
Noise Modeling From Conductive Shields Using Kirchhoff Equations.
Sandin, Henrik J; Volegov, Petr L; Espy, Michelle A; Matlashov, Andrei N; Savukov, Igor M; Schultz, Larry J
2010-10-09
Progress in the development of high-sensitivity magnetic-field measurements has stimulated interest in understanding the magnetic noise of conductive materials, especially of magnetic shields based on high-permeability materials and/or high-conductivity materials. For example, SQUIDs and atomic magnetometers have been used in many experiments with mu-metal shields, and additionally SQUID systems frequently have radio frequency shielding based on thin conductive materials. Typical existing approaches to modeling noise only work with simple shield and sensor geometries while common experimental setups today consist of multiple sensor systems with complex shield geometries. With complex sensor arrays used in, for example, MEG and Ultra Low Field MRI studies, knowledge of the noise correlation between sensors is as important as knowledge of the noise itself. This is crucial for incorporating efficient noise cancelation schemes for the system. We developed an approach that allows us to calculate the Johnson noise for arbitrary shaped shields and multiple sensor systems. The approach is efficient enough to be able to run on a single PC system and return results on a minute scale. With a multiple sensor system our approach calculates not only the noise for each sensor but also the noise correlation matrix between sensors. Here we will show how the algorithm can be implemented.
NASA Astrophysics Data System (ADS)
Hurford, Anthony; Harou, Julien
2015-04-01
Climate change has challenged conventional methods of planning water resources infrastructure investment, relying on stationarity of time-series data. It is not clear how to best use projections of future climatic conditions. Many-objective simulation-optimisation and trade-off analysis using evolutionary algorithms has been proposed as an approach to addressing complex planning problems with multiple conflicting objectives. The search for promising assets and policies can be carried out across a range of climate projections, to identify the configurations of infrastructure investment shown by model simulation to be robust under diverse future conditions. Climate projections can be used in different ways within a simulation model to represent the range of possible future conditions and understand how optimal investments vary according to the different hydrological conditions. We compare two approaches, optimising over an ensemble of different 20-year flow and PET timeseries projections, and separately for individual future scenarios built synthetically from the original ensemble. Comparing trade-off curves and surfaces generated by the two approaches helps understand the limits and benefits of optimising under different sets of conditions. The comparison is made for the Tana Basin in Kenya, where climate change combined with multiple conflicting objectives of water management and infrastructure investment mean decision-making is particularly challenging.
A model for solving the prescribed burn planning problem.
Rachmawati, Ramya; Ozlen, Melih; Reinke, Karin J; Hearne, John W
2015-01-01
The increasing frequency of destructive wildfires, with a consequent loss of life and property, has led to fire and land management agencies initiating extensive fuel management programs. This involves long-term planning of fuel reduction activities such as prescribed burning or mechanical clearing. In this paper, we propose a mixed integer programming (MIP) model that determines when and where fuel reduction activities should take place. The model takes into account multiple vegetation types in the landscape, their tolerance to frequency of fire events, and keeps track of the age of each vegetation class in each treatment unit. The objective is to minimise fuel load over the planning horizon. The complexity of scheduling fuel reduction activities has led to the introduction of sophisticated mathematical optimisation methods. While these approaches can provide optimum solutions, they can be computationally expensive, particularly for fuel management planning which extends across the landscape and spans long term planning horizons. This raises the question of how much better do exact modelling approaches compare to simpler heuristic approaches in their solutions. To answer this question, the proposed model is run using an exact MIP (using commercial MIP solver) and two heuristic approaches that decompose the problem into multiple single-period sub problems. The Knapsack Problem (KP), which is the first heuristic approach, solves the single period problems, using an exact MIP approach. The second heuristic approach solves the single period sub problem using a greedy heuristic approach. The three methods are compared in term of model tractability, computational time and the objective values. The model was tested using randomised data from 711 treatment units in the Barwon-Otway district of Victoria, Australia. Solutions for the exact MIP could be obtained for up to a 15-year planning only using a standard implementation of CPLEX. Both heuristic approaches can solve significantly larger problems, involving 100-year or even longer planning horizons. Furthermore there are no substantial differences in the solutions produced by the three approaches. It is concluded that for practical purposes a heuristic method is to be preferred to the exact MIP approach.
Selection of latent variables for multiple mixed-outcome models
ZHOU, LING; LIN, HUAZHEN; SONG, XINYUAN; LI, YI
2014-01-01
Latent variable models have been widely used for modeling the dependence structure of multiple outcomes data. However, the formulation of a latent variable model is often unknown a priori, the misspecification will distort the dependence structure and lead to unreliable model inference. Moreover, multiple outcomes with varying types present enormous analytical challenges. In this paper, we present a class of general latent variable models that can accommodate mixed types of outcomes. We propose a novel selection approach that simultaneously selects latent variables and estimates parameters. We show that the proposed estimator is consistent, asymptotically normal and has the oracle property. The practical utility of the methods is confirmed via simulations as well as an application to the analysis of the World Values Survey, a global research project that explores peoples’ values and beliefs and the social and personal characteristics that might influence them. PMID:27642219
NASA Astrophysics Data System (ADS)
Hagos Subagadis, Yohannes; Schütze, Niels; Grundmann, Jens
2015-04-01
The planning and implementation of effective water resources management strategies need an assessment of multiple (physical, environmental, and socio-economic) issues, and often requires new research in which knowledge of diverse disciplines are combined in a unified methodological and operational frameworks. Such integrative research to link different knowledge domains faces several practical challenges. Such complexities are further compounded by multiple actors frequently with conflicting interests and multiple uncertainties about the consequences of potential management decisions. A fuzzy-stochastic multiple criteria decision analysis tool was developed in this study to systematically quantify both probabilistic and fuzzy uncertainties associated with complex hydrosystems management. It integrated physical process-based models, fuzzy logic, expert involvement and stochastic simulation within a general framework. Subsequently, the proposed new approach is applied to a water-scarce coastal arid region water management problem in northern Oman, where saltwater intrusion into a coastal aquifer due to excessive groundwater extraction for irrigated agriculture has affected the aquifer sustainability, endangering associated socio-economic conditions as well as traditional social structure. Results from the developed method have provided key decision alternatives which can serve as a platform for negotiation and further exploration. In addition, this approach has enabled to systematically quantify both probabilistic and fuzzy uncertainties associated with the decision problem. Sensitivity analysis applied within the developed tool has shown that the decision makers' risk aversion and risk taking attitude may yield in different ranking of decision alternatives. The developed approach can be applied to address the complexities and uncertainties inherent in water resources systems to support management decisions, while serving as a platform for stakeholder participation.
Leppin, Aaron L.; Montori, Victor M.; Gionfriddo, Michael R.
2015-01-01
An increasing proportion of healthcare resources in the United States are directed toward an expanding group of complex and multimorbid patients. Federal stakeholders have called for new models of care to meet the needs of these patients. Minimally Disruptive Medicine (MDM) is a theory-based, patient-centered, and context-sensitive approach to care that focuses on achieving patient goals for life and health while imposing the smallest possible treatment burden on patients’ lives. The MDM Care Model is designed to be pragmatically comprehensive, meaning that it aims to address any and all factors that impact the implementation and effectiveness of care for patients with multiple chronic conditions. It comprises core activities that map to an underlying and testable theoretical framework. This encourages refinement and future study. Here, we present the conceptual rationale for and a practical approach to minimally disruptive care for patients with multiple chronic conditions. We introduce some of the specific tools and strategies that can be used to identify the right care for these patients and to put it into practice. PMID:27417747
Sensitivity Analysis of Multiple Informant Models When Data are Not Missing at Random
Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae; Scaramella, Laura; Leve, Leslie; Reiss, David
2014-01-01
Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups may be retained even if only one member of a group contributes data. Statistical inference is based on the assumption that data are missing completely at random or missing at random. Importantly, whether or not data are missing is assumed to be independent of the missing data. A saturated correlates model that incorporates correlates of the missingness or the missing data into an analysis and multiple imputation that may also use such correlates offer advantages over the standard implementation of SEM when data are not missing at random because these approaches may result in a data analysis problem for which the missingness is ignorable. This paper considers these approaches in an analysis of family data to assess the sensitivity of parameter estimates to assumptions about missing data, a strategy that may be easily implemented using SEM software. PMID:25221420
Integrated presentation of ecological risk from multiple stressors
NASA Astrophysics Data System (ADS)
Goussen, Benoit; Price, Oliver R.; Rendal, Cecilie; Ashauer, Roman
2016-10-01
Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by construct-ing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic.
Integrated presentation of ecological risk from multiple stressors.
Goussen, Benoit; Price, Oliver R; Rendal, Cecilie; Ashauer, Roman
2016-10-26
Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by construct-ing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic.
Finite-temperature time-dependent variation with multiple Davydov states
NASA Astrophysics Data System (ADS)
Wang, Lu; Fujihashi, Yuta; Chen, Lipeng; Zhao, Yang
2017-03-01
The Dirac-Frenkel time-dependent variational approach with Davydov Ansätze is a sophisticated, yet efficient technique to obtain an accurate solution to many-body Schrödinger equations for energy and charge transfer dynamics in molecular aggregates and light-harvesting complexes. We extend this variational approach to finite temperature dynamics of the spin-boson model by adopting a Monte Carlo importance sampling method. In order to demonstrate the applicability of this approach, we compare calculated real-time quantum dynamics of the spin-boson model with that from numerically exact iterative quasiadiabatic propagator path integral (QUAPI) technique. The comparison shows that our variational approach with the single Davydov Ansätze is in excellent agreement with the QUAPI method at high temperatures, while the two differ at low temperatures. Accuracy in dynamics calculations employing a multitude of Davydov trial states is found to improve substantially over the single Davydov Ansatz, especially at low temperatures. At a moderate computational cost, our variational approach with the multiple Davydov Ansatz is shown to provide accurate spin-boson dynamics over a wide range of temperatures and bath spectral densities.
A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting
NASA Astrophysics Data System (ADS)
Kim, T.; Joo, K.; Seo, J.; Heo, J. H.
2016-12-01
Hydrologic time series forecasting is an essential task in water resources management and it becomes more difficult due to the complexity of runoff process. Traditional stochastic models such as ARIMA family has been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches has been studied with the advantage of discovering relevant features in a nonlinear relation among variables. This study aims to compare the predictability between the traditional stochastic model and the machine learning approach. Seasonal ARIMA model was used as the traditional time series model, and Random Forest model which consists of decision tree and ensemble method using multiple predictor approach was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 of Chungju dam in South Korea were used for modeling and forecasting. In order to evaluate the performances of the used models, one step ahead and multi-step ahead forecasting was applied. Root mean squared error and mean absolute error of two models were compared.
Models of Sector Flows Under Local, Regional and Airport Weather Constraints
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak
2017-01-01
Recently, the ATM community has made important progress in collaborative trajectory management through the introduction of a new FAA traffic management initiative called a Collaborative Trajectory Options Program (CTOP). FAA can use CTOPs to manage air traffic under multiple constraints (manifested as flow constrained areas or FCAs) in the system, and it allows flight operators to indicate their preferences for routing and delay options. CTOPs also permits better management of the overall trajectory of flights by considering both routing and departure delay options simultaneously. However, adoption of CTOPs in airspace has been hampered by many factors that include challenges in how to identify constrained areas and how to set rates for the FCAs. Decision support tools providing assistance would be particularly helpful in effective use of CTOPs. Such DSTs tools would need models of demand and capacity in the presence of multiple constraints. This study examines different approaches to using historical data to create and validate models of maximum flows in sectors and other airspace regions in the presence of multiple constraints. A challenge in creating an empirical model of flows under multiple constraints is a lack of sufficient historical data that captures diverse situations involving combinations of multiple constraints especially those with severe weather. The approach taken here to deal with this is two-fold. First, we create a generalized sector model encompassing multiple sectors rather than individual sectors in order to increase the amount of data used for creating the model by an order of magnitude. Secondly, we decompose the problem so that the amount of data needed is reduced. This involves creating a baseline demand model plus a separate weather constrained flow reduction model and then composing these into a single integrated model. A nominal demand model is a flow model (gdem) in the presence of clear local weather. This defines the flow as a function of weather constraints in neighboring regions, airport constraints and weather in locations that can cause re-routes to the location of interest. A weather constrained flow reduction model (fwx-red) is a model of reduction in baseline counts as a function of local weather. Because the number of independent variables associated with each of the two decomposed models is smaller than that with a single model, need for amount of data is reduced. Finally, a composite model that combines these two can be represented as fwx-red (gdem(e), l) where e represents non-local constraints and l represents local weather. The approaches studied to developing these models are divided into three categories: (1) Point estimation models (2) Empirical models (3) Theoretical models. Errors in predictions of these different types of models have been estimated. In situations when there is abundant data, point estimation models tend to be very accurate. In contrast, empirical models do better than theoretical models when there is some data available. The biggest benefit of theoretical models is their general applicability in wider range situations once the degree of accuracy of these has been established.
Models of Sector Aircraft Counts in the Presence of Local, Regional and Airport Constraints
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak
2017-01-01
Recently, the ATM community has made important progress in collaborative trajectory management through the introduction of a new FAA traffic management initiative called a Collaborative Trajectory Options Program (CTOP). FAA can use CTOPs to manage air traffic under multiple constraints (manifested as flow constrained areas or FCAs) in the system, and it allows flight operators to indicate their preferences for routing and delay options. CTOPs also permits better management of the overall trajectory of flights by considering both routing and departure delay options simultaneously. However, adoption of CTOPs in airspace has been hampered by many factors that include challenges in how to identify constrained areas and how to set rates for the FCAs. Decision support tools providing assistance would be particularly helpful in effective use of CTOPs. Such DSTs tools would need models of demand and capacity in the presence of multiple constraints. This study examines different approaches to using historical data to create and validate models of maximum flows in sectors and other airspace regions in the presence of multiple constraints. A challenge in creating an empirical model of flows under multiple constraints is a lack of sufficient historical data that captures diverse situations involving combinations of multiple constraints especially those with severe weather. The approach taken here to deal with this is two-fold. First, we create a generalized sector model encompassing multiple sectors rather than individual sectors in order to increase the amount of data used for creating the model by an order of magnitude. Secondly, we decompose the problem so that the amount of data needed is reduced. This involves creating a baseline demand model plus a separate weather constrained flow reduction model and then composing these into a single integrated model. A nominal demand model is a flow model (gdem) in the presence of clear local weather. This defines the flow as a function of weather constraints in neighboring regions, airport constraints and weather in locations that can cause re-routes to the location of interest. A weather constrained flow reduction model (fwx-red) is a model of reduction in baseline counts as a function of local weather. Because the number of independent variables associated with each of the two decomposed models is smaller than that with a single model, need for amount of data is reduced. Finally, a composite model that combines these two can be represented as fwx-red (gdem(e), l) where e represents non-local constraints and l represents local weather. The approaches studied to developing these models are divided into three categories: (1) Point estimation models (2) Empirical models (3) Theoretical models. Errors in predictions of these different types of models have been estimated. In situations when there is abundant data, point estimation models tend to be very accurate. In contrast, empirical models do better than theoretical models when there is some data available. The biggest benefit of theoretical models is their general applicability in wider range situations once the degree of accuracy of these has been established.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, Warren
2014-07-03
As an element of NACP research, the proposed investigation is a two pronged approach that derives and evaluates a regional carbon (C) budget for Oregon, Washington, and California. Objectives are (1) Use multiple data sources, including AmeriFlux data, inventories, and multispectral remote sensing data to investigate trends in carbon storage and exchanges of CO2 and water with variation in climate and disturbance history; (2) Develop and apply regional modeling that relies on these multiple data sources to reduce uncertainty in spatial estimates of carbon storage and NEP, and relative contributions of terrestrial ecosystems and anthropogenic emissions to atmospheric CO2 inmore » the region; (3) Model terrestrial carbon processes across the region, using the Biome-BGC terrestrial ecosystem model, and an atmospheric inverse modeling approach to estimate variation in rate and timing of terrestrial uptake and feedbacks to the atmosphere in response to climate and disturbance.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beverly E. Law
2011-10-05
As an element of NACP research, the proposed investigation is a two pronged approach that derives and evaluates a regional carbon (C) budget for Oregon, Washington, and California. Objectives are (1) Use multiple data sources, including AmeriFlux data, inventories, and multispectral remote sensing data to investigate trends in carbon storage and exchanges of CO2 and water with variation in climate and disturbance history; (2) Develop and apply regional modeling that relies on these multiple data sources to reduce uncertainty in spatial estimates of carbon storage and NEP, and relative contributions of terrestrial ecosystems and anthropogenic emissions to atmospheric CO2 inmore » the region; (3) Model terrestrial carbon processes across the region, using the Biome-BGC terrestrial ecosystem model, and an atmospheric inverse modeling approach to estimate variation in rate and timing of terrestrial uptake and feedbacks to the atmosphere in response to climate and disturbance.« less
A Kernel Embedding-Based Approach for Nonstationary Causal Model Inference.
Hu, Shoubo; Chen, Zhitang; Chan, Laiwan
2018-05-01
Although nonstationary data are more common in the real world, most existing causal discovery methods do not take nonstationarity into consideration. In this letter, we propose a kernel embedding-based approach, ENCI, for nonstationary causal model inference where data are collected from multiple domains with varying distributions. In ENCI, we transform the complicated relation of a cause-effect pair into a linear model of variables of which observations correspond to the kernel embeddings of the cause-and-effect distributions in different domains. In this way, we are able to estimate the causal direction by exploiting the causal asymmetry of the transformed linear model. Furthermore, we extend ENCI to causal graph discovery for multiple variables by transforming the relations among them into a linear nongaussian acyclic model. We show that by exploiting the nonstationarity of distributions, both cause-effect pairs and two kinds of causal graphs are identifiable under mild conditions. Experiments on synthetic and real-world data are conducted to justify the efficacy of ENCI over major existing methods.
Genetic Programming Transforms in Linear Regression Situations
NASA Astrophysics Data System (ADS)
Castillo, Flor; Kordon, Arthur; Villa, Carlos
The chapter summarizes the use of Genetic Programming (GP) inMultiple Linear Regression (MLR) to address multicollinearity and Lack of Fit (LOF). The basis of the proposed method is applying appropriate input transforms (model respecification) that deal with these issues while preserving the information content of the original variables. The transforms are selected from symbolic regression models with optimal trade-off between accuracy of prediction and expressional complexity, generated by multiobjective Pareto-front GP. The chapter includes a comparative study of the GP-generated transforms with Ridge Regression, a variant of ordinary Multiple Linear Regression, which has been a useful and commonly employed approach for reducing multicollinearity. The advantages of GP-generated model respecification are clearly defined and demonstrated. Some recommendations for transforms selection are given as well. The application benefits of the proposed approach are illustrated with a real industrial application in one of the broadest empirical modeling areas in manufacturing - robust inferential sensors. The chapter contributes to increasing the awareness of the potential of GP in statistical model building by MLR.
NASA Astrophysics Data System (ADS)
Yamamoto, Takahiro; Nadaoka, Kazuo
2018-04-01
Atmospheric, watershed and coastal ocean models were integrated to provide a holistic analysis approach for coastal ocean simulation. The coupled model was applied to coastal ocean in the Philippines where terrestrial sediment loads provided from several adjacent watersheds play a major role in influencing coastal turbidity and are partly responsible for the coastal ecosystem degradation. The coupled model was validated using weather and hydrologic measurement to examine its potential applicability. The results revealed that the coastal water quality may be governed by the loads not only from the adjacent watershed but also from the distant watershed via coastal currents. This important feature of the multiple linkages can be quantitatively characterized by a "stress connectivity matrix", which indicates the complex underlying structure of environmental stresses in coastal ocean. The multiple stress connectivity concept shows the potential advantage of the integrated modelling approach for coastal ocean assessment, which may also serve for compensating the lack of measured data especially in tropical basins.
Fault Detection for Automotive Shock Absorber
NASA Astrophysics Data System (ADS)
Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis
2015-11-01
Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of the disturbances such as the road profile. First obstacle for this task, is the modeling of the fault, which has been shown to be of multiplicative nature. Many of the most widespread fault detection schemes consider additive faults. Two model-based fault algorithms for semiactive shock absorber are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results shows that a parameter identification approach is more accurate, whereas an observer-based approach is less sensible to parametric uncertainty.
Hill, Kristine; Porco, Silvana; Lobet, Guillaume; Zappala, Susan; Mooney, Sacha; Draye, Xavier; Bennett, Malcolm J.
2013-01-01
Genetic and genomic approaches in model organisms have advanced our understanding of root biology over the last decade. Recently, however, systems biology and modeling have emerged as important approaches, as our understanding of root regulatory pathways has become more complex and interpreting pathway outputs has become less intuitive. To relate root genotype to phenotype, we must move beyond the examination of interactions at the genetic network scale and employ multiscale modeling approaches to predict emergent properties at the tissue, organ, organism, and rhizosphere scales. Understanding the underlying biological mechanisms and the complex interplay between systems at these different scales requires an integrative approach. Here, we describe examples of such approaches and discuss the merits of developing models to span multiple scales, from network to population levels, and to address dynamic interactions between plants and their environment. PMID:24143806
Agent-based modeling: a new approach for theory building in social psychology.
Smith, Eliot R; Conrey, Frederica R
2007-02-01
Most social and psychological phenomena occur not as the result of isolated decisions by individuals but rather as the result of repeated interactions between multiple individuals over time. Yet the theory-building and modeling techniques most commonly used in social psychology are less than ideal for understanding such dynamic and interactive processes. This article describes an alternative approach to theory building, agent-based modeling (ABM), which involves simulation of large numbers of autonomous agents that interact with each other and with a simulated environment and the observation of emergent patterns from their interactions. The authors believe that the ABM approach is better able than prevailing approaches in the field, variable-based modeling (VBM) techniques such as causal modeling, to capture types of complex, dynamic, interactive processes so important in the social world. The article elaborates several important contrasts between ABM and VBM and offers specific recommendations for learning more and applying the ABM approach.
Wen, Shihua; Zhang, Lanju; Yang, Bo
2014-07-01
The Problem formulation, Objectives, Alternatives, Consequences, Trade-offs, Uncertainties, Risk attitude, and Linked decisions (PrOACT-URL) framework and multiple criteria decision analysis (MCDA) have been recommended by the European Medicines Agency for structured benefit-risk assessment of medicinal products undergoing regulatory review. The objective of this article was to provide solutions to incorporate the uncertainty from clinical data into the MCDA model when evaluating the overall benefit-risk profiles among different treatment options. Two statistical approaches, the δ-method approach and the Monte-Carlo approach, were proposed to construct the confidence interval of the overall benefit-risk score from the MCDA model as well as other probabilistic measures for comparing the benefit-risk profiles between treatment options. Both approaches can incorporate the correlation structure between clinical parameters (criteria) in the MCDA model and are straightforward to implement. The two proposed approaches were applied to a case study to evaluate the benefit-risk profile of an add-on therapy for rheumatoid arthritis (drug X) relative to placebo. It demonstrated a straightforward way to quantify the impact of the uncertainty from clinical data to the benefit-risk assessment and enabled statistical inference on evaluating the overall benefit-risk profiles among different treatment options. The δ-method approach provides a closed form to quantify the variability of the overall benefit-risk score in the MCDA model, whereas the Monte-Carlo approach is more computationally intensive but can yield its true sampling distribution for statistical inference. The obtained confidence intervals and other probabilistic measures from the two approaches enhance the benefit-risk decision making of medicinal products. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Muthen, Bengt
This paper investigates methods that avoid using multiple groups to represent the missing data patterns in covariance structure modeling, attempting instead to do a single-group analysis where the only action the analyst has to take is to indicate that data is missing. A new covariance structure approach developed by B. Muthen and G. Arminger is…
Behavioral Scale Reliability and Measurement Invariance Evaluation Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko
2004-01-01
A latent variable modeling approach to reliability and measurement invariance evaluation for multiple-component measuring instruments is outlined. An initial discussion deals with the limitations of coefficient alpha, a frequently used index of composite reliability. A widely and readily applicable structural modeling framework is next described…
ERIC Educational Resources Information Center
Wholeben, Brent Edward
A number of key issues facing elementary, secondary, and postsecondary educational administrators during retrenchment require a hierarchical decision-modeling approach. This paper identifies and discusses the use of a hierarchical multiple-alternatives modeling formulation (computer-based) that compares and evaluates a group of solution…
Automatically updating predictive modeling workflows support decision-making in drug design.
Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O
2016-09-01
Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.
Learning from Multiple Collaborating Intelligent Tutors: An Agent-based Approach.
ERIC Educational Resources Information Center
Solomos, Konstantinos; Avouris, Nikolaos
1999-01-01
Describes an open distributed multi-agent tutoring system (MATS) and discusses issues related to learning in such open environments. Topics include modeling a one student-many teachers approach in a computer-based learning context; distributed artificial intelligence; implementation issues; collaboration; and user interaction. (Author/LRW)
Ensuring long-term utility of the AOP framework and knowledge for multiple stakeholders
1.Introduction There is a need to increase the development and implementation of predictive approaches to support chemical safety assessment. These predictive approaches feature generation of data from tools such as computational models, pathway-based in vitro assays, and short-t...
Standardised Library Instruction Assessment: An Institution-Specific Approach
ERIC Educational Resources Information Center
Staley, Shannon M.; Branch, Nicole A.; Hewitt, Tom L.
2010-01-01
Introduction: We explore the use of a psychometric model for locally-relevant, information literacy assessment, using an online tool for standardised assessment of student learning during discipline-based library instruction sessions. Method: A quantitative approach to data collection and analysis was used, employing standardised multiple-choice…
NASA Technical Reports Server (NTRS)
DeCarvalho, N. V.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Ratcliffe, J. G.; Tay, T. E.
2013-01-01
A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.
NASA Technical Reports Server (NTRS)
DeCarvalho, Nelson V.; Chen, B. Y.; Pinho, Silvestre T.; Baiz, P. M.; Ratcliffe, James G.; Tay, T. E.
2013-01-01
A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.
Empirical Bayes Approaches to Multivariate Fuzzy Partitions.
ERIC Educational Resources Information Center
Woodbury, Max A.; Manton, Kenneth G.
1991-01-01
An empirical Bayes-maximum likelihood estimation procedure is presented for the application of fuzzy partition models in describing high dimensional discrete response data. The model describes individuals in terms of partial membership in multiple latent categories that represent bounded discrete spaces. (SLD)
Multiple electron processes of He and Ne by proton impact
NASA Astrophysics Data System (ADS)
Terekhin, Pavel Nikolaevich; Montenegro, Pablo; Quinto, Michele; Monti, Juan; Fojon, Omar; Rivarola, Roberto
2016-05-01
A detailed investigation of multiple electron processes (single and multiple ionization, single capture, transfer-ionization) of He and Ne is presented for proton impact at intermediate and high collision energies. Exclusive absolute cross sections for these processes have been obtained by calculation of transition probabilities in the independent electron and independent event models as a function of impact parameter in the framework of the continuum distorted wave-eikonal initial state theory. A binomial analysis is employed to calculate exclusive probabilities. The comparison with available theoretical and experimental results shows that exclusive probabilities are needed for a reliable description of the experimental data. The developed approach can be used for obtaining the input database for modeling multiple electron processes of charged particles passing through the matter.
Multiple re-encounter approach to radical pair reactions and the role of nonlinear master equations.
Clausen, Jens; Guerreschi, Gian Giacomo; Tiersch, Markus; Briegel, Hans J
2014-08-07
We formulate a multiple-encounter model of the radical pair mechanism that is based on a random coupling of the radical pair to a minimal model environment. These occasional pulse-like couplings correspond to the radical encounters and give rise to both dephasing and recombination. While this is in agreement with the original model of Haberkorn and its extensions that assume additional dephasing, we show how a nonlinear master equation may be constructed to describe the conditional evolution of the radical pairs prior to the detection of their recombination. We propose a nonlinear master equation for the evolution of an ensemble of independently evolving radical pairs whose nonlinearity depends on the record of the fluorescence signal. We also reformulate Haberkorn's original argument on the physicality of reaction operators using the terminology of quantum optics/open quantum systems. Our model allows one to describe multiple encounters within the exponential model and connects this with the master equation approach. We include hitherto neglected effects of the encounters, such as a separate dephasing in the triplet subspace, and predict potential new effects, such as Grover reflections of radical spins, that may be observed if the strength and time of the encounters can be experimentally controlled.
Modelling consequences of change in biodiversity and ...
This chapter offers an assessment of the rapidly changing landscape of methods assessing and forecasting the benefits that people receive from nature and how these benefits are shaped by institutions and various anthropogenic assets. There has been an explosion of activity in understanding and modeling the benefits that people receive from nature, and this explosion has provided a diversity of approaches that are both complementary and contradictory. However, there remain major gaps in what current models can do. They are not well suited to estimate most types of benefits at national, regional, or global scales. they are focused on decision analysis, but have not focused on implementation, learning, or dialogue. This hap in particular means that current models are not well suited to bridging among multiple knowledge systems, however, there are initial efforts made towards this goal. Furthermore, while participatory social-ecological scenarios are able to bridge multiple knowledge systems in their assessment and analysis of multiple ecosystem series, the social-ecological scenarios community is fragmented and not well connected. Consequently, IPBES has an excellent knowledge base to build upon, but a real investment in building a more integrated modeling and scenarios community of practice is needed to produce a more complete and useful toolbox of approaches to meet the needs of IPBES assessment and other assessment of nature benefits. This Chapter describes
Multiple criteria decision analysis for health technology assessment.
Thokala, Praveen; Duenas, Alejandra
2012-12-01
Multicriteria decision analysis (MCDA) has been suggested by some researchers as a method to capture the benefits beyond quality adjusted life-years in a transparent and consistent manner. The objectives of this article were to analyze the possible application of MCDA approaches in health technology assessment and to describe their relative advantages and disadvantages. This article begins with an introduction to the most common types of MCDA models and a critical review of state-of-the-art methods for incorporating multiple criteria in health technology assessment. An overview of MCDA is provided and is compared against the current UK National Institute for Health and Clinical Excellence health technology appraisal process. A generic MCDA modeling approach is described, and the different MCDA modeling approaches are applied to a hypothetical case study. A comparison of the different MCDA approaches is provided, and the generic issues that need consideration before the application of MCDA in health technology assessment are examined. There are general practical issues that might arise from using an MCDA approach, and it is suggested that appropriate care be taken to ensure the success of MCDA techniques in the appraisal process. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Naderi, D.; Pahlavani, M. R.; Alavi, S. A.
2013-05-01
Using the Langevin dynamical approach, the neutron multiplicity and the anisotropy of angular distribution of fission fragments in heavy ion fusion-fission reactions were calculated. We applied one- and two-dimensional Langevin equations to study the decay of a hot excited compound nucleus. The influence of the level-density parameter on neutron multiplicity and anisotropy of angular distribution of fission fragments was investigated. We used the level-density parameter based on the liquid drop model with two different values of the Bartel approach and Pomorska approach. Our calculations show that the anisotropy and neutron multiplicity are affected by level-density parameter and neck thickness. The calculations were performed on the 16O+208Pb and 20Ne+209Bi reactions. Obtained results in the case of the two-dimensional Langevin with a level-density parameter based on Bartel and co-workers approach are in better agreement with experimental data.
Genser, Bernd; Fischer, Joachim E; Figueiredo, Camila A; Alcântara-Neves, Neuza; Barreto, Mauricio L; Cooper, Philip J; Amorim, Leila D; Saemann, Marcus D; Weichhart, Thomas; Rodrigues, Laura C
2016-05-20
Immunologists often measure several correlated immunological markers, such as concentrations of different cytokines produced by different immune cells and/or measured under different conditions, to draw insights from complex immunological mechanisms. Although there have been recent methodological efforts to improve the statistical analysis of immunological data, a framework is still needed for the simultaneous analysis of multiple, often correlated, immune markers. This framework would allow the immunologists' hypotheses about the underlying biological mechanisms to be integrated. We present an analytical approach for statistical analysis of correlated immune markers, such as those commonly collected in modern immuno-epidemiological studies. We demonstrate i) how to deal with interdependencies among multiple measurements of the same immune marker, ii) how to analyse association patterns among different markers, iii) how to aggregate different measures and/or markers to immunological summary scores, iv) how to model the inter-relationships among these scores, and v) how to use these scores in epidemiological association analyses. We illustrate the application of our approach to multiple cytokine measurements from 818 children enrolled in a large immuno-epidemiological study (SCAALA Salvador), which aimed to quantify the major immunological mechanisms underlying atopic diseases or asthma. We demonstrate how to aggregate systematically the information captured in multiple cytokine measurements to immunological summary scores aimed at reflecting the presumed underlying immunological mechanisms (Th1/Th2 balance and immune regulatory network). We show how these aggregated immune scores can be used as predictors in regression models with outcomes of immunological studies (e.g. specific IgE) and compare the results to those obtained by a traditional multivariate regression approach. The proposed analytical approach may be especially useful to quantify complex immune responses in immuno-epidemiological studies, where investigators examine the relationship among epidemiological patterns, immune response, and disease outcomes.
Robust biological parametric mapping: an improved technique for multimodal brain image analysis
NASA Astrophysics Data System (ADS)
Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.
2011-03-01
Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
Gabbe, Belinda J.; Harrison, James E.; Lyons, Ronan A.; Jolley, Damien
2011-01-01
Background Injury is a leading cause of the global burden of disease (GBD). Estimates of non-fatal injury burden have been limited by a paucity of empirical outcomes data. This study aimed to (i) establish the 12-month disability associated with each GBD 2010 injury health state, and (ii) compare approaches to modelling the impact of multiple injury health states on disability as measured by the Glasgow Outcome Scale – Extended (GOS-E). Methods 12-month functional outcomes for 11,337 survivors to hospital discharge were drawn from the Victorian State Trauma Registry and the Victorian Orthopaedic Trauma Outcomes Registry. ICD-10 diagnosis codes were mapped to the GBD 2010 injury health states. Cases with a GOS-E score >6 were defined as “recovered.” A split dataset approach was used. Cases were randomly assigned to development or test datasets. Probability of recovery for each health state was calculated using the development dataset. Three logistic regression models were evaluated: a) additive, multivariable; b) “worst injury;” and c) multiplicative. Models were adjusted for age and comorbidity and investigated for discrimination and calibration. Findings A single injury health state was recorded for 46% of cases (1–16 health states per case). The additive (C-statistic 0.70, 95% CI: 0.69, 0.71) and “worst injury” (C-statistic 0.70; 95% CI: 0.68, 0.71) models demonstrated higher discrimination than the multiplicative (C-statistic 0.68; 95% CI: 0.67, 0.70) model. The additive and “worst injury” models demonstrated acceptable calibration. Conclusions The majority of patients survived with persisting disability at 12-months, highlighting the importance of improving estimates of non-fatal injury burden. Additive and “worst” injury models performed similarly. GBD 2010 injury states were moderately predictive of recovery 1-year post-injury. Further evaluation using additional measures of health status and functioning and comparison with the GBD 2010 disability weights will be needed to optimise injury states for future GBD studies. PMID:21984951
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu
2014-05-07
Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple setsmore » is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t{sub c}) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation for a material to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in the material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.« less
Pulley, S; Collins, A L
2018-09-01
The mitigation of diffuse sediment pollution requires reliable provenance information so that measures can be targeted. Sediment source fingerprinting represents one approach for supporting these needs, but recent methodological developments have resulted in an increasing complexity of data processing methods rendering the approach less accessible to non-specialists. A comprehensive new software programme (SIFT; SedIment Fingerprinting Tool) has therefore been developed which guides the user through critical data analysis decisions and automates all calculations. Multiple source group configurations and composite fingerprints are identified and tested using multiple methods of uncertainty analysis. This aims to explore the sediment provenance information provided by the tracers more comprehensively than a single model, and allows for model configurations with high uncertainties to be rejected. This paper provides an overview of its application to an agricultural catchment in the UK to determine if the approach used can provide a reduction in uncertainty and increase in precision. Five source group classifications were used; three formed using a k-means cluster analysis containing 2, 3 and 4 clusters, and two a-priori groups based upon catchment geology. Three different composite fingerprints were used for each classification and bi-plots, range tests, tracer variability ratios and virtual mixtures tested the reliability of each model configuration. Some model configurations performed poorly when apportioning the composition of virtual mixtures, and different model configurations could produce different sediment provenance results despite using composite fingerprints able to discriminate robustly between the source groups. Despite this uncertainty, dominant sediment sources were identified, and those in close proximity to each sediment sampling location were found to be of greatest importance. This new software, by integrating recent methodological developments in tracer data processing, guides users through key steps. Critically, by applying multiple model configurations and uncertainty assessment, it delivers more robust solutions for informing catchment management of the sediment problem than many previously used approaches. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Estimating and Visualizing Nonlinear Relations among Latent Variables: A Semiparametric Approach
ERIC Educational Resources Information Center
Pek, Jolynn; Sterba, Sonya K.; Kok, Bethany E.; Bauer, Daniel J.
2009-01-01
The graphical presentation of any scientific finding enhances its description, interpretation, and evaluation. Research involving latent variables is no exception, especially when potential nonlinear effects are suspect. This article has multiple aims. First, it provides a nontechnical overview of a semiparametric approach to modeling nonlinear…
ERIC Educational Resources Information Center
Nelson, Tenneisha; Squires, Vicki
2017-01-01
Organizations are faced with solving increasingly complex problems. Addressing these issues requires effective leadership that can facilitate a collaborative problem solving approach where multiple perspectives are leveraged. In this conceptual paper, we critique the effectiveness of earlier leadership models in tackling complex organizational…
A Comparison of Two Mathematics Problem-Solving Strategies: Facilitate Algebra-Readiness
ERIC Educational Resources Information Center
Xin, Yan Ping; Zhang, Dake; Park, Joo Young; Tom, Kinsey; Whipple, Amanda; Si, Luo
2011-01-01
The authors compared a conceptual model-based problem-solving (COMPS) approach with a general heuristic instructional approach for teaching multiplication-division word-problem solving to elementary students with learning problems (LP). The results indicate that only the COMPS group significantly improved, from pretests to posttests, their…
ERIC Educational Resources Information Center
Luiselli, James K.; Luiselli, Tracy Evans
1995-01-01
This report describes a behavior analysis treatment approach to establishing oral feeding in children with multiple developmental disabilities and gastrostomy-tube dependency. Pretreatment screening, functional assessment, and treatment are reported as implemented within a behavioral consultation model. A case study illustrates the sequence and…
ERIC Educational Resources Information Center
Jang, Eunice Eunhee; Lajoie, Susanne P.; Wagner, Maryam; Xu, Zhenhua; Poitras, Eric; Naismith, Laura
2017-01-01
Technology-rich learning environments (TREs) provide opportunities for learners to engage in complex interactions involving a multitude of cognitive, metacognitive, and affective states. Understanding learners' distinct learning progressions in TREs demand inquiry approaches that employ well-conceived theoretical accounts of these multiple facets.…
Park, Eun Sug; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford
2015-06-01
A major difficulty with assessing source-specific health effects is that source-specific exposures cannot be measured directly; rather, they need to be estimated by a source-apportionment method such as multivariate receptor modeling. The uncertainty in source apportionment (uncertainty in source-specific exposure estimates and model uncertainty due to the unknown number of sources and identifiability conditions) has been largely ignored in previous studies. Also, spatial dependence of multipollutant data collected from multiple monitoring sites has not yet been incorporated into multivariate receptor modeling. The objectives of this project are (1) to develop a multipollutant approach that incorporates both sources of uncertainty in source-apportionment into the assessment of source-specific health effects and (2) to develop enhanced multivariate receptor models that can account for spatial correlations in the multipollutant data collected from multiple sites. We employed a Bayesian hierarchical modeling framework consisting of multivariate receptor models, health-effects models, and a hierarchical model on latent source contributions. For the health model, we focused on the time-series design in this project. Each combination of number of sources and identifiability conditions (additional constraints on model parameters) defines a different model. We built a set of plausible models with extensive exploratory data analyses and with information from previous studies, and then computed posterior model probability to estimate model uncertainty. Parameter estimation and model uncertainty estimation were implemented simultaneously by Markov chain Monte Carlo (MCMC*) methods. We validated the methods using simulated data. We illustrated the methods using PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter) speciation data and mortality data from Phoenix, Arizona, and Houston, Texas. The Phoenix data included counts of cardiovascular deaths and daily PM2.5 speciation data from 1995-1997. The Houston data included respiratory mortality data and 24-hour PM2.5 speciation data sampled every six days from a region near the Houston Ship Channel in years 2002-2005. We also developed a Bayesian spatial multivariate receptor modeling approach that, while simultaneously dealing with the unknown number of sources and identifiability conditions, incorporated spatial correlations in the multipollutant data collected from multiple sites into the estimation of source profiles and contributions based on the discrete process convolution model for multivariate spatial processes. This new modeling approach was applied to 24-hour ambient air concentrations of 17 volatile organic compounds (VOCs) measured at nine monitoring sites in Harris County, Texas, during years 2000 to 2005. Simulation results indicated that our methods were accurate in identifying the true model and estimated parameters were close to the true values. The results from our methods agreed in general with previous studies on the source apportionment of the Phoenix data in terms of estimated source profiles and contributions. However, we had a greater number of statistically insignificant findings, which was likely a natural consequence of incorporating uncertainty in the estimated source contributions into the health-effects parameter estimation. For the Houston data, a model with five sources (that seemed to be Sulfate-Rich Secondary Aerosol, Motor Vehicles, Industrial Combustion, Soil/Crustal Matter, and Sea Salt) showed the highest posterior model probability among the candidate models considered when fitted simultaneously to the PM2.5 and mortality data. There was a statistically significant positive association between respiratory mortality and same-day PM2.5 concentrations attributed to one of the sources (probably industrial combustion). The Bayesian spatial multivariate receptor modeling approach applied to the VOC data led to a highest posterior model probability for a model with five sources (that seemed to be refinery, petrochemical production, gasoline evaporation, natural gas, and vehicular exhaust) among several candidate models, with the number of sources varying between three and seven and with different identifiability conditions. Our multipollutant approach assessing source-specific health effects is more advantageous than a single-pollutant approach in that it can estimate total health effects from multiple pollutants and can also identify emission sources that are responsible for adverse health effects. Our Bayesian approach can incorporate not only uncertainty in the estimated source contributions, but also model uncertainty that has not been addressed in previous studies on assessing source-specific health effects. The new Bayesian spatial multivariate receptor modeling approach enables predictions of source contributions at unmonitored sites, minimizing exposure misclassification and providing improved exposure estimates along with their uncertainty estimates, as well as accounting for uncertainty in the number of sources and identifiability conditions.
ERIC Educational Resources Information Center
McKay, Mary M.; Gopalan, Geetha; Franco, Lydia; Dean-Assael, Kara; Chacko, Anil; Jackson, Jerrold M.; Fuss, Ashley
2011-01-01
This article presents preliminary outcomes associated with an experimental, longitudinal study of a Multiple Family Group (MFG) service delivery approach set within 13 urban outpatient clinics serving children and their families living in inner-city, primarily African American and Latino communities. Specifically, this article focuses on parent…
ERIC Educational Resources Information Center
Goodwin, Amanda P.; August, Diane; Calderon, Margarita
2015-01-01
The current study unites multiple theories (i.e., the orthographic depth hypothesis and linguistic grain size theory, the simple view of reading, and the common underlying proficiency model) to explore differences in how 113 fourth-grade Spanish-speaking English learners (ELs) approached reading in their native language of Spanish, which is…
Multiple-scale prediction of forest loss risk across Borneo
Samuel A. Cushman; Ewan A. Macdonald; Erin L. Landguth; Yadvinder Malhi; David W. Macdonald
2017-01-01
Context: The forests of Borneo have among the highest biodiversity and also the highest forest loss rates on the planet. Objectives: Our objectives were to: (1) compare multiple modelling approaches, (2) evaluate the utility of landscape composition and configuration as predictors, (3) assess the influence of the ratio of forest loss and persistence points in the...
Modelling multi-hazard hurricane damages on an urbanized coast with a Bayesian Network approach
van Verseveld, H.C.W.; Van Dongeren, A. R.; Plant, Nathaniel G.; Jäger, W.S.; den Heijer, C.
2015-01-01
Hurricane flood impacts to residential buildings in coastal zones are caused by a number of hazards, such as inundation, overflow currents, erosion, and wave attack. However, traditional hurricane damage models typically make use of stage-damage functions, where the stage is related to flooding depth only. Moreover, these models are deterministic and do not consider the large amount of uncertainty associated with both the processes themselves and with the predictions. This uncertainty becomes increasingly important when multiple hazards (flooding, wave attack, erosion, etc.) are considered simultaneously. This paper focusses on establishing relationships between observed damage and multiple hazard indicators in order to make better probabilistic predictions. The concept consists of (1) determining Local Hazard Indicators (LHIs) from a hindcasted storm with use of a nearshore morphodynamic model, XBeach, and (2) coupling these LHIs and building characteristics to the observed damages. We chose a Bayesian Network approach in order to make this coupling and used the LHIs ‘Inundation depth’, ‘Flow velocity’, ‘Wave attack’, and ‘Scour depth’ to represent flooding, current, wave impacts, and erosion related hazards.The coupled hazard model was tested against four thousand damage observations from a case site at the Rockaway Peninsula, NY, that was impacted by Hurricane Sandy in late October, 2012. The model was able to accurately distinguish ‘Minor damage’ from all other outcomes 95% of the time and could distinguish areas that were affected by the storm, but not severely damaged, 68% of the time. For the most heavily damaged buildings (‘Major Damage’ and ‘Destroyed’), projections of the expected damage underestimated the observed damage. The model demonstrated that including multiple hazards doubled the prediction skill, with Log-Likelihood Ratio test (a measure of improved accuracy and reduction in uncertainty) scores between 0.02 and 0.17 when only one hazard is considered and a score of 0.37 when multiple hazards are considered simultaneously. The LHIs with the most predictive skill were ‘Inundation depth’ and ‘Wave attack’. The Bayesian Network approach has several advantages over the market-standard stage-damage functions: the predictive capacity of multiple indicators can be combined; probabilistic predictions can be obtained, which include uncertainty; and quantitative as well as descriptive information can be used simultaneously.
NASA Astrophysics Data System (ADS)
Iwamoto, Masami; Miki, Kazuo; Yang, King H.
Previous studies in both fields of automotive safety and orthopedic surgery have hypothesized that immobilization of the shoulder caused by the shoulder injury could be related to multiple rib fractures, which are frequently life threatening. Therefore, for more effective occupant protection, it is important to understand the relationship between shoulder injury and multiple rib fractures in side impact. The purpose of this study is to develop a finite element model of the human shoulder in order to understand this relationship. The shoulder model included three bones (the humerus, scapula and clavicle) and major ligaments and muscles around the shoulder. The model also included approaches to represent bone fractures and joint dislocations. The relationships between shoulder injury and immobilization of the shoulder are discussed using model responses for lateral shoulder impact. It is also discussed how the injury can be related to multiple rib fractures.
Monitoring and Modeling Performance of Communications in Computational Grids
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Le, Thuy T.
2003-01-01
Computational grids may include many machines located in a number of sites. For efficient use of the grid we need to have an ability to estimate the time it takes to communicate data between the machines. For dynamic distributed grids it is unrealistic to know exact parameters of the communication hardware and the current communication traffic and we should rely on a model of the network performance to estimate the message delivery time. Our approach to a construction of such a model is based on observation of the messages delivery time with various message sizes and time scales. We record these observations in a database and use them to build a model of the message delivery time. Our experiments show presence of multiple bands in the logarithm of the message delivery times. These multiple bands represent multiple paths messages travel between the grid machines and are incorporated in our multiband model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.
Here, we implement a variance-based distance metric (D n) to objectively assess skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observations and model data pairs on common spatial and temporal grids improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The D n metric is a gamma-distributed statistic that is more general than the χ 2 statistic commonly used to assess model fit, which requires the assumption that the model is unbiased andmore » can only incorporate observational error in the analysis. The D n statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step to establish a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the D n metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE) encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.« less
Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.; ...
2017-04-01
Here, we implement a variance-based distance metric (D n) to objectively assess skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observations and model data pairs on common spatial and temporal grids improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The D n metric is a gamma-distributed statistic that is more general than the χ 2 statistic commonly used to assess model fit, which requires the assumption that the model is unbiased andmore » can only incorporate observational error in the analysis. The D n statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step to establish a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the D n metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE) encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.« less
Fining of Red Wine Monitored by Multiple Light Scattering.
Ferrentino, Giovanna; Ramezani, Mohsen; Morozova, Ksenia; Hafner, Daniela; Pedri, Ulrich; Pixner, Konrad; Scampicchio, Matteo
2017-07-12
This work describes a new approach based on multiple light scattering to study red wine clarification processes. The whole spectral signal (1933 backscattering points along the length of each sample vial) were fitted by a multivariate kinetic model that was built with a three-step mechanism, implying (1) adsorption of wine colloids to fining agents, (2) aggregation into larger particles, and (3) sedimentation. Each step is characterized by a reaction rate constant. According to the first reaction, the results showed that gelatin was the most efficient fining agent, concerning the main objective, which was the clarification of the wine, and consequently the increase in its limpidity. Such a trend was also discussed in relation to the results achieved by nephelometry, total phenols, ζ-potential, color, sensory, and electronic nose analyses. Also, higher concentrations of the fining agent (from 5 to 30 g/100 L) or higher temperatures (from 10 to 20 °C) sped up the process. Finally, the advantage of using the whole spectral signal vs classical univariate approaches was demonstrated by comparing the uncertainty associated with the rate constants of the proposed kinetic model. Overall, multiple light scattering technique showed a great potential for studying fining processes compared to classical univariate approaches.
Richardson, Miles
2017-04-01
In ergonomics there is often a need to identify and predict the separate effects of multiple factors on performance. A cost-effective fractional factorial approach to understanding the relationship between task characteristics and task performance is presented. The method has been shown to provide sufficient independent variability to reveal and predict the effects of task characteristics on performance in two domains. The five steps outlined are: selection of performance measure, task characteristic identification, task design for user trials, data collection, regression model development and task characteristic analysis. The approach can be used for furthering knowledge of task performance, theoretical understanding, experimental control and prediction of task performance. Practitioner Summary: A cost-effective method to identify and predict the separate effects of multiple factors on performance is presented. The five steps allow a better understanding of task factors during the design process.
The SAGE Model of Social Psychological Research
Power, Séamus A.; Velez, Gabriel; Qadafi, Ahmad; Tennant, Joseph
2018-01-01
We propose a SAGE model for social psychological research. Encapsulated in our acronym is a proposal to have a synthetic approach to social psychological research, in which qualitative methods are augmentative to quantitative ones, qualitative methods can be generative of new experimental hypotheses, and qualitative methods can capture experiences that evade experimental reductionism. We remind social psychological researchers that psychology was founded in multiple methods of investigation at multiple levels of analysis. We discuss historical examples and our own research as contemporary examples of how a SAGE model can operate in part or as an integrated whole. The implications of our model are discussed. PMID:29361241
Soranno, Patricia A.; Cheruvelil, Kendra Spence; Webster, Katherine E.; Bremigan, Mary T.; Wagner, Tyler; Stow, Craig A.
2010-01-01
Governmental entities are responsible for managing and conserving large numbers of lake, river, and wetland ecosystems that can be addressed only rarely on a case-by-case basis. We present a system for predictive classification modeling, grounded in the theoretical foundation of landscape limnology, that creates a tractable number of ecosystem classes to which management actions may be tailored. We demonstrate our system by applying two types of predictive classification modeling approaches to develop nutrient criteria for eutrophication management in 1998 north temperate lakes. Our predictive classification system promotes the effective management of multiple ecosystems across broad geographic scales by explicitly connecting management and conservation goals to the classification modeling approach, considering multiple spatial scales as drivers of ecosystem dynamics, and acknowledging the hierarchical structure of freshwater ecosystems. Such a system is critical for adaptive management of complex mosaics of freshwater ecosystems and for balancing competing needs for ecosystem services in a changing world.
Multiple sparse volumetric priors for distributed EEG source reconstruction.
Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan
2014-10-15
We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region growing approach. This extension allows to reconstruct brain structures besides the cortical surface and facilitates the use of more realistic volumetric head models including more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrated the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter for each of the subjects, cortical regions were created and introduced as source priors for MSP-inversion assuming two types of head models. The standard 3-layered scalp-skull-brain head models and extended 4-layered head models including CSF. We compared these models with the current implementation by assessing the free energy corresponding with each of the reconstructions using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation as it allows to introduce more complex head models and volumetric source priors in future studies. Copyright © 2014 Elsevier Inc. All rights reserved.
A Two-Dimensional Helmholtz Equation Solution for the Multiple Cavity Scattering Problem
2013-02-01
obtained by using the block Gauss – Seidel iterative meth- od. To show the convergence of the iterative method, we define the error between two...models to the general multiple cavity setting. Numerical examples indicate that the convergence of the Gauss – Seidel iterative method depends on the...variational approach. A block Gauss – Seidel iterative method is introduced to solve the cou- pled system of the multiple cavity scattering problem, where
Hyun, Seung Won; Wong, Weng Kee
2016-01-01
We construct an optimal design to simultaneously estimate three common interesting features in a dose-finding trial with possibly different emphasis on each feature. These features are (1) the shape of the dose-response curve, (2) the median effective dose and (3) the minimum effective dose level. A main difficulty of this task is that an optimal design for a single objective may not perform well for other objectives. There are optimal designs for dual objectives in the literature but we were unable to find optimal designs for 3 or more objectives to date with a concrete application. A reason for this is that the approach for finding a dual-objective optimal design does not work well for a 3 or more multiple-objective design problem. We propose a method for finding multiple-objective optimal designs that estimate the three features with user-specified higher efficiencies for the more important objectives. We use the flexible 4-parameter logistic model to illustrate the methodology but our approach is applicable to find multiple-objective optimal designs for other types of objectives and models. We also investigate robustness properties of multiple-objective optimal designs to mis-specification in the nominal parameter values and to a variation in the optimality criterion. We also provide computer code for generating tailor made multiple-objective optimal designs. PMID:26565557
Hyun, Seung Won; Wong, Weng Kee
2015-11-01
We construct an optimal design to simultaneously estimate three common interesting features in a dose-finding trial with possibly different emphasis on each feature. These features are (1) the shape of the dose-response curve, (2) the median effective dose and (3) the minimum effective dose level. A main difficulty of this task is that an optimal design for a single objective may not perform well for other objectives. There are optimal designs for dual objectives in the literature but we were unable to find optimal designs for 3 or more objectives to date with a concrete application. A reason for this is that the approach for finding a dual-objective optimal design does not work well for a 3 or more multiple-objective design problem. We propose a method for finding multiple-objective optimal designs that estimate the three features with user-specified higher efficiencies for the more important objectives. We use the flexible 4-parameter logistic model to illustrate the methodology but our approach is applicable to find multiple-objective optimal designs for other types of objectives and models. We also investigate robustness properties of multiple-objective optimal designs to mis-specification in the nominal parameter values and to a variation in the optimality criterion. We also provide computer code for generating tailor made multiple-objective optimal designs.
Statistical methods and neural network approaches for classification of data from multiple sources
NASA Technical Reports Server (NTRS)
Benediktsson, Jon Atli; Swain, Philip H.
1990-01-01
Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A problem with using conventional multivariate statistical approaches for classification of data of multiple types is in general that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution free since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem involving how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.
McFarquhar, Martyn; McKie, Shane; Emsley, Richard; Suckling, John; Elliott, Rebecca; Williams, Stephen
2016-05-15
Repeated measurements and multimodal data are common in neuroimaging research. Despite this, conventional approaches to group level analysis ignore these repeated measurements in favour of multiple between-subject models using contrasts of interest. This approach has a number of drawbacks as certain designs and comparisons of interest are either not possible or complex to implement. Unfortunately, even when attempting to analyse group level data within a repeated-measures framework, the methods implemented in popular software packages make potentially unrealistic assumptions about the covariance structure across the brain. In this paper, we describe how this issue can be addressed in a simple and efficient manner using the multivariate form of the familiar general linear model (GLM), as implemented in a new MATLAB toolbox. This multivariate framework is discussed, paying particular attention to methods of inference by permutation. Comparisons with existing approaches and software packages for dependent group-level neuroimaging data are made. We also demonstrate how this method is easily adapted for dependency at the group level when multiple modalities of imaging are collected from the same individuals. Follow-up of these multimodal models using linear discriminant functions (LDA) is also discussed, with applications to future studies wishing to integrate multiple scanning techniques into investigating populations of interest. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Emilie B. Henderson; Janet L. Ohmann; Matthew J. Gregory; Heather M. Roberts; Harold S.J. Zald
2014-01-01
Landscape management and conservation planning require maps of vegetation composition and structure over large regions. Species distribution models (SDMs) are often used for individual species, but projects mapping multiple species are rarer. We compare maps of plant community composition assembled by stacking results from many SDMs with multivariate maps constructed...
Modeling Instruction in AP Physics C: Mechanics and Electricity and Magnetism
ERIC Educational Resources Information Center
Belcher, Nathan Tillman
2017-01-01
This action research study used data from multiple assessments in Mechanics and Electricity and Magnetism to determine the viability of Modeling Instruction as a pedagogy for students in AP Physics C: Mechanics and Electricity and Magnetism. Modeling Instruction is a guided-inquiry approach to teaching science in which students progress through…
ERIC Educational Resources Information Center
Williams, Grant; Clement, John
2015-01-01
This study sought to identify specific types of discussion-based strategies that two successful high school physics teachers using a model-based approach utilized in attempting to foster students' construction of explanatory models for scientific concepts. We found evidence that, in addition to previously documented dialogical strategies that…
Thomas W. Bonnot; Frank R. Thompson; Joshua J. Millspaugh; D. Todd Jones-Farland
2013-01-01
Efforts to conserve regional biodiversity in the face of global climate change, habitat loss and fragmentation will depend on approaches that consider population processes at multiple scales. By combining habitat and demographic modeling, landscape-based population viability models effectively relate small-scale habitat and landscape patterns to regional population...
Modeling species occurrence dynamics with multiple states and imperfect detection
MacKenzie, D.I.; Nichols, J.D.; Seamans, M.E.; Gutierrez, R.J.
2009-01-01
Recent extensions of occupancy modeling have focused not only on the distribution of species over space, but also on additional state variables (e.g., reproducing or not, with or without disease organisms, relative abundance categories) that provide extra information about occupied sites. These biologist-driven extensions are characterized by ambiguity in both species presence and correct state classification, caused by imperfect detection. We first show the relationships between independently published approaches to the modeling of multistate occupancy. We then extend the pattern-based modeling to the case of sampling over multiple seasons or years in order to estimate state transition probabilities associated with system dynamics. The methodology and its potential for addressing relevant ecological questions are demonstrated using both maximum likelihood (occupancy and successful reproduction dynamics of California Spotted Owl) and Markov chain Monte Carlo estimation approaches (changes in relative abundance of green frogs in Maryland). Just as multistate capture-recapture modeling has revolutionized the study of individual marked animals, we believe that multistate occupancy modeling will dramatically increase our ability to address interesting questions about ecological processes underlying population-level dynamics. ?? 2009 by the Ecological Society of America.
Dynamic models for problems of species occurrence with multiple states
MacKenzie, D.I.; Nichols, J.D.; Seamans, M.E.; Gutierrez, R.J.
2009-01-01
Recent extensions of occupancy modeling have focused not only on the distribution of species over space, but also on additional state variables (e.g., reproducing or not, with or without disease organisms, relative abundance categories) that provide extra information about occupied sites. These biologist-driven extensions are characterized by ambiguity in both species presence and correct state classification, caused by imperfect detection. We first show the relationships between independently published approaches to the modeling of multistate occupancy. We then extend the pattern-based modeling to the case of sampling over multiple seasons or years in order to estimate state transition probabilities associated with system dynamics. The methodology and its potential for addressing relevant ecological questions are demonstrated using both maximum likelihood (occupancy and successful reproduction dynamics of California Spotted Owl) and Markov chain Monte Carlo estimation approaches (changes in relative abundance of green frogs in Maryland). Just as multistate capture?recapture modeling has revolutionized the study of individual marked animals, we believe that multistate occupancy modeling will dramatically increase our ability to address interesting questions about ecological processes underlying population-level dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmadi, Rouhollah, E-mail: rouhollahahmadi@yahoo.com; Khamehchi, Ehsan
Conditioning stochastic simulations are very important in many geostatistical applications that call for the introduction of nonlinear and multiple-point data in reservoir modeling. Here, a new methodology is proposed for the incorporation of different data types into multiple-point statistics (MPS) simulation frameworks. Unlike the previous techniques that call for an approximate forward model (filter) for integration of secondary data into geologically constructed models, the proposed approach develops an intermediate space where all the primary and secondary data are easily mapped onto. Definition of the intermediate space, as may be achieved via application of artificial intelligence tools like neural networks andmore » fuzzy inference systems, eliminates the need for using filters as in previous techniques. The applicability of the proposed approach in conditioning MPS simulations to static and geologic data is verified by modeling a real example of discrete fracture networks using conventional well-log data. The training patterns are well reproduced in the realizations, while the model is also consistent with the map of secondary data.« less
NASA Astrophysics Data System (ADS)
Mann, Nishan; Hughes, Stephen
2018-02-01
We present the analytical and numerical details behind our recently published article [Phys. Rev. Lett. 118, 253901 (2017), 10.1103/PhysRevLett.118.253901], describing the impact of disorder-induced multiple scattering on counterpropagating solitons in photonic crystal waveguides. Unlike current nonlinear approaches using the coupled mode formalism, we account for the effects of intraunit cell multiple scattering. To solve the resulting system of coupled semilinear partial differential equations, we introduce a modified Crank-Nicolson-type norm-preserving implicit finite difference scheme inspired by the transfer matrix method. We provide estimates of the numerical dispersion characteristics of our scheme so that optimal step sizes can be chosen to either minimize numerical dispersion or to mimic the exact dispersion. We then show numerical results of a fundamental soliton propagating in the presence of multiple scattering to demonstrate that choosing a subunit cell spatial step size is critical in accurately capturing the effects of multiple scattering, and illustrate the stochastic nature of disorder by simulating soliton propagation in various instances of disordered photonic crystal waveguides. Our approach is easily extended to include a wide range of optical nonlinearities and is applicable to various photonic nanostructures where power propagation is bidirectional, either by choice, or as a result of multiple scattering.
Model selection and assessment for multi-species occupancy models
Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.
2016-01-01
While multi-species occupancy models (MSOMs) are emerging as a popular method for analyzing biodiversity data, formal checking and validation approaches for this class of models have lagged behind. Concurrent with the rise in application of MSOMs among ecologists, a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence. Unlike single-species occupancy models that use integrated likelihoods, MSOMs are usually couched in a Bayesian framework and contain multiple levels. Standard model checking and selection methods are often unreliable in this setting and there is only limited guidance in the ecological literature for this class of models. We examined several different contemporary Bayesian hierarchical approaches for checking and validating MSOMs and applied these methods to a freshwater aquatic study system in Colorado, USA, to better understand the diversity and distributions of plains fishes. Our findings indicated distinct differences among model selection approaches, with cross-validation techniques performing the best in terms of prediction.
Karim, Mohammad Ehsanul; Gustafson, Paul; Petkau, John; Tremlett, Helen
2016-01-01
In time-to-event analyses of observational studies of drug effectiveness, incorrect handling of the period between cohort entry and first treatment exposure during follow-up may result in immortal time bias. This bias can be eliminated by acknowledging a change in treatment exposure status with time-dependent analyses, such as fitting a time-dependent Cox model. The prescription time-distribution matching (PTDM) method has been proposed as a simpler approach for controlling immortal time bias. Using simulation studies and theoretical quantification of bias, we compared the performance of the PTDM approach with that of the time-dependent Cox model in the presence of immortal time. Both assessments revealed that the PTDM approach did not adequately address immortal time bias. Based on our simulation results, another recently proposed observational data analysis technique, the sequential Cox approach, was found to be more useful than the PTDM approach (Cox: bias = −0.002, mean squared error = 0.025; PTDM: bias = −1.411, mean squared error = 2.011). We applied these approaches to investigate the association of β-interferon treatment with delaying disability progression in a multiple sclerosis cohort in British Columbia, Canada (Long-Term Benefits and Adverse Effects of Beta-Interferon for Multiple Sclerosis (BeAMS) Study, 1995–2008). PMID:27455963
Riley, Richard D; Ensor, Joie; Jackson, Dan; Burke, Danielle L
2017-01-01
Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
Yang, Yingbao; Li, Xiaolong; Pan, Xin; Zhang, Yong; Cao, Chen
2017-01-01
Many downscaling algorithms have been proposed to address the issue of coarse-resolution land surface temperature (LST) derived from available satellite-borne sensors. However, few studies have focused on improving LST downscaling in urban areas with several mixed surface types. In this study, LST was downscaled by a multiple linear regression model between LST and multiple scale factors in mixed areas with three or four surface types. The correlation coefficients (CCs) between LST and the scale factors were used to assess the importance of the scale factors within a moving window. CC thresholds determined which factors participated in the fitting of the regression equation. The proposed downscaling approach, which involves an adaptive selection of the scale factors, was evaluated using the LST derived from four Landsat 8 thermal imageries of Nanjing City in different seasons. Results of the visual and quantitative analyses show that the proposed approach achieves relatively satisfactory downscaling results on 11 August, with coefficient of determination and root-mean-square error of 0.87 and 1.13 °C, respectively. Relative to other approaches, our approach shows the similar accuracy and the availability in all seasons. The best (worst) availability occurred in the region of vegetation (water). Thus, the approach is an efficient and reliable LST downscaling method. Future tasks include reliable LST downscaling in challenging regions and the application of our model in middle and low spatial resolutions. PMID:28368301
Methods for forecasting freight in uncertainty : time series analysis of multiple factors.
DOT National Transportation Integrated Search
2011-01-31
The main goal of this research was to analyze and more accurately model freight movement in : Alabama. Ultimately, the goal of this project was to provide an overall approach to the : integration of accurate freight models into transportation plans a...
Multiagent intelligent systems
NASA Astrophysics Data System (ADS)
Krause, Lee S.; Dean, Christopher; Lehman, Lynn A.
2003-09-01
This paper will discuss a simulation approach based upon a family of agent-based models. As the demands placed upon simulation technology by such applications as Effects Based Operations (EBO), evaluations of indicators and warnings surrounding homeland defense and commercial demands such financial risk management current single thread based simulations will continue to show serious deficiencies. The types of "what if" analysis required to support these types of applications, demand rapidly re-configurable approaches capable of aggregating large models incorporating multiple viewpoints. The use of agent technology promises to provide a broad spectrum of models incorporating differing viewpoints through a synthesis of a collection of models. Each model would provide estimates to the overall scenario based upon their particular measure or aspect. An agent framework, denoted as the "family" would provide a common ontology in support of differing aspects of the scenario. This approach permits the future of modeling to change from viewing the problem as a single thread simulation, to take into account multiple viewpoints from different models. Even as models are updated or replaced the agent approach permits rapid inclusion in new or modified simulations. In this approach a variety of low and high-resolution information and its synthesis requires a family of models. Each agent "publishes" its support for a given measure and each model provides their own estimates on the scenario based upon their particular measure or aspect. If more than one agent provides the same measure (e.g. cognitive) then the results from these agents are combined to form an aggregate measure response. The objective would be to inform and help calibrate a qualitative model, rather than merely to present highly aggregated statistical information. As each result is processed, the next action can then be determined. This is done by a top-level decision system that communicates to the family at the ontology level without any specific understanding of the processes (or model) behind each agent. The increasingly complex demands upon simulation for the necessity to incorporate the breadth and depth of influencing factors makes a family of agent based models a promising solution. This paper will discuss that solution with syntax and semantics necessary to support the approach.
NASA Astrophysics Data System (ADS)
Mai, J.; Cuntz, M.; Zink, M.; Schaefer, D.; Thober, S.; Samaniego, L. E.; Shafii, M.; Tolson, B.
2015-12-01
Hydrologic models are traditionally calibrated against discharge. Recent studies have shown however, that only a few global model parameters are constrained using the integral discharge measurements. It is therefore advisable to use additional information to calibrate those models. Snow pack data, for example, could improve the parametrization of snow-related processes, which might be underrepresented when using only discharge. One common approach is to combine these multiple objectives into one single objective function and allow the use of a single-objective algorithm. Another strategy is to consider the different objectives separately and apply a Pareto-optimizing algorithm. Both methods are challenging in the choice of appropriate multiple objectives with either conflicting interests or the focus on different model processes. A first aim of this study is to compare the two approaches employing the mesoscale Hydrologic Model mHM at several distinct river basins over Europe and North America. This comparison will allow the identification of the single-objective solution on the Pareto front. It is elucidated if this position is determined by the weighting and scaling of the multiple objectives when combing them to the single objective. The principal second aim is to guide the selection of proper objectives employing sensitivity analyses. These analyses are used to determine if an additional information would help to constrain additional model parameters. The additional information are either multiple data sources or multiple signatures of one measurement. It is evaluated if specific discharge signatures can inform different parts of the hydrologic model. The results show that an appropriate selection of discharge signatures increased the number of constrained parameters by more than 50% compared to using only NSE of the discharge time series. It is further assessed if the use of these signatures impose conflicting objectives on the hydrologic model. The usage of signatures is furthermore contrasted to the use of additional observations such as soil moisture or snow height. The gain of using an auxiliary dataset is determined using the parametric sensitivity on the respective modeled variable.
Chen, Yongsheng; Persaud, Bhagwant
2014-09-01
Crash modification factors (CMFs) for road safety treatments are developed as multiplicative factors that are used to reflect the expected changes in safety performance associated with changes in highway design and/or the traffic control features. However, current CMFs have methodological drawbacks. For example, variability with application circumstance is not well understood, and, as important, correlation is not addressed when several CMFs are applied multiplicatively. These issues can be addressed by developing safety performance functions (SPFs) with components of crash modification functions (CM-Functions), an approach that includes all CMF related variables, along with others, while capturing quantitative and other effects of factors and accounting for cross-factor correlations. CM-Functions can capture the safety impact of factors through a continuous and quantitative approach, avoiding the problematic categorical analysis that is often used to capture CMF variability. There are two formulations to develop such SPFs with CM-Function components - fully specified models and hierarchical models. Based on sample datasets from two Canadian cities, both approaches are investigated in this paper. While both model formulations yielded promising results and reasonable CM-Functions, the hierarchical model was found to be more suitable in retaining homogeneity of first-level SPFs, while addressing CM-Functions in sub-level modeling. In addition, hierarchical models better capture the correlations between different impact factors. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Hussein, Bassam A.
2015-01-01
The paper demonstrates and evaluates the effectiveness of a blended learning approach to create a meaningful learning environment. We use the term blended learning approach in this paper to refer to the use of multiple or hybrid instructional methods that emphasize the role of learners as contributors to the learning process rather than recipients…
Periodicity analysis of tourist arrivals to Banda Aceh using smoothing SARIMA approach
NASA Astrophysics Data System (ADS)
Miftahuddin, Helida, Desri; Sofyan, Hizir
2017-11-01
Forecasting the number of tourist arrivals who enters a region is needed for tourism businesses, economic and industrial policies, so that the statistical modeling needs to be conducted. Banda Aceh is the capital of Aceh province more economic activity is driven by the services sector, one of which is the tourism sector. Therefore, the prediction of the number of tourist arrivals is needed to develop further policies. The identification results indicate that the data arrival of foreign tourists to Banda Aceh to contain the trend and seasonal nature. Allegedly, the number of arrivals is influenced by external factors, such as economics, politics, and the holiday season caused the structural break in the data. Trend patterns are detected by using polynomial regression with quadratic and cubic approaches, while seasonal is detected by a periodic regression polynomial with quadratic and cubic approach. To model the data that has seasonal effects, one of the statistical methods that can be used is SARIMA (Seasonal Autoregressive Integrated Moving Average). The results showed that the smoothing, a method to detect the trend pattern is cubic polynomial regression approach, with the modified model and the multiplicative periodicity of 12 months. The AIC value obtained was 70.52. While the method for detecting the seasonal pattern is a periodic regression polynomial cubic approach, with the modified model and the multiplicative periodicity of 12 months. The AIC value obtained was 73.37. Furthermore, the best model to predict the number of foreign tourist arrivals to Banda Aceh in 2017 to 2018 is SARIMA (0,1,1)(1,1,0) with MAPE is 26%.
Accounting for disease modifying therapy in models of clinical progression in multiple sclerosis.
Healy, Brian C; Engler, David; Gholipour, Taha; Weiner, Howard; Bakshi, Rohit; Chitnis, Tanuja
2011-04-15
Identifying predictors of clinical progression in patients with relapsing-remitting multiple sclerosis (RRMS) is complicated in the era of disease modifying therapy (DMT) because patients follow many different DMT regimens. To investigate predictors of progression in a treated RRMS sample, a cohort of RRMS patients was prospectively followed in the Comprehensive Longitudinal Investigation of Multiple Sclerosis at the Brigham and Women's Hospital (CLIMB). Enrollment criteria were exposure to either interferon-β (IFN-β, n=164) or glatiramer acetate (GA, n=114) for at least 6 months prior to study entry. Baseline demographic and clinical features were used as candidate predictors of longitudinal clinical change on the Expanded Disability Status Scale (EDSS). We compared three approaches to account for DMT effects in statistical modeling. In all approaches, we analyzed all patients together and stratified based on baseline DMT. Model 1 used all available longitudinal EDSS scores, even those after on-study DMT changes. Model 2 used only clinical observations prior to changing DMT. Model 3 used causal statistical models to identify predictors of clinical change. When all patients were considered using Model 1, patients with a motor symptom as the first relapse had significantly larger change in EDSS scores during follow-up (p=0.04); none of the other clinical or demographic variables significantly predicted change. In Models 2 and 3, results were generally unchanged. DMT modeling choice had a modest impact on the variables classified as predictors of EDSS score change. Importantly, however, interpretation of these predictors is dependent upon modeling choice. Copyright © 2011 Elsevier B.V. All rights reserved.
Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues
NASA Astrophysics Data System (ADS)
Adams, W. H.; Iyengar, Giridharan; Lin, Ching-Yung; Naphade, Milind Ramesh; Neti, Chalapathy; Nock, Harriet J.; Smith, John R.
2003-12-01
We present a learning-based approach to the semantic indexing of multimedia content using cues derived from audio, visual, and text features. We approach the problem by developing a set of statistical models for a predefined lexicon. Novel concepts are then mapped in terms of the concepts in the lexicon. To achieve robust detection of concepts, we exploit features from multiple modalities, namely, audio, video, and text. Concept representations are modeled using Gaussian mixture models (GMM), hidden Markov models (HMM), and support vector machines (SVM). Models such as Bayesian networks and SVMs are used in a late-fusion approach to model concepts that are not explicitly modeled in terms of features. Our experiments indicate promise in the proposed classification and fusion methodologies: our proposed fusion scheme achieves more than 10% relative improvement over the best unimodal concept detector.
Complexity Science Applications to Dynamic Trajectory Management: Research Strategies
NASA Technical Reports Server (NTRS)
Sawhill, Bruce; Herriot, James; Holmes, Bruce J.; Alexandrov, Natalia
2009-01-01
The promise of the Next Generation Air Transportation System (NextGen) is strongly tied to the concept of trajectory-based operations in the national airspace system. Existing efforts to develop trajectory management concepts are largely focused on individual trajectories, optimized independently, then de-conflicted among each other, and individually re-optimized, as possible. The benefits in capacity, fuel, and time are valuable, though perhaps could be greater through alternative strategies. The concept of agent-based trajectories offers a strategy for automation of simultaneous multiple trajectory management. The anticipated result of the strategy would be dynamic management of multiple trajectories with interacting and interdependent outcomes that satisfy multiple, conflicting constraints. These constraints would include the business case for operators, the capacity case for the Air Navigation Service Provider (ANSP), and the environmental case for noise and emissions. The benefits in capacity, fuel, and time might be improved over those possible under individual trajectory management approaches. The proposed approach relies on computational agent-based modeling (ABM), combinatorial mathematics, as well as application of "traffic physics" concepts to the challenge, and modeling and simulation capabilities. The proposed strategy could support transforming air traffic control from managing individual aircraft behaviors to managing systemic behavior of air traffic in the NAS. A system built on the approach could provide the ability to know when regions of airspace approach being "full," that is, having non-viable local solution space for optimizing trajectories in advance.
Reflection of a Year Long Model-Driven Business and UI Modeling Development Project
NASA Astrophysics Data System (ADS)
Sukaviriya, Noi; Mani, Senthil; Sinha, Vibha
Model-driven software development enables users to specify an application at a high level - a level that better matches problem domain. It also promises the users with better analysis and automation. Our work embarks on two collaborating domains - business process and human interactions - to build an application. Business modeling expresses business operations and flows then creates business flow implementation. Human interaction modeling expresses a UI design, its relationship with business data, logic, and flow, and can generate working UI. This double modeling approach automates the production of a working system with UI and business logic connected. This paper discusses the human aspects of this modeling approach after a year long of building a procurement outsourcing contract application using the approach - the result of which was deployed in December 2008. The paper discusses in multiple areas the happy endings and some heartache. We end with insights on how a model-driven approach could do better for humans in the process.
Reinterpreting Comorbidity: A Model-Based Approach to Understanding and Classifying Psychopathology
Krueger, Robert F.; Markon, Kristian E.
2008-01-01
Comorbidity has presented a persistent puzzle for psychopathology research. We review recent literature indicating that the puzzle of comorbidity is being solved by research fitting explicit quantitative models to data on comorbidity. We present a meta-analysis of a liability spectrum model of comorbidity, in which specific mental disorders are understood as manifestations of latent liability factors that explain comorbidity by virtue of their impact on multiple disorders. Nosological, structural, etiological, and psychological aspects of this liability spectrum approach to understanding comorbidity are discussed. PMID:17716066
Vassallo, Rebecca; Durrant, Gabriele B; Smith, Peter W F; Goldstein, Harvey
2015-01-01
The paper investigates two different multilevel approaches, the multilevel cross-classified and the multiple-membership models, for the analysis of interviewer effects on wave non-response in longitudinal surveys. The models proposed incorporate both interviewer and area effects to account for the non-hierarchical structure, the influence of potentially more than one interviewer across waves and possible confounding of area and interviewer effects arising from the non-random allocation of interviewers across areas. The methods are compared by using a data set: the UK Family and Children Survey. PMID:25598587
DOE Office of Scientific and Technical Information (OSTI.GOV)
Creutz, Michael
Using the Sigma model to explore the lowest order pseudo-scalar spectrum with SU(3) breaking, this talk considers an additional exact "taste" symmetry to mimic species doubling. Rooting replicas of a valid approach such as Wilson fermions reproduces the desired physical spectrum. In contrast, extra symmetries of the rooted staggered approach leave spurious states and a flavor dependent taste multiplicity.
Transport theory and the WKB approximation for interplanetary MHD fluctuations
NASA Technical Reports Server (NTRS)
Matthaeus, William H.; Zhou, YE; Zank, G. P.; Oughton, S.
1994-01-01
An alternative approach, based on a multiple scale analysis, is presented in order to reconcile the traditional Wentzel-Kramer-Brillouin (WKB) approach to the modeling of interplanetary fluctuations in a mildly inhomogeneous large-scale flow with a more recently developed transport theory. This enables us to compare directly, at a formal level, the inherent structure of the two models. In the case of noninteracting, incompressible (Alven) waves, the principle difference between the two models is the presence of leading-order couplings (called 'mixing effects') in the non-WKB turbulence model which are absent in a WKB development. Within the context of linearized MHD, two cases have been identified for which the leading order non-WJB 'mixing term' does not vanish at zero wavelength. For these cases the WKB expansion is divergent, whereas the multiple-scale theory is well behaved. We have thus established that the WKB results are contained within the multiple-scale theory, but leading order mixing effects, which are likely to have important observational consequences, can never be recovered in the WKB style expansion. Properties of the higher-order terms in each expansion are also discussed, leading to the conclusion that the non-WKB hierarchy may be applicable even when the scale separation parameter is not small.
Modeling Spatial Dependence of Rainfall Extremes Across Multiple Durations
NASA Astrophysics Data System (ADS)
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2018-03-01
Determining the probability of a flood event in a catchment given that another flood has occurred in a nearby catchment is useful in the design of infrastructure such as road networks that have multiple river crossings. These conditional flood probabilities can be estimated by calculating conditional probabilities of extreme rainfall and then transforming rainfall to runoff through a hydrologic model. Each catchment's hydrological response times are unlikely to be the same, so in order to estimate these conditional probabilities one must consider the dependence of extreme rainfall both across space and across critical storm durations. To represent these types of dependence, this study proposes a new approach for combining extreme rainfall across different durations within a spatial extreme value model using max-stable process theory. This is achieved in a stepwise manner. The first step defines a set of common parameters for the marginal distributions across multiple durations. The parameters are then spatially interpolated to develop a spatial field. Storm-level dependence is represented through the max-stable process for rainfall extremes across different durations. The dependence model shows a reasonable fit between the observed pairwise extremal coefficients and the theoretical pairwise extremal coefficient function across all durations. The study demonstrates how the approach can be applied to develop conditional maps of the return period and return level across different durations.
Matsumoto, Atsushi; Miyazaki, Naoyuki; Takagi, Junichi; Iwasaki, Kenji
2017-03-23
In this study, we develop an approach termed "2D hybrid analysis" for building atomic models by image matching from electron microscopy (EM) images of biological molecules. The key advantage is that it is applicable to flexible molecules, which are difficult to analyze by 3DEM approach. In the proposed approach, first, a lot of atomic models with different conformations are built by computer simulation. Then, simulated EM images are built from each atomic model. Finally, they are compared with the experimental EM image. Two kinds of models are used as simulated EM images: the negative stain model and the simple projection model. Although the former is more realistic, the latter is adopted to perform faster computations. The use of the negative stain model enables decomposition of the averaged EM images into multiple projection images, each of which originated from a different conformation or orientation. We apply this approach to the EM images of integrin to obtain the distribution of the conformations, from which the pathway of the conformational change of the protein is deduced.
ERIC Educational Resources Information Center
Tay, Louis; Huang, Qiming; Vermunt, Jeroen K.
2016-01-01
In large-scale testing, the use of multigroup approaches is limited for assessing differential item functioning (DIF) across multiple variables as DIF is examined for each variable separately. In contrast, the item response theory with covariate (IRT-C) procedure can be used to examine DIF across multiple variables (covariates) simultaneously. To…
A method for integrating multiple components in a decision support system
Donald Nute; Walter D. Potter; Zhiyuan Cheng; Mayukh Dass; Astrid Glende; Frederick Maierv; Cy Routh; Hajime Uchiyama; Jin Wang; Sarah Witzig; Mark Twery; Peter Knopp; Scott Thomasma; H. Michael Rauscher
2005-01-01
We present a flexible, extensible method for integrating multiple tools into a single large decision support system (DSS) using a forest ecosystem management DSS (NED-2) as an example. In our approach, a rich ontology for the target domain is developed and implemented in the internal data model for the DSS. Semi-autonomous agents control external components and...
ERIC Educational Resources Information Center
Teo, Timothy
2010-01-01
Purpose: The purpose of this paper is to examine the effect of gender on pre-service teachers' computer attitudes. Design/methodology/approach: A total of 157 pre-service teachers completed a survey questionnaire measuring their responses to four constructs which explain computer attitude. These were administered during the teaching term where…
Fast automated segmentation of multiple objects via spatially weighted shape learning
NASA Astrophysics Data System (ADS)
Chandra, Shekhar S.; Dowling, Jason A.; Greer, Peter B.; Martin, Jarad; Wratten, Chris; Pichler, Peter; Fripp, Jurgen; Crozier, Stuart
2016-11-01
Active shape models (ASMs) have proved successful in automatic segmentation by using shape and appearance priors in a number of areas such as prostate segmentation, where accurate contouring is important in treatment planning for prostate cancer. The ASM approach however, is heavily reliant on a good initialisation for achieving high segmentation quality. This initialisation often requires algorithms with high computational complexity, such as three dimensional (3D) image registration. In this work, we present a fast, self-initialised ASM approach that simultaneously fits multiple objects hierarchically controlled by spatially weighted shape learning. Prominent objects are targeted initially and spatial weights are progressively adjusted so that the next (more difficult, less visible) object is simultaneously initialised using a series of weighted shape models. The scheme was validated and compared to a multi-atlas approach on 3D magnetic resonance (MR) images of 38 cancer patients and had the same (mean, median, inter-rater) Dice’s similarity coefficients of (0.79, 0.81, 0.85), while having no registration error and a computational time of 12-15 min, nearly an order of magnitude faster than the multi-atlas approach.
Fast automated segmentation of multiple objects via spatially weighted shape learning.
Chandra, Shekhar S; Dowling, Jason A; Greer, Peter B; Martin, Jarad; Wratten, Chris; Pichler, Peter; Fripp, Jurgen; Crozier, Stuart
2016-11-21
Active shape models (ASMs) have proved successful in automatic segmentation by using shape and appearance priors in a number of areas such as prostate segmentation, where accurate contouring is important in treatment planning for prostate cancer. The ASM approach however, is heavily reliant on a good initialisation for achieving high segmentation quality. This initialisation often requires algorithms with high computational complexity, such as three dimensional (3D) image registration. In this work, we present a fast, self-initialised ASM approach that simultaneously fits multiple objects hierarchically controlled by spatially weighted shape learning. Prominent objects are targeted initially and spatial weights are progressively adjusted so that the next (more difficult, less visible) object is simultaneously initialised using a series of weighted shape models. The scheme was validated and compared to a multi-atlas approach on 3D magnetic resonance (MR) images of 38 cancer patients and had the same (mean, median, inter-rater) Dice's similarity coefficients of (0.79, 0.81, 0.85), while having no registration error and a computational time of 12-15 min, nearly an order of magnitude faster than the multi-atlas approach.
Multiple imputation methods for bivariate outcomes in cluster randomised trials.
DiazOrdaz, K; Kenward, M G; Gomes, M; Grieve, R
2016-09-10
Missing observations are common in cluster randomised trials. The problem is exacerbated when modelling bivariate outcomes jointly, as the proportion of complete cases is often considerably smaller than the proportion having either of the outcomes fully observed. Approaches taken to handling such missing data include the following: complete case analysis, single-level multiple imputation that ignores the clustering, multiple imputation with a fixed effect for each cluster and multilevel multiple imputation. We contrasted the alternative approaches to handling missing data in a cost-effectiveness analysis that uses data from a cluster randomised trial to evaluate an exercise intervention for care home residents. We then conducted a simulation study to assess the performance of these approaches on bivariate continuous outcomes, in terms of confidence interval coverage and empirical bias in the estimated treatment effects. Missing-at-random clustered data scenarios were simulated following a full-factorial design. Across all the missing data mechanisms considered, the multiple imputation methods provided estimators with negligible bias, while complete case analysis resulted in biased treatment effect estimates in scenarios where the randomised treatment arm was associated with missingness. Confidence interval coverage was generally in excess of nominal levels (up to 99.8%) following fixed-effects multiple imputation and too low following single-level multiple imputation. Multilevel multiple imputation led to coverage levels of approximately 95% throughout. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Bi-level Multi-Source Learning for Heterogeneous Block-wise Missing Data
Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M.; Ye, Jieping
2013-01-01
Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified “bi-level” learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches. PMID:23988272
Bi-level multi-source learning for heterogeneous block-wise missing data.
Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M; Ye, Jieping
2014-11-15
Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer's Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified "bi-level" learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches. © 2013 Elsevier Inc. All rights reserved.
New Models for Predicting Diameter at Breast Height from Stump Dimensions
James A. Westfall
2010-01-01
Models to predict dbh from stump dimensions are presented for 18 species groups. Data used to fit the models were collected across thirteen states in the northeastern United States. Primarily because of the presence of multiple measurements from each tree, a mixed-effects modeling approach was used to account for the lack of independence among observations. The...
ERIC Educational Resources Information Center
Zhou, Bo; Konstorum, Anna; Duong, Thao; Tieu, Kinh H.; Wells, William M.; Brown, Gregory G.; Stern, Hal S.; Shahbaba, Babak
2013-01-01
We propose a hierarchical Bayesian model for analyzing multi-site experimental fMRI studies. Our method takes the hierarchical structure of the data (subjects are nested within sites, and there are multiple observations per subject) into account and allows for modeling between-site variation. Using posterior predictive model checking and model…
A Model-Based Method for Content Validation of Automatically Generated Test Items
ERIC Educational Resources Information Center
Zhang, Xinxin; Gierl, Mark
2016-01-01
The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…
Dynamic fuzzy modeling of storm water infiltration in urban fractured aquifers
Hong, Y.-S.; Rosen, Michael R.; Reeves, R.R.
2002-01-01
In an urban fractured-rock aquifer in the Mt. Eden area of Auckland, New Zealand, disposal of storm water is via "soakholes" drilled directly into the top of the fractured basalt rock. The dynamic response of the groundwater level due to the storm water infiltration shows characteristics of a strongly time-varying system. A dynamic fuzzy modeling approach, which is based on multiple local models that are weighted using fuzzy membership functions, has been developed to identify and predict groundwater level fluctuations caused by storm water infiltration. The dynamic fuzzy model is initialized by the fuzzy clustering algorithm and optimized by the gradient-descent algorithm in order to effectively derive the multiple local models-each of which is associated with a locally valid model that represents the groundwater level state as a response to different intensities of rainfall events. The results have shown that even if the number of fuzzy local models derived is small, the fuzzy modeling approach developed provides good prediction results despite the highly time-varying nature of this urban fractured-rock aquifer system. Further, it allows interpretable representations of the dynamic behavior of the groundwater system due to storm water infiltration.
Enhancing multiple disciplinary teamwork.
Weaver, Terri E
2008-01-01
Multiple disciplinary research provides an opportunity to bring together investigators across disciplines to provide new views and develop innovative approaches to important questions. Through this shared experience, novel paradigms are formed, original frameworks are developed, and new language is generated. Integral to the successful construction of effective cross-disciplinary teams is the recognition of antecedent factors that affect the development of the team such as intrapersonal, social, physical environmental, organizational, and institutional influences. Team functioning is enhanced with well-developed behavioral, affective, interpersonal, and intellectual processes. Outcomes of effective multiple disciplinary research teams include novel ideas, integrative models, new training programs, institutional change, and innovative policies that can also influence the degree to which antecedents and processes contribute to team performance. Ongoing evaluation of team functioning and achievement of designated outcomes ensures the continued development of the multiple disciplinary team and confirmation of this approach as important to the advancement of science.
2013-01-01
Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons are equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons, and thus, ads valuable precision when data is sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pair wise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogenous variance approach. Conclusions MTC models using a homogenous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298
The Money-Creation Model: Another Pedagogy.
ERIC Educational Resources Information Center
Gamble, Ralph C., Jr.
1991-01-01
Describes graphical techniques to help explain the multiple creation of deposits that accompany lending in a fractional reserve banking system. Presents a model that emphasizes the banking system, the interaction of total permitted, required, and excess reserves and deposits. Argues that the approach simplifies information to examining a slope…
The Industrial Process System Assessment (IPSA) methodology is a multiple step allocation approach for connecting information from the production line level up to the facility level and vice versa using a multiscale model of process systems. The allocation procedure assigns inpu...
Attitude determination using an adaptive multiple model filtering Scheme
NASA Technical Reports Server (NTRS)
Lam, Quang; Ray, Surendra N.
1995-01-01
Attitude determination has been considered as a permanent topic of active research and perhaps remaining as a forever-lasting interest for spacecraft system designers. Its role is to provide a reference for controls such as pointing the directional antennas or solar panels, stabilizing the spacecraft or maneuvering the spacecraft to a new orbit. Least Square Estimation (LSE) technique was utilized to provide attitude determination for the Nimbus 6 and G. Despite its poor performance (estimation accuracy consideration), LSE was considered as an effective and practical approach to meet the urgent need and requirement back in the 70's. One reason for this poor performance associated with the LSE scheme is the lack of dynamic filtering or 'compensation'. In other words, the scheme is based totally on the measurements and no attempts were made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is essentially based on the latest proof on the interactive multiple model design framework to handle the unknown of the system noise characteristics or statistics. The concept fundamentally employs a bank of Kalman filter or submodel, instead of using fixed values for the system noise statistics for each submodel (per operating condition) as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier. To insure the robust performance for the proposed advanced system identifier, it is also further reinforced by a learning system which is implemented (in the outer loop) using neural networks to identify other unknown quantities such as spacecraft dynamics parameters, gyro biases, dynamic disturbances, or environment variations.
Attitude determination using an adaptive multiple model filtering Scheme
NASA Astrophysics Data System (ADS)
Lam, Quang; Ray, Surendra N.
1995-05-01
Attitude determination has been considered as a permanent topic of active research and perhaps remaining as a forever-lasting interest for spacecraft system designers. Its role is to provide a reference for controls such as pointing the directional antennas or solar panels, stabilizing the spacecraft or maneuvering the spacecraft to a new orbit. Least Square Estimation (LSE) technique was utilized to provide attitude determination for the Nimbus 6 and G. Despite its poor performance (estimation accuracy consideration), LSE was considered as an effective and practical approach to meet the urgent need and requirement back in the 70's. One reason for this poor performance associated with the LSE scheme is the lack of dynamic filtering or 'compensation'. In other words, the scheme is based totally on the measurements and no attempts were made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is essentially based on the latest proof on the interactive multiple model design framework to handle the unknown of the system noise characteristics or statistics. The concept fundamentally employs a bank of Kalman filter or submodel, instead of using fixed values for the system noise statistics for each submodel (per operating condition) as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier. To insure the robust performance for the proposed advanced system identifier, it is also further reinforced by a learning system which is implemented (in the outer loop) using neural networks to identify other unknown quantities such as spacecraft dynamics parameters, gyro biases, dynamic disturbances, or environment variations.
Data matching for free-surface multiple attenuation by multidimensional deconvolution
NASA Astrophysics Data System (ADS)
van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald
2012-09-01
A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.
Integrated presentation of ecological risk from multiple stressors
Goussen, Benoit; Price, Oliver R.; Rendal, Cecilie; Ashauer, Roman
2016-01-01
Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by construct-ing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic. PMID:27782171
Channel Acquisition for Massive MIMO-OFDM With Adjustable Phase Shift Pilots
NASA Astrophysics Data System (ADS)
You, Li; Gao, Xiqi; Swindlehurst, A. Lee; Zhong, Wen
2016-03-01
We propose adjustable phase shift pilots (APSPs) for channel acquisition in wideband massive multiple-input multiple-output (MIMO) systems employing orthogonal frequency division multiplexing (OFDM) to reduce the pilot overhead. Based on a physically motivated channel model, we first establish a relationship between channel space-frequency correlations and the channel power angle-delay spectrum in the massive antenna array regime, which reveals the channel sparsity in massive MIMO-OFDM. With this channel model, we then investigate channel acquisition, including channel estimation and channel prediction, for massive MIMO-OFDM with APSPs. We show that channel acquisition performance in terms of sum mean square error can be minimized if the user terminals' channel power distributions in the angle-delay domain can be made non-overlapping with proper phase shift scheduling. A simplified pilot phase shift scheduling algorithm is developed based on this optimal channel acquisition condition. The performance of APSPs is investigated for both one symbol and multiple symbol data models. Simulations demonstrate that the proposed APSP approach can provide substantial performance gains in terms of achievable spectral efficiency over the conventional phase shift orthogonal pilot approach in typical mobility scenarios.
NASA Technical Reports Server (NTRS)
Parker, Linda Neergaard; Zank, Gary P.
2013-01-01
We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models (Protheroe and Stanev, 1998; Moraal and Axford, 1983; Ball and Kirk, 1992; Drury et al., 1999) in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles outside the box (Melrose and Pope, 1993; Zank et al., 2000). We adiabatically decompress the accelerated particle distribution between each shock by either the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000) where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (Emax) appropriate for quasi-parallel and quasi-perpendicular shocks (Zank et al., 2000, 2006; Dosch and Shalchi, 2010) and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).
System design in an evolving system-of-systems architecture and concept of operations
NASA Astrophysics Data System (ADS)
Rovekamp, Roger N., Jr.
Proposals for space exploration architectures have increased in complexity and scope. Constituent systems (e.g., rovers, habitats, in-situ resource utilization facilities, transfer vehicles, etc) must meet the needs of these architectures by performing in multiple operational environments and across multiple phases of the architecture's evolution. This thesis proposes an approach for using system-of-systems engineering principles in conjunction with system design methods (e.g., Multi-objective optimization, genetic algorithms, etc) to create system design options that perform effectively at both the system and system-of-systems levels, across multiple concepts of operations, and over multiple architectural phases. The framework is presented by way of an application problem that investigates the design of power systems within a power sharing architecture for use in a human Lunar Surface Exploration Campaign. A computer model has been developed that uses candidate power grid distribution solutions for a notional lunar base. The agent-based model utilizes virtual control agents to manage the interactions of various exploration and infrastructure agents. The philosophy behind the model is based both on lunar power supply strategies proposed in literature, as well as on the author's own approaches for power distribution strategies of future lunar bases. In addition to proposing a framework for system design, further implications of system-of-systems engineering principles are briefly explored, specifically as they relate to producing more robust cross-cultural system-of-systems architecture solutions.
MAGDM linear-programming models with distinct uncertain preference structures.
Xu, Zeshui S; Chen, Jian
2008-10-01
Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.
A nonparametric multiple imputation approach for missing categorical data.
Zhou, Muhan; He, Yulei; Yu, Mandi; Hsu, Chiu-Hsieh
2017-06-06
Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. We propose a nearest-neighbour multiple imputation approach to impute a missing at random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring distances between each missing value with other non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model) and the other fits a logistic regression for predicting missingness probabilities (the missingness model). A weighting scheme is used to accommodate contributions from two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme under some misspecifications of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions from the two working models and achieve optimal results for the proposed method. We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data with more than two levels for assessing the distribution of the outcome. In terms of the choices for the working models, we suggest a multinomial logistic regression for predicting the missing outcome and a binary logistic regression for predicting the missingness probability.
Formalizing the Role of Agent-Based Modeling in Causal Inference and Epidemiology
Marshall, Brandon D. L.; Galea, Sandro
2015-01-01
Calls for the adoption of complex systems approaches, including agent-based modeling, in the field of epidemiology have largely centered on the potential for such methods to examine complex disease etiologies, which are characterized by feedback behavior, interference, threshold dynamics, and multiple interacting causal effects. However, considerable theoretical and practical issues impede the capacity of agent-based methods to examine and evaluate causal effects and thus illuminate new areas for intervention. We build on this work by describing how agent-based models can be used to simulate counterfactual outcomes in the presence of complexity. We show that these models are of particular utility when the hypothesized causal mechanisms exhibit a high degree of interdependence between multiple causal effects and when interference (i.e., one person's exposure affects the outcome of others) is present and of intrinsic scientific interest. Although not without challenges, agent-based modeling (and complex systems methods broadly) represent a promising novel approach to identify and evaluate complex causal effects, and they are thus well suited to complement other modern epidemiologic methods of etiologic inquiry. PMID:25480821
Multi-criteria comparative evaluation of spallation reaction models
NASA Astrophysics Data System (ADS)
Andrianov, Andrey; Andrianova, Olga; Konobeev, Alexandr; Korovin, Yury; Kuptsov, Ilya
2017-09-01
This paper presents an approach to a comparative evaluation of the predictive ability of spallation reaction models based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE) and the results of such a comparison for 17 spallation reaction models in the presence of the interaction of high-energy protons with natPb.
Afshin Pourmokhtarian; Charles T. Driscoll; John L. Campbell; Katharine Hayhoe; Anne M. K. Stoner
2016-01-01
Assessments of future climate change impacts on ecosystems typically rely on multiple climate model projections, but often utilize only one downscaling approach trained on one set of observations. Here, we explore the extent to which modeled biogeochemical responses to changing climate are affected by the selection of the climate downscaling method and training...
Curve of Factors Model: A Latent Growth Modeling Approach for Educational Research
ERIC Educational Resources Information Center
Isiordia, Marilu; Ferrer, Emilio
2018-01-01
A first-order latent growth model assesses change in an unobserved construct from a single score and is commonly used across different domains of educational research. However, examining change using a set of multiple response scores (e.g., scale items) affords researchers several methodological benefits not possible when using a single score. A…
The Robust Learning Model (RLM): A Comprehensive Approach to a New Online University
ERIC Educational Resources Information Center
Neumann, Yoram; Neumann, Edith F.
2010-01-01
This paper outlines the components of the Robust Learning Model (RLM) as a conceptual framework for creating a new online university offering numerous degree programs at all degree levels. The RLM is a multi-factorial model based on the basic belief that successful learning outcomes depend on multiple factors employed together in a holistic…
Abstraction Techniques for Parameterized Verification
2006-11-01
approach for applying model checking to unbounded systems is to extract finite state models from them using conservative abstraction techniques. Prop...36 2.5.1 Multiple Reference Processes . . . . . . . . . . . . . . . . . . . 36 2.5.2 Adding Monitor Processes...model checking to complex pieces of code like device drivers depends on the use of abstraction methods. An abstraction method extracts a small finite
NASA Astrophysics Data System (ADS)
Yang, Jingyu; Lin, Jiahui; Liu, Yuejun; Yang, Kang; Zhou, Lanwei; Chen, Guoping
2017-08-01
It is well known that intelligent control theory has been used in many research fields, novel modeling method (DROMM) is used for flexible rectangular active vibration control, and then the validity of new model is confirmed by comparing finite element model with new model. In this paper, taking advantage of the dynamics of flexible rectangular plate, a two-loop sliding mode (TSM) MIMO approach is introduced for designing multiple-input multiple-output continuous vibration control system, which can overcome uncertainties, disturbances or unstable dynamics. An illustrative example is given in order to show the feasibility of the method. Numerical simulations and experiment confirm the effectiveness of the proposed TSM MIMO controller.
Olives, Casey; Kim, Sun-Young; Sheppard, Lianne; Sampson, Paul D.; Szpiro, Adam A.; Oron, Assaf P.; Lindström, Johan; Vedal, Sverre; Kaufman, Joel D.
2014-01-01
Background: Cohort studies of the relationship between air pollution exposure and chronic health effects require predictions of exposure over long periods of time. Objectives: We developed a unified modeling approach for predicting fine particulate matter, nitrogen dioxide, oxides of nitrogen, and black carbon (as measured by light absorption coefficient) in six U.S. metropolitan regions from 1999 through early 2012 as part of the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air). Methods: We obtained monitoring data from regulatory networks and supplemented those data with study-specific measurements collected from MESA Air community locations and participants’ homes. In each region, we applied a spatiotemporal model that included a long-term spatial mean, time trends with spatially varying coefficients, and a spatiotemporal residual. The mean structure was derived from a large set of geographic covariates that was reduced using partial least-squares regression. We estimated time trends from observed time series and used spatial smoothing methods to borrow strength between observations. Results: Prediction accuracy was high for most models, with cross-validation R2 (R2CV) > 0.80 at regulatory and fixed sites for most regions and pollutants. At home sites, overall R2CV ranged from 0.45 to 0.92, and temporally adjusted R2CV ranged from 0.23 to 0.92. Conclusions: This novel spatiotemporal modeling approach provides accurate fine-scale predictions in multiple regions for four pollutants. We have generated participant-specific predictions for MESA Air to investigate health effects of long-term air pollution exposures. These successes highlight modeling advances that can be adopted more widely in modern cohort studies. Citation: Keller JP, Olives C, Kim SY, Sheppard L, Sampson PD, Szpiro AA, Oron AP, Lindström J, Vedal S, Kaufman JD. 2015. A unified spatiotemporal modeling approach for predicting concentrations of multiple air pollutants in the Multi-Ethnic Study of Atherosclerosis and Air Pollution. Environ Health Perspect 123:301–309; http://dx.doi.org/10.1289/ehp.1408145 PMID:25398188
A Decentralized Adaptive Approach to Fault Tolerant Flight Control
NASA Technical Reports Server (NTRS)
Wu, N. Eva; Nikulin, Vladimir; Heimes, Felix; Shormin, Victor
2000-01-01
This paper briefly reports some results of our study on the application of a decentralized adaptive control approach to a 6 DOF nonlinear aircraft model. The simulation results showed the potential of using this approach to achieve fault tolerant control. Based on this observation and some analysis, the paper proposes a multiple channel adaptive control scheme that makes use of the functionally redundant actuating and sensing capabilities in the model, and explains how to implement the scheme to tolerate actuator and sensor failures. The conditions, under which the scheme is applicable, are stated in the paper.
A Logical Account of Diagnosis with Multiple Theories
NASA Technical Reports Server (NTRS)
Pandurang, P.; Lum, Henry Jr. (Technical Monitor)
1994-01-01
Model-based diagnosis is a powerful, first-principles approach to diagnosis. The primary drawback with model-based diagnosis is that it is based on a system model, and this model might be inappropriate. The inappropriateness of models usually stems from the fundamental tradeoff between completeness and efficiency. Recently, Struss has developed an elegant proposal for diagnosis with multiple models. Struss characterizes models as relations and develops a precise notion of abstraction. He defines relations between models and analyzes the effect of a model switch on the space of possible diagnoses. In this paper we extend Struss's proposal in three ways. First, our account of diagnosis with multiple models is based on representing models as more expressive first-order theories, rather than as relations. A key technical contribution is the use of a general notion of abstraction based on interpretations between theories. Second, Struss conflates component modes with models, requiring him to define models relations such as choices which result in non-relational models. We avoid this problem by differentiating component modes from models. Third, we present a more general account of simplifications that correctly handles situations where the simplification contradicts the base theory.
Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework
Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique
2016-01-01
Aim Current interest in forecasting changes to species ranges have resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location Eastern North America (as an example). Methods Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698
Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework.
Talluto, Matthew V; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique
2016-02-01
Current interest in forecasting changes to species ranges have resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Eastern North America (as an example). Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple ( Acer saccharum ), an abundant tree native to eastern North America. For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making.
Kukafka, Rita; Johnson, Stephen B; Linfante, Allison; Allegrante, John P
2003-06-01
Many interventions to improve the success of information technology (IT) implementations are grounded in behavioral science, using theories, and models to identify conditions and determinants of successful use. However, each model in the IT literature has evolved to address specific theoretical problems of particular disciplinary concerns, and each model has been tested and has evolved using, in most cases, a more or less restricted set of IT implementation procedures. Functionally, this limits the perspective for taking into account the multiple factors at the individual, group, and organizational levels that influence use behavior. While a rich body of literature has emerged, employing prominent models such as the Technology Adoption Model, Social-Cognitive Theory, and Diffusion of Innovation Theory, the complexity of defining a suitable multi-level intervention has largely been overlooked. A gap exists between the implementation of IT and the integration of theories and models that can be utilized to develop multi-level approaches to identify factors that impede usage behavior. We present a novel framework that is intended to guide synthesis of more than one theoretical perspective for the purpose of planning multi-level interventions to enhance IT use. This integrative framework is adapted from PRECEDE/PROCEDE, a conceptual framework used by health planners in hundreds of published studies to direct interventions that account for the multiple determinants of behavior. Since we claim that the literature on IT use behavior does not now include a multi-level approach, we undertook a systematic literature analysis to confirm this assertion. Our framework facilitated organizing this literature synthesis and our analysis was aimed at determining if the IT implementation approaches in the published literature were characterized by an approach that considered at least two levels of IT usage determinants. We found that while 61% of studies mentioned or referred to theory, none considered two or more levels. In other words, although the researchers employ behavioral theory, they omit two fundamental propositions: (1) IT usage is influenced by multiple factors and (2) interventions must be multi-dimensional. Our literature synthesis may provide additional insight into the reason for high failure rates associated with underutilized systems, and underscores the need to move beyond the current dominant approach that employs a single model to guide IT implementation plans that aim to address factors associated with IT acceptance and subsequent positive use behavior.
Numeracy for the 21st Century: A Commentary
ERIC Educational Resources Information Center
Askew, Mike
2015-01-01
Many of the papers in this special issue draw on the fundamental model of numeracy developed by Goos et al. (Transforming mathematics instruction: multiple approaches and practices. 81-102, 2014). Four elements in that model--contexts, tools, dispositions, and mathematical knowledge--are embedded within a critical orientation, and being, or…
ERIC Educational Resources Information Center
Boronico, Jess; Murdy, Jim; Kong, Xinlu
2014-01-01
This manuscript proposes a mathematical model to address faculty sufficiency requirements towards assuring overall high quality management education at a global university. Constraining elements include full-time faculty coverage by discipline, location, and program, across multiple campus locations subject to stated service quality standards of…
Developing Mindful Learners Model: A 21st Century Ecological Approach.
ERIC Educational Resources Information Center
Fluellen, Jerry
The Developing Mindful Learners Model (DMLM), developed within the framework of Howard Gardner's multiple intelligences theory, connects three factors--content, framework, and world vision--for the purpose of helping underachieving students to become more "mindful": i.e., to become one who welcomes new ideas, considers more than one…
Clinical Reasoning in Athletic Training Education: Modeling Expert Thinking
ERIC Educational Resources Information Center
Geisler, Paul R.; Lazenby, Todd W.
2009-01-01
Objective: To address the need for a more definitive approach to critical thinking during athletic training educational experiences by introducing the clinical reasoning model for critical thinking. Background: Educators are aware of the need to teach students how to think critically. The multiple domains of athletic training are comprehensive and…
A Probabilistic Model of Cross-Categorization
ERIC Educational Resources Information Center
Shafto, Patrick; Kemp, Charles; Mansinghka, Vikash; Tenenbaum, Joshua B.
2011-01-01
Most natural domains can be represented in multiple ways: we can categorize foods in terms of their nutritional content or social role, animals in terms of their taxonomic groupings or their ecological niches, and musical instruments in terms of their taxonomic categories or social uses. Previous approaches to modeling human categorization have…
An Extension of Dominance Analysis to Canonical Correlation Analysis
ERIC Educational Resources Information Center
Huo, Yan; Budescu, David V.
2009-01-01
Dominance analysis (Budescu, 1993) offers a general framework for determination of relative importance of predictors in univariate and multivariate multiple regression models. This approach relies on pairwise comparisons of the contribution of predictors in all relevant subset models. In this article we extend dominance analysis to canonical…
Precision Efficacy Analysis for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.
When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross- validity approach to select sample sizes…
An Evaluation of Curriculum Materials Based Upon the Socio-Scientific Reasoning Model.
ERIC Educational Resources Information Center
Henkin, Gayle; And Others
To address the need to develop a scientifically literate citizenry, the socio-scientific reasoning model was created to guide curriculum development. Goals of this developmental approach include increasing: (1) students' skills in dealing with problems containing multiple interacting variables; (2) students' decision-making skills incorporating a…
Meta-Analysis of Scale Reliability Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2013-01-01
A latent variable modeling approach is outlined that can be used for meta-analysis of reliability coefficients of multicomponent measuring instruments. Important limitations of efforts to combine composite reliability findings across multiple studies are initially pointed out. A reliability synthesis procedure is discussed that is based on…
Analyzing Multiple Outcomes in Clinical Research Using Multivariate Multilevel Models
Baldwin, Scott A.; Imel, Zac E.; Braithwaite, Scott R.; Atkins, David C.
2014-01-01
Objective Multilevel models have become a standard data analysis approach in intervention research. Although the vast majority of intervention studies involve multiple outcome measures, few studies use multivariate analysis methods. The authors discuss multivariate extensions to the multilevel model that can be used by psychotherapy researchers. Method and Results Using simulated longitudinal treatment data, the authors show how multivariate models extend common univariate growth models and how the multivariate model can be used to examine multivariate hypotheses involving fixed effects (e.g., does the size of the treatment effect differ across outcomes?) and random effects (e.g., is change in one outcome related to change in the other?). An online supplemental appendix provides annotated computer code and simulated example data for implementing a multivariate model. Conclusions Multivariate multilevel models are flexible, powerful models that can enhance clinical research. PMID:24491071
Energy dependence of strangeness production and event-byevent fluctuations
NASA Astrophysics Data System (ADS)
Rustamov, Anar
2018-02-01
We review the energy dependence of strangeness production in nucleus-nucleus collisions and contrast it with the experimental observations in pp and p-A collisions at LHC energies as a function of the charged particle multiplicities. For the high multiplicity final states the results from pp and p-Pb reactions systematically approach the values obtained from Pb-Pb collisions. In statistical models this implies an approach to the thermodynamic limit, where differences of mean multiplicities between various formalisms, such as Canonical and Grand Canonical Ensembles, vanish. Furthermore, we report on event-by-event net-proton fluctuations as measured by STAR at RHIC/BNL and by ALICE at LHC/CERN and discuss various non-dynamical contributions to these measurements, which should be properly subtracted before comparison to theoretical calculations on dynamical net-baryon fluctuations.
NASA Astrophysics Data System (ADS)
Mokhtar, Nurkhairany Amyra; Zubairi, Yong Zulina; Hussin, Abdul Ghapor
2017-05-01
Outlier detection has been used extensively in data analysis to detect anomalous observation in data and has important application in fraud detection and robust analysis. In this paper, we propose a method in detecting multiple outliers for circular variables in linear functional relationship model. Using the residual values of the Caires and Wyatt model, we applied the hierarchical clustering procedure. With the use of tree diagram, we illustrate the graphical approach of the detection of outlier. A simulation study is done to verify the accuracy of the proposed method. Also, an illustration to a real data set is given to show its practical applicability.
NASA Technical Reports Server (NTRS)
1971-01-01
A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.
NASA Astrophysics Data System (ADS)
Liang, Dong; Song, Yimin; Sun, Tao; Jin, Xueying
2018-03-01
This paper addresses the problem of rigid-flexible coupling dynamic modeling and active control of a novel flexible parallel manipulator (PM) with multiple actuation modes. Firstly, based on the flexible multi-body dynamics theory, the rigid-flexible coupling dynamic model (RFDM) of system is developed by virtue of the augmented Lagrangian multipliers approach. For completeness, the mathematical models of permanent magnet synchronous motor (PMSM) and piezoelectric transducer (PZT) are further established and integrated with the RFDM of mechanical system to formulate the electromechanical coupling dynamic model (ECDM). To achieve the trajectory tracking and vibration suppression, a hierarchical compound control strategy is presented. Within this control strategy, the proportional-differential (PD) feedback controller is employed to realize the trajectory tracking of end-effector, while the strain and strain rate feedback (SSRF) controller is developed to restrain the vibration of the flexible links using PZT. Furthermore, the stability of the control algorithm is demonstrated based on the Lyapunov stability theory. Finally, two simulation case studies are performed to illustrate the effectiveness of the proposed approach. The results indicate that, under the redundant actuation mode, the hierarchical compound control strategy can guarantee the flexible PM achieves singularity-free motion and vibration attenuation within task workspace simultaneously. The systematic methodology proposed in this study can be conveniently extended for the dynamic modeling and efficient controller design of other flexible PMs, especially the emerging ones with multiple actuation modes.
An approach to multiscale modelling with graph grammars.
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-09-01
Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.
An approach to multiscale modelling with graph grammars
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-01-01
Background and Aims Functional–structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. Methods A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Key Results Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. Conclusions The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models. PMID:25134929
Association analysis of multiple traits by an approach of combining P values.
Chen, Lili; Wang, Yong; Zhou, Yajing
2018-03-01
Increasing evidence shows that one variant can affect multiple traits, which is a widespread phenomenon in complex diseases. Joint analysis of multiple traits can increase statistical power of association analysis and uncover the underlying genetic mechanism. Although there are many statistical methods to analyse multiple traits, most of these methods are usually suitable for detecting common variants associated with multiple traits. However, because of low minor allele frequency of rare variant, these methods are not optimal for rare variant association analysis. In this paper, we extend an adaptive combination of P values method (termed ADA) for single trait to test association between multiple traits and rare variants in the given region. For a given region, we use reverse regression model to test each rare variant associated with multiple traits and obtain the P value of single-variant test. Further, we take the weighted combination of these P values as the test statistic. Extensive simulation studies show that our approach is more powerful than several other comparison methods in most cases and is robust to the inclusion of a high proportion of neutral variants and the different directions of effects of causal variants.
Using the Logarithm of Odds to Define a Vector Space on Probabilistic Atlases
Pohl, Kilian M.; Fisher, John; Bouix, Sylvain; Shenton, Martha; McCarley, Robert W.; Grimson, W. Eric L.; Kikinis, Ron; Wells, William M.
2007-01-01
The Logarithm of the Odds ratio (LogOdds) is frequently used in areas such as artificial neural networks, economics, and biology, as an alternative representation of probabilities. Here, we use LogOdds to place probabilistic atlases in a linear vector space. This representation has several useful properties for medical imaging. For example, it not only encodes the shape of multiple anatomical structures but also captures some information concerning uncertainty. We demonstrate that the resulting vector space operations of addition and scalar multiplication have natural probabilistic interpretations. We discuss several examples for placing label maps into the space of LogOdds. First, we relate signed distance maps, a widely used implicit shape representation, to LogOdds and compare it to an alternative that is based on smoothing by spatial Gaussians. We find that the LogOdds approach better preserves shapes in a complex multiple object setting. In the second example, we capture the uncertainty of boundary locations by mapping multiple label maps of the same object into the LogOdds space. Third, we define a framework for non-convex interpolations among atlases that capture different time points in the aging process of a population. We evaluate the accuracy of our representation by generating a deformable shape atlas that captures the variations of anatomical shapes across a population. The deformable atlas is the result of a principal component analysis within the LogOdds space. This atlas is integrated into an existing segmentation approach for MR images. We compare the performance of the resulting implementation in segmenting 20 test cases to a similar approach that uses a more standard shape model that is based on signed distance maps. On this data set, the Bayesian classification model with our new representation outperformed the other approaches in segmenting subcortical structures. PMID:17698403
Maruyama, Rika; Echigoya, Yusuke; Caluseriu, Oana; Aoki, Yoshitsugu; Takeda, Shin'ichi; Yokota, Toshifumi
2017-01-01
Exon-skipping therapy is an emerging approach that uses synthetic DNA-like molecules called antisense oligonucleotides (AONs) to splice out frame-disrupting parts of mRNA, restore the reading frame, and produce truncated yet functional proteins. Multiple exon skipping utilizing a cocktail of AONs can theoretically treat 80-90% of patients with Duchenne muscular dystrophy (DMD). The success of multiple exon skipping by the systemic delivery of a cocktail of AONs called phosphorodiamidate morpholino oligomers (PMOs) in a DMD dog model has made a significant impact on the development of therapeutics for DMD, leading to clinical trials of PMO-based drugs. Here, we describe the systemic delivery of a cocktail of PMOs to skip multiple exons in dystrophic dogs and the evaluation of the efficacies and toxicity in vivo.
Shen, Yanna; Cooper, Gregory F
2012-09-01
This paper investigates Bayesian modeling of known and unknown causes of events in the context of disease-outbreak detection. We introduce a multivariate Bayesian approach that models multiple evidential features of every person in the population. This approach models and detects (1) known diseases (e.g., influenza and anthrax) by using informative prior probabilities and (2) unknown diseases (e.g., a new, highly contagious respiratory virus that has never been seen before) by using relatively non-informative prior probabilities. We report the results of simulation experiments which support that this modeling method can improve the detection of new disease outbreaks in a population. A contribution of this paper is that it introduces a multivariate Bayesian approach for jointly modeling both known and unknown causes of events. Such modeling has general applicability in domains where the space of known causes is incomplete. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Taboo Search: An Approach to the Multiple Minima Problem
NASA Astrophysics Data System (ADS)
Cvijovic, Djurdje; Klinowski, Jacek
1995-02-01
Described here is a method, based on Glover's taboo search for discrete functions, of solving the multiple minima problem for continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimization, this procedure is generally applicable, easy to implement, derivative-free, and conceptually simple.
ERIC Educational Resources Information Center
Wichaidit, Patcharee Rompayom; Wichaidit, Sittichai
2016-01-01
Learning chemistry may be difficult for students for several reasons, such as the abstract nature of many chemistry concepts and the fact that students may view chemistry as irrelevant to their everyday lives. Teaching chemistry in familiar contexts and the use of multiple representations are seen as effective approaches for enhancing students'…
Advanced Modeling and Uncertainty Quantification for Flight Dynamics; Interim Results and Challenges
NASA Technical Reports Server (NTRS)
Hyde, David C.; Shweyk, Kamal M.; Brown, Frank; Shah, Gautam
2014-01-01
As part of the NASA Vehicle Systems Safety Technologies (VSST), Assuring Safe and Effective Aircraft Control Under Hazardous Conditions (Technical Challenge #3), an effort is underway within Boeing Research and Technology (BR&T) to address Advanced Modeling and Uncertainty Quantification for Flight Dynamics (VSST1-7). The scope of the effort is to develop and evaluate advanced multidisciplinary flight dynamics modeling techniques, including integrated uncertainties, to facilitate higher fidelity response characterization of current and future aircraft configurations approaching and during loss-of-control conditions. This approach is to incorporate multiple flight dynamics modeling methods for aerodynamics, structures, and propulsion, including experimental, computational, and analytical. Also to be included are techniques for data integration and uncertainty characterization and quantification. This research shall introduce new and updated multidisciplinary modeling and simulation technologies designed to improve the ability to characterize airplane response in off-nominal flight conditions. The research shall also introduce new techniques for uncertainty modeling that will provide a unified database model comprised of multiple sources, as well as an uncertainty bounds database for each data source such that a full vehicle uncertainty analysis is possible even when approaching or beyond Loss of Control boundaries. Methodologies developed as part of this research shall be instrumental in predicting and mitigating loss of control precursors and events directly linked to causal and contributing factors, such as stall, failures, damage, or icing. The tasks will include utilizing the BR&T Water Tunnel to collect static and dynamic data to be compared to the GTM extended WT database, characterizing flight dynamics in off-nominal conditions, developing tools for structural load estimation under dynamic conditions, devising methods for integrating various modeling elements into a real-time simulation capability, generating techniques for uncertainty modeling that draw data from multiple modeling sources, and providing a unified database model that includes nominal plus increments for each flight condition. This paper presents status of testing in the BR&T water tunnel and analysis of the resulting data and efforts to characterize these data using alternative modeling methods. Program challenges and issues are also presented.
Process Writing and the Internet: Blogs and Ning Networks in the Classroom
ERIC Educational Resources Information Center
Boas, Isabela Villas
2011-01-01
In contrast to the product approach to writing, which is based on studying and replicating textual models, the process approach involves multiple and repeated steps that compel the writer to closely consider the topic, language, purpose for writing, and social reality of an audience. In addition to discussing the benefits of the process approach…
Multiple constraint analysis of regional land-surface carbon flux
D.P. Turner; M. Göckede; B.E. Law; W.D. Ritts; W.B. Cohen; Z. Yang; T. Hudiburg; R. Kennedy; M. Duane
2011-01-01
We applied and compared bottom-up (process model-based) and top-down (atmospheric inversion-based) scaling approaches to evaluate the spatial and temporal patterns of net ecosystem production (NEP) over a 2.5 Ã 105 km2 area (the state of Oregon) in the western United States. Both approaches indicated a carbon sink over this...
A comparative study of serial and parallel aeroelastic computations of wings
NASA Technical Reports Server (NTRS)
Byun, Chansup; Guruswamy, Guru P.
1994-01-01
A procedure for computing the aeroelasticity of wings on parallel multiple-instruction, multiple-data (MIMD) computers is presented. In this procedure, fluids are modeled using Euler equations, and structures are modeled using modal or finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. In the present parallel procedure, each computational domain is scalable. A parallel integration scheme is used to compute aeroelastic responses by solving fluid and structural equations concurrently. The computational efficiency issues of parallel integration of both fluid and structural equations are investigated in detail. This approach, which reduces the total computational time by a factor of almost 2, is demonstrated for a typical aeroelastic wing by using various numbers of processors on the Intel iPSC/860.
A Semiparametric Approach for Composite Functional Mapping of Dynamic Quantitative Traits
Yang, Runqing; Gao, Huijiang; Wang, Xin; Zhang, Ji; Zeng, Zhao-Bang; Wu, Rongling
2007-01-01
Functional mapping has emerged as a powerful tool for mapping quantitative trait loci (QTL) that control developmental patterns of complex dynamic traits. Original functional mapping has been constructed within the context of simple interval mapping, without consideration of separate multiple linked QTL for a dynamic trait. In this article, we present a statistical framework for mapping QTL that affect dynamic traits by capitalizing on the strengths of functional mapping and composite interval mapping. Within this so-called composite functional-mapping framework, functional mapping models the time-dependent genetic effects of a QTL tested within a marker interval using a biologically meaningful parametric function, whereas composite interval mapping models the time-dependent genetic effects of the markers outside the test interval to control the genome background using a flexible nonparametric approach based on Legendre polynomials. Such a semiparametric framework was formulated by a maximum-likelihood model and implemented with the EM algorithm, allowing for the estimation and the test of the mathematical parameters that define the QTL effects and the regression coefficients of the Legendre polynomials that describe the marker effects. Simulation studies were performed to investigate the statistical behavior of composite functional mapping and compare its advantage in separating multiple linked QTL as compared to functional mapping. We used the new mapping approach to analyze a genetic mapping example in rice, leading to the identification of multiple QTL, some of which are linked on the same chromosome, that control the developmental trajectory of leaf age. PMID:17947431
Attention Modulates Spatial Precision in Multiple-Object Tracking.
Srivastava, Nisheeth; Vul, Ed
2016-01-01
We present a computational model of multiple-object tracking that makes trial-level predictions about the allocation of visual attention and the effect of this allocation on observers' ability to track multiple objects simultaneously. This model follows the intuition that increased attention to a location increases the spatial resolution of its internal representation. Using a combination of empirical and computational experiments, we demonstrate the existence of a tight coupling between cognitive and perceptual resources in this task: Low-level tracking of objects generates bottom-up predictions of error likelihood, and high-level attention allocation selectively reduces error probabilities in attended locations while increasing it at non-attended locations. Whereas earlier models of multiple-object tracking have predicted the big picture relationship between stimulus complexity and response accuracy, our approach makes accurate predictions of both the macro-scale effect of target number and velocity on tracking difficulty and micro-scale variations in difficulty across individual trials and targets arising from the idiosyncratic within-trial interactions of targets and distractors. Copyright © 2016 Cognitive Science Society, Inc.
Jiménez, José; García, Emilio J; Llaneza, Luis; Palacios, Vicente; González, Luis Mariano; García-Domínguez, Francisco; Múñoz-Igualada, Jaime; López-Bao, José Vicente
2016-08-01
In many cases, the first step in large-carnivore management is to obtain objective, reliable, and cost-effective estimates of population parameters through procedures that are reproducible over time. However, monitoring predators over large areas is difficult, and the data have a high level of uncertainty. We devised a practical multimethod and multistate modeling approach based on Bayesian hierarchical-site-occupancy models that combined multiple survey methods to estimate different population states for use in monitoring large predators at a regional scale. We used wolves (Canis lupus) as our model species and generated reliable estimates of the number of sites with wolf reproduction (presence of pups). We used 2 wolf data sets from Spain (Western Galicia in 2013 and Asturias in 2004) to test the approach. Based on howling surveys, the naïve estimation (i.e., estimate based only on observations) of the number of sites with reproduction was 9 and 25 sites in Western Galicia and Asturias, respectively. Our model showed 33.4 (SD 9.6) and 34.4 (3.9) sites with wolf reproduction, respectively. The number of occupied sites with wolf reproduction was 0.67 (SD 0.19) and 0.76 (0.11), respectively. This approach can be used to design more cost-effective monitoring programs (i.e., to define the sampling effort needed per site). Our approach should inspire well-coordinated surveys across multiple administrative borders and populations and lead to improved decision making for management of large carnivores on a landscape level. The use of this Bayesian framework provides a simple way to visualize the degree of uncertainty around population-parameter estimates and thus provides managers and stakeholders an intuitive approach to interpreting monitoring results. Our approach can be widely applied to large spatial scales in wildlife monitoring where detection probabilities differ between population states and where several methods are being used to estimate different population parameters. © 2016 Society for Conservation Biology.
A Model for Semantic Equivalence Discovery for Harmonizing Master Data
NASA Astrophysics Data System (ADS)
Piprani, Baba
IT projects often face the challenge of harmonizing metadata and data so as to have a "single" version of the truth. Determining equivalency of multiple data instances against the given type, or set of types, is mandatory in establishing master data legitimacy in a data set that contains multiple incarnations of instances belonging to the same semantic data record . The results of a real-life application define how measuring criteria and equivalence path determination were established via a set of "probes" in conjunction with a score-card approach. There is a need for a suite of supporting models to help determine master data equivalency towards entity resolution—including mapping models, transform models, selection models, match models, an audit and control model, a scorecard model, a rating model. An ORM schema defines the set of supporting models along with their incarnation into an attribute based model as implemented in an RDBMS.
NASA Langley's Approach to the Sandia's Structural Dynamics Challenge Problem
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Kenny, Sean P.; Crespo, Luis G.; Elliott, Kenny B.
2007-01-01
The objective of this challenge is to develop a data-based probabilistic model of uncertainty to predict the behavior of subsystems (payloads) by themselves and while coupled to a primary (target) system. Although this type of analysis is routinely performed and representative of issues faced in real-world system design and integration, there are still several key technical challenges that must be addressed when analyzing uncertain interconnected systems. For example, one key technical challenge is related to the fact that there is limited data on target configurations. Moreover, it is typical to have multiple data sets from experiments conducted at the subsystem level, but often samples sizes are not sufficient to compute high confidence statistics. In this challenge problem additional constraints are placed as ground rules for the participants. One such rule is that mathematical models of the subsystem are limited to linear approximations of the nonlinear physics of the problem at hand. Also, participants are constrained to use these models and the multiple data sets to make predictions about the target system response under completely different input conditions. Our approach involved initially the screening of several different methods. Three of the ones considered are presented herein. The first one is based on the transformation of the modal data to an orthogonal space where the mean and covariance of the data are matched by the model. The other two approaches worked solutions in physical space where the uncertain parameter set is made of masses, stiffnesses and damping coefficients; one matches confidence intervals of low order moments of the statistics via optimization while the second one uses a Kernel density estimation approach. The paper will touch on all the approaches, lessons learned, validation 1 metrics and their comparison, data quantity restriction, and assumptions/limitations of each approach. Keywords: Probabilistic modeling, model validation, uncertainty quantification, kernel density
DOE Office of Scientific and Technical Information (OSTI.GOV)
B.E. Law; D. Turner; M. Goeckede
GOAL: To develop and apply an approach to quantify and understand the regional carbon balance of the west coast states for the North American Carbon Program. OBJECTIVE: As an element of NACP research, the proposed investigation is a two pronged approach that derives and evaluates a regional carbon (C) budget for Oregon, Washington, and California. Objectives are (1) Use multiple data sources, including AmeriFlux data, inventories, and multispectral remote sensing data to investigate trends in carbon storage and exchanges of CO2 and water with variation in climate and disturbance history; (2) Develop and apply regional modeling that relies on thesemore » multiple data sources to reduce uncertainty in spatial estimates of carbon storage and NEP, and relative contributions of terrestrial ecosystems and anthropogenic emissions to atmospheric CO2 in the region; (3) Model terrestrial carbon processes across the region, using the Biome-BGC terrestrial ecosystem model, and an atmospheric inverse modeling approach to estimate variation in rate and timing of terrestrial uptake and feedbacks to the atmosphere in response to climate and disturbance. APPROACH: In performing the regional analysis, the research plan for the bottom-up approach uses a nested hierarchy of observations that include AmeriFlux data (i.e., net ecosystem exchange (NEE) from eddy covariance and associated biometric data), intermediate intensity inventories from an extended plot array partially developed from the PI's previous research, Forest Service FIA and CVS inventory data, time since disturbance, disturbance type, and cover type from Landsat developed in this study, and productivity estimates from MODIS algorithms. The BIOME-BGC model is used to integrate information from these sources and quantify C balance across the region. The inverse modeling approach assimilates flux data from AmeriFlux sites, high precision CO2 concentration data from AmeriFlux towers and four new calibrated CO2 sites, reanalysis meteorology and various remote sensing products to generate statewide estimates of biosphere carbon exchange from the atmospheric point of view.« less
A new approach to estimate parameters of speciation models with application to apes.
Becquet, Celine; Przeworski, Molly
2007-10-01
How populations diverge and give rise to distinct species remains a fundamental question in evolutionary biology, with important implications for a wide range of fields, from conservation genetics to human evolution. A promising approach is to estimate parameters of simple speciation models using polymorphism data from multiple loci. Existing methods, however, make a number of assumptions that severely limit their applicability, notably, no gene flow after the populations split and no intralocus recombination. To overcome these limitations, we developed a new Markov chain Monte Carlo method to estimate parameters of an isolation-migration model. The approach uses summaries of polymorphism data at multiple loci surveyed in a pair of diverging populations or closely related species and, importantly, allows for intralocus recombination. To illustrate its potential, we applied it to extensive polymorphism data from populations and species of apes, whose demographic histories are largely unknown. The isolation-migration model appears to provide a reasonable fit to the data. It suggests that the two chimpanzee species became reproductively isolated in allopatry approximately 850 Kya, while Western and Central chimpanzee populations split approximately 440 Kya but continued to exchange migrants. Similarly, Eastern and Western gorillas and Sumatran and Bornean orangutans appear to have experienced gene flow since their splits approximately 90 and over 250 Kya, respectively.
Making Philosophy of Science Education Practical for Science Teachers
NASA Astrophysics Data System (ADS)
Janssen, F. J. J. M.; van Berkel, B.
2015-04-01
Philosophy of science education can play a vital role in the preparation and professional development of science teachers. In order to fulfill this role a philosophy of science education should be made practical for teachers. First, multiple and inherently incomplete philosophies on the teacher and teaching on what, how and why should be integrated. In this paper we describe our philosophy of science education (ASSET approach) which is composed of bounded rationalism as a guideline for understanding teachers' practical reasoning, liberal education underlying the why of teaching, scientific perspectivism as guideline for the what and educational social constructivism as guiding choices about the how of science education. Integration of multiple philosophies into a coherent philosophy of science education is necessary but not sufficient to make it practical for teachers. Philosophies are still formulated at a too abstract level to guide teachers' practical reasoning. For this purpose, a heuristic model must be developed on an intermediate level of abstraction that will provide teachers with a bridge between these abstract ideas and their specific teaching situation. We have developed and validated such a heuristic model, the CLASS model in order to complement our ASSET approach. We illustrate how science teachers use the ASSET approach and the CLASS model to make choices about the what, the how and the why of science teaching.
Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E
2012-03-01
In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.
Protein fold recognition using geometric kernel data fusion.
Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves
2014-07-01
Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks are publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
Zuo, Erwei; Cai, Yi-Jun; Li, Kui; Wei, Yu; Wang, Bang-An; Sun, Yidi; Liu, Zhen; Liu, Jiwei; Hu, Xinde; Wei, Wei; Huo, Xiaona; Shi, Linyu; Tang, Cheng; Liang, Dan; Wang, Yan; Nie, Yan-Hong; Zhang, Chen-Chen; Yao, Xuan; Wang, Xing; Zhou, Changyang; Ying, Wenqin; Wang, Qifang; Chen, Ren-Chao; Shen, Qi; Xu, Guo-Liang; Li, Jinsong; Sun, Qiang; Xiong, Zhi-Qi; Yang, Hui
2017-07-01
The CRISPR/Cas9 system is an efficient gene-editing method, but the majority of gene-edited animals showed mosaicism, with editing occurring only in a portion of cells. Here we show that single gene or multiple genes can be completely knocked out in mouse and monkey embryos by zygotic injection of Cas9 mRNA and multiple adjacent single-guide RNAs (spaced 10-200 bp apart) that target only a single key exon of each gene. Phenotypic analysis of F0 mice following targeted deletion of eight genes on the Y chromosome individually demonstrated the robustness of this approach in generating knockout mice. Importantly, this approach delivers complete gene knockout at high efficiencies (100% on Arntl and 91% on Prrt2) in monkey embryos. Finally, we could generate a complete Prrt2 knockout monkey in a single step, demonstrating the usefulness of this approach in rapidly establishing gene-edited monkey models.
NASA Astrophysics Data System (ADS)
Zeng, Wenhui; Yi, Jin; Rao, Xiao; Zheng, Yun
2017-11-01
In this article, collision-avoidance path planning for multiple car-like robots with variable motion is formulated as a two-stage objective optimization problem minimizing both the total length of all paths and the task's completion time. Accordingly, a new approach based on Pythagorean Hodograph (PH) curves and Modified Harmony Search algorithm is proposed to solve the two-stage path-planning problem subject to kinematic constraints such as velocity, acceleration, and minimum turning radius. First, a method of path planning based on PH curves for a single robot is proposed. Second, a mathematical model of the two-stage path-planning problem for multiple car-like robots with variable motion subject to kinematic constraints is constructed that the first-stage minimizes the total length of all paths and the second-stage minimizes the task's completion time. Finally, a modified harmony search algorithm is applied to solve the two-stage optimization problem. A set of experiments demonstrate the effectiveness of the proposed approach.
Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.
2013-01-01
Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738
Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M
2015-11-01
This paper proposes a novel model-free trajectory tracking of multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforward without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed in a set of optimization problems assigned to each separate single-input single-output control channel that ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.
Classical Michaelis-Menten and system theory approach to modeling metabolite formation kinetics.
Popović, Jovan
2004-01-01
When single doses of drug are administered and kinetics are linear, techniques, which are based on the compartment approach and the linear system theory approach, in modeling the formation of the metabolite from the parent drug are proposed. Unlike the purpose-specific compartment approach, the methodical, conceptual and computational uniformity in modeling various linear biomedical systems is the dominant characteristic of the linear system approach technology. Saturation of the metabolic reaction results in nonlinear kinetics according to the Michaelis-Menten equation. The two compartment open model with Michaelis-Menten elimination kinetics is theorethicaly basic when single doses of drug are administered. To simulate data or to fit real data using this model, one must resort to numerical integration. A biomathematical model for multiple dosage regimen calculations of nonlinear metabolic systems in steady-state and a working example with phenytoin are presented. High correlation between phenytoin steady-state serum levels calculated from individual Km and Vmax values in the 15 adult epileptic outpatients and the observed levels at the third adjustment of phenytoin daily dose (r=0.961, p<0.01) were found.
Predicting trauma patient mortality: ICD [or ICD-10-AM] versus AIS based approaches.
Willis, Cameron D; Gabbe, Belinda J; Jolley, Damien; Harrison, James E; Cameron, Peter A
2010-11-01
The International Classification of Diseases Injury Severity Score (ICISS) has been proposed as an International Classification of Diseases (ICD)-10-based alternative to mortality prediction tools that use Abbreviated Injury Scale (AIS) data, including the Trauma and Injury Severity Score (TRISS). To date, studies have not examined the performance of ICISS using Australian trauma registry data. This study aimed to compare the performance of ICISS with other mortality prediction tools in an Australian trauma registry. This was a retrospective review of prospectively collected data from the Victorian State Trauma Registry. A training dataset was created for model development and a validation dataset for evaluation. The multiplicative ICISS model was compared with a worst injury ICISS approach, Victorian TRISS (V-TRISS, using local coefficients), maximum AIS severity and a multivariable model including ICD-10-AM codes as predictors. Models were investigated for discrimination (C-statistic) and calibration (Hosmer-Lemeshow statistic). The multivariable approach had the highest level of discrimination (C-statistic 0.90) and calibration (H-L 7.65, P= 0.468). Worst injury ICISS, V-TRISS and maximum AIS had similar performance. The multiplicative ICISS produced the lowest level of discrimination (C-statistic 0.80) and poorest calibration (H-L 50.23, P < 0.001). The performance of ICISS may be affected by the data used to develop estimates, the ICD version employed, the methods for deriving estimates and the inclusion of covariates. In this analysis, a multivariable approach using ICD-10-AM codes was the best-performing method. A multivariable ICISS approach may therefore be a useful alternative to AIS-based methods and may have comparable predictive performance to locally derived TRISS models. © 2010 The Authors. ANZ Journal of Surgery © 2010 Royal Australasian College of Surgeons.
Clinical time series prediction: Toward a hierarchical dynamical system framework.
Liu, Zitao; Hauskrecht, Milos
2015-09-01
Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
Terrain modeling for microwave landing system
NASA Technical Reports Server (NTRS)
Poulose, M. M.
1991-01-01
A powerful analytical approach for evaluating the terrain effects on a microwave landing system (MLS) is presented. The approach combines a multiplate model with a powerful and exhaustive ray tracing technique and an accurate formulation for estimating the electromagnetic fields due to the antenna array in the presence of terrain. Both uniform theory of diffraction (UTD) and impedance UTD techniques have been employed to evaluate these fields. Innovative techniques are introduced at each stage to make the model versatile to handle most general terrain contours and also to reduce the computational requirement to a minimum. The model is applied to several terrain geometries, and the results are discussed.
Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A
2014-01-01
Human emotion recognition (HER) allows the assessment of an affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a high range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables or dimensions that measure distinct aspects of the emotion. One of the most common of such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled independently, the dimensions in these dimensional spaces. In this paper, we study the effect of modelling the output dimensions simultaneously and show experimentally the advantages in modeling them in this way. We consider a multimodal approach by including features from the Electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple output regressor based on support vector machines. We also include an stage of feature selection that is developed within an embedded approach known as Recursive Feature Elimination (RFE), proposed initially for SVM. The results show that several features can be eliminated using the multiple output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals that are more informative into the arousal and valence space discrimination are the EEG, Electrooculogram/Electromiogram (EOG/EMG) and the Galvanic Skin Response (GSR).
Nelson, Suchitra; Albert, Jeffrey M.
2013-01-01
Mediators are intermediate variables in the causal pathway between an exposure and an outcome. Mediation analysis investigates the extent to which exposure effects occur through these variables, thus revealing causal mechanisms. In this paper, we consider the estimation of the mediation effect when the outcome is binary and multiple mediators of different types exist. We give a precise definition of the total mediation effect as well as decomposed mediation effects through individual or sets of mediators using the potential outcomes framework. We formulate a model of joint distribution (probit-normal) using continuous latent variables for any binary mediators to account for correlations among multiple mediators. A mediation formula approach is proposed to estimate the total mediation effect and decomposed mediation effects based on this parametric model. Estimation of mediation effects through individual or subsets of mediators requires an assumption involving the joint distribution of multiple counterfactuals. We conduct a simulation study that demonstrates low bias of mediation effect estimators for two-mediator models with various combinations of mediator types. The results also show that the power to detect a non-zero total mediation effect increases as the correlation coefficient between two mediators increases, while power for individual mediation effects reaches a maximum when the mediators are uncorrelated. We illustrate our approach by applying it to a retrospective cohort study of dental caries in adolescents with low and high socioeconomic status. Sensitivity analysis is performed to assess the robustness of conclusions regarding mediation effects when the assumption of no unmeasured mediator-outcome confounders is violated. PMID:23650048
Wang, Wei; Nelson, Suchitra; Albert, Jeffrey M
2013-10-30
Mediators are intermediate variables in the causal pathway between an exposure and an outcome. Mediation analysis investigates the extent to which exposure effects occur through these variables, thus revealing causal mechanisms. In this paper, we consider the estimation of the mediation effect when the outcome is binary and multiple mediators of different types exist. We give a precise definition of the total mediation effect as well as decomposed mediation effects through individual or sets of mediators using the potential outcomes framework. We formulate a model of joint distribution (probit-normal) using continuous latent variables for any binary mediators to account for correlations among multiple mediators. A mediation formula approach is proposed to estimate the total mediation effect and decomposed mediation effects based on this parametric model. Estimation of mediation effects through individual or subsets of mediators requires an assumption involving the joint distribution of multiple counterfactuals. We conduct a simulation study that demonstrates low bias of mediation effect estimators for two-mediator models with various combinations of mediator types. The results also show that the power to detect a nonzero total mediation effect increases as the correlation coefficient between two mediators increases, whereas power for individual mediation effects reaches a maximum when the mediators are uncorrelated. We illustrate our approach by applying it to a retrospective cohort study of dental caries in adolescents with low and high socioeconomic status. Sensitivity analysis is performed to assess the robustness of conclusions regarding mediation effects when the assumption of no unmeasured mediator-outcome confounders is violated. Copyright © 2013 John Wiley & Sons, Ltd.
A new multiple air beam approach for in-process form error optical measurement
NASA Astrophysics Data System (ADS)
Gao, Y.; Li, R.
2018-07-01
In-process measurement can provide feedback for the control of workpiece precision in terms of size, roughness and, in particular, mid-spatial frequency form error. Optical measurement methods are of the non-contact type and possess high precision, as required for in-process form error measurement. In precision machining, coolant is commonly used to reduce heat generation and thermal deformation on the workpiece surface. However, the use of coolant will induce an opaque coolant barrier if optical measurement methods are used. In this paper, a new multiple air beam approach is proposed. The new approach permits the displacement of coolant from any direction and with a large thickness, i.e. with a large amount of coolant. The model, the working principle, and the key features of the new approach are presented. Based on the proposed new approach, a new in-process form error optical measurement system is developed. The coolant removal capability and the performance of this new multiple air beam approach are assessed. The experimental results show that the workpiece surface y(x, z) can be measured successfully with standard deviation up to 0.3011 µm even under a large amount of coolant, such that the coolant thickness is 15 mm. This means a relative uncertainty of 2σ up to 4.35% and the workpiece surface is deeply immersed in the opaque coolant. The results also show that, in terms of coolant removal capability, air supply and air velocity, the proposed new approach improves by, respectively, 3.3, 1.3 and 5.3 times on the previous single air beam approach. The results demonstrate the significant improvements brought by the new multiple air beam method together with the developed measurement system.
NASA Astrophysics Data System (ADS)
Mfumu Kihumba, Antoine; Ndembo Longo, Jean; Vanclooster, Marnik
2016-03-01
A multivariate statistical modelling approach was applied to explain the anthropogenic pressure of nitrate pollution on the Kinshasa groundwater body (Democratic Republic of Congo). Multiple regression and regression tree models were compared and used to identify major environmental factors that control the groundwater nitrate concentration in this region. The analyses were made in terms of physical attributes related to the topography, land use, geology and hydrogeology in the capture zone of different groundwater sampling stations. For the nitrate data, groundwater datasets from two different surveys were used. The statistical models identified the topography, the residential area, the service land (cemetery), and the surface-water land-use classes as major factors explaining nitrate occurrence in the groundwater. Also, groundwater nitrate pollution depends not on one single factor but on the combined influence of factors representing nitrogen loading sources and aquifer susceptibility characteristics. The groundwater nitrate pressure was better predicted with the regression tree model than with the multiple regression model. Furthermore, the results elucidated the sensitivity of the model performance towards the method of delineation of the capture zones. For pollution modelling at the monitoring points, therefore, it is better to identify capture-zone shapes based on a conceptual hydrogeological model rather than to adopt arbitrary circular capture zones.
ERIC Educational Resources Information Center
Ferrando, Pere J.
2008-01-01
This paper develops results and procedures for obtaining linear composites of factor scores that maximize: (a) test information, and (b) validity with respect to external variables in the multiple factor analysis (FA) model. I treat FA as a multidimensional item response theory model, and use Ackerman's multidimensional information approach based…
A probabilistic model of cross-categorization.
Shafto, Patrick; Kemp, Charles; Mansinghka, Vikash; Tenenbaum, Joshua B
2011-07-01
Most natural domains can be represented in multiple ways: we can categorize foods in terms of their nutritional content or social role, animals in terms of their taxonomic groupings or their ecological niches, and musical instruments in terms of their taxonomic categories or social uses. Previous approaches to modeling human categorization have largely ignored the problem of cross-categorization, focusing on learning just a single system of categories that explains all of the features. Cross-categorization presents a difficult problem: how can we infer categories without first knowing which features the categories are meant to explain? We present a novel model that suggests that human cross-categorization is a result of joint inference about multiple systems of categories and the features that they explain. We also formalize two commonly proposed alternative explanations for cross-categorization behavior: a features-first and an objects-first approach. The features-first approach suggests that cross-categorization is a consequence of attentional processes, where features are selected by an attentional mechanism first and categories are derived second. The objects-first approach suggests that cross-categorization is a consequence of repeated, sequential attempts to explain features, where categories are derived first, then features that are poorly explained are recategorized. We present two sets of simulations and experiments testing the models' predictions about human categorization. We find that an approach based on joint inference provides the best fit to human categorization behavior, and we suggest that a full account of human category learning will need to incorporate something akin to these capabilities. Copyright © 2011 Elsevier B.V. All rights reserved.
Exploring the use of multiple analogical models when teaching and learning chemical equilibrium
NASA Astrophysics Data System (ADS)
Harrison, Allan G.; de Jong, Onno
2005-12-01
This study describes the multiple analogical models used to introduce and teach Grade 12 chemical equilibrium. We examine the teacher's reasons for using models, explain each model's development during the lessons, and analyze the understandings students derived from the models. A case study approach was used and the data were drawn from the observation of three consecutive Grade 12 lessons on chemical equilibrium, pre- and post-lesson interviews, and delayed student interviews. The key analogical models used in teaching were: the school dance; the sugar in a teacup; the pot of curry; and the busy highway. The lesson and interview data were subject to multiple, independent analyses and yielded the following outcomes: The teacher planned to use the students' prior knowledge wherever possible and he responded to student questions with stories and extended and enriched analogies. He planned to discuss where each analogy broke down but did not. The students enjoyed the teaching but built variable mental models of equilibrium and some of their analogical mappings were unreliable. A female student disliked masculine analogies, other students tended to see elements of the multiple models in isolation, and some did not recognize all the analogical mappings embedded in the teaching plan. Most students learned that equilibrium reactions are dynamic, occur in closed systems, and the forward and reverse reactions are balanced. We recommend the use of multiple analogies like these and insist that teachers always show where the analogy breaks down and carefully negotiate the conceptual outcomes.
The Primary Care Behavioral Health (PCBH) Model: An Overview and Operational Definition.
Reiter, Jeffrey T; Dobmeyer, Anne C; Hunter, Christopher L
2018-06-01
The Primary Care Behavioral Health (PCBH) model is a prominent approach to the integration of behavioral health services into primary care settings. Implementation of the PCBH model has grown over the past two decades, yet research and training efforts have been slowed by inconsistent terminology and lack of a concise, operationalized definition of the model and its key components. This article provides the first concise operationalized definition of the PCBH model, developed from examination of multiple published resources and consultation with nationally recognized PCBH model experts. The definition frames the model as a team-based approach to managing biopsychosocial issues that present in primary care, with the over-arching goal of improving primary care in general. The article provides a description of the key components and strategies used in the model, the rationale for those strategies, a brief comparison of this model to other integration approaches, a focused summary of PCBH model outcomes, and an overview of common challenges to implementing the model.
Mark-recapture with multiple, non-invasive marks.
Bonner, Simon J; Holmberg, Jason
2013-09-01
Non-invasive marks, including pigmentation patterns, acquired scars, and genetic markers, are often used to identify individuals in mark-recapture experiments. If animals in a population can be identified from multiple, non-invasive marks then some individuals may be counted twice in the observed data. Analyzing the observed histories without accounting for these errors will provide incorrect inference about the population dynamics. Previous approaches to this problem include modeling data from only one mark and combining estimators obtained from each mark separately assuming that they are independent. Motivated by the analysis of data from the ECOCEAN online whale shark (Rhincodon typus) catalog, we describe a Bayesian method to analyze data from multiple, non-invasive marks that is based on the latent-multinomial model of Link et al. (2010, Biometrics 66, 178-185). Further to this, we describe a simplification of the Markov chain Monte Carlo algorithm of Link et al. (2010, Biometrics 66, 178-185) that leads to more efficient computation. We present results from the analysis of the ECOCEAN whale shark data and from simulation studies comparing our method with the previous approaches. © 2013, The International Biometric Society.
McGrath, Lauren M; Pennington, Bruce F; Shanahan, Michelle A; Santerre-Lemmon, Laura E; Barnard, Holly D; Willcutt, Erik G; Defries, John C; Olson, Richard K
2011-05-01
This study tests a multiple cognitive deficit model of reading disability (RD), attention-deficit/hyperactivity disorder (ADHD), and their comorbidity. A structural equation model (SEM) of multiple cognitive risk factors and symptom outcome variables was constructed. The model included phonological awareness as a unique predictor of RD and response inhibition as a unique predictor of ADHD. Processing speed, naming speed, and verbal working memory were modeled as potential shared cognitive deficits. Model fit indices from the SEM indicated satisfactory fit. Closer inspection of the path weights revealed that processing speed was the only cognitive variable with significant unique relationships to RD and ADHD dimensions, particularly inattention. Moreover, the significant correlation between reading and inattention was reduced to non-significance when processing speed was included in the model, suggesting that processing speed primarily accounted for the phenotypic correlation (or comorbidity) between reading and inattention. This study illustrates the power of a multiple deficit approach to complex developmental disorders and psychopathologies, particularly for exploring comorbidities. The theoretical role of processing speed in the developmental pathways of RD and ADHD and directions for future research are discussed. © 2010 The Authors. Journal of Child Psychology and Psychiatry © 2010 Association for Child and Adolescent Mental Health.
An efficient approach to ARMA modeling of biological systems with multiple inputs and delays
NASA Technical Reports Server (NTRS)
Perrott, M. H.; Cohen, R. J.
1996-01-01
This paper presents a new approach to AutoRegressive Moving Average (ARMA or ARX) modeling which automatically seeks the best model order to represent investigated linear, time invariant systems using their input/output data. The algorithm seeks the ARMA parameterization which accounts for variability in the output of the system due to input activity and contains the fewest number of parameters required to do so. The unique characteristics of the proposed system identification algorithm are its simplicity and efficiency in handling systems with delays and multiple inputs. We present results of applying the algorithm to simulated data and experimental biological data In addition, a technique for assessing the error associated with the impulse responses calculated from estimated ARMA parameterizations is presented. The mapping from ARMA coefficients to impulse response estimates is nonlinear, which complicates any effort to construct confidence bounds for the obtained impulse responses. Here a method for obtaining a linearization of this mapping is derived, which leads to a simple procedure to approximate the confidence bounds.
Overcoming multicollinearity in multiple regression using correlation coefficient
NASA Astrophysics Data System (ADS)
Zainodin, H. J.; Yap, S. J.
2013-09-01
Multicollinearity happens when there are high correlations among independent variables. In this case, it would be difficult to distinguish between the contributions of these independent variables to that of the dependent variable as they may compete to explain much of the similar variance. Besides, the problem of multicollinearity also violates the assumption of multiple regression: that there is no collinearity among the possible independent variables. Thus, an alternative approach is introduced in overcoming the multicollinearity problem in achieving a well represented model eventually. This approach is accomplished by removing the multicollinearity source variables on the basis of the correlation coefficient values based on full correlation matrix. Using the full correlation matrix can facilitate the implementation of Excel function in removing the multicollinearity source variables. It is found that this procedure is easier and time-saving especially when dealing with greater number of independent variables in a model and a large number of all possible models. Hence, in this paper detailed insight of the procedure is shown, compared and implemented.
Time-resolved non-sequential ray-tracing modelling of non-line-of-sight picosecond pulse LIDAR
NASA Astrophysics Data System (ADS)
Sroka, Adam; Chan, Susan; Warburton, Ryan; Gariepy, Genevieve; Henderson, Robert; Leach, Jonathan; Faccio, Daniele; Lee, Stephen T.
2016-05-01
The ability to detect motion and to track a moving object that is hidden around a corner or behind a wall provides a crucial advantage when physically going around the obstacle is impossible or dangerous. One recently demonstrated approach to achieving this goal makes use of non-line-of-sight picosecond pulse laser ranging. This approach has recently become interesting due to the availability of single-photon avalanche diode (SPAD) receivers with picosecond time resolution. We present a time-resolved non-sequential ray-tracing model and its application to indirect line-of-sight detection of moving targets. The model makes use of the Zemax optical design programme's capabilities in stray light analysis where it traces large numbers of rays through multiple random scattering events in a 3D non-sequential environment. Our model then reconstructs the generated multi-segment ray paths and adds temporal analysis. Validation of this model against experimental results is shown. We then exercise the model to explore the limits placed on system design by available laser sources and detectors. In particular we detail the requirements on the laser's pulse energy, duration and repetition rate, and on the receiver's temporal response and sensitivity. These are discussed in terms of the resulting implications for achievable range, resolution and measurement time while retaining eye-safety with this technique. Finally, the model is used to examine potential extensions to the experimental system that may allow for increased localisation of the position of the detected moving object, such as the inclusion of multiple detectors and/or multiple emitters.
Two Formal Gas Models For Multi-Agent Sweeping and Obstacle Avoidance
NASA Technical Reports Server (NTRS)
Kerr, Wesley; Spears, Diana; Spears, William; Thayer, David
2004-01-01
The task addressed here is a dynamic search through a bounded region, while avoiding multiple large obstacles, such as buildings. In the case of limited sensors and communication, maintaining spatial coverage - especially after passing the obstacles - is a challenging problem. Here, we investigate two physics-based approaches to solving this task with multiple simulated mobile robots, one based on artificial forces and the other based on the kinetic theory of gases. The desired behavior is achieved with both methods, and a comparison is made between them. Because both approaches are physics-based, formal assurances about the multi-robot behavior are straightforward, and are included in the paper.
Distributed Finite-Time Cooperative Control of Multiple High-Order Nonholonomic Mobile Robots.
Du, Haibo; Wen, Guanghui; Cheng, Yingying; He, Yigang; Jia, Ruting
2017-12-01
The consensus problem of multiple nonholonomic mobile robots in the form of high-order chained structure is considered in this paper. Based on the model features and the finite-time control technique, a finite-time cooperative controller is explicitly constructed which guarantees that the states consensus is achieved in a finite time. As an application of the proposed results, finite-time formation control of multiple wheeled mobile robots is studied and a finite-time formation control algorithm is proposed. To show effectiveness of the proposed approach, a simulation example is given.
Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe
2013-11-01
In this article, we present a newly designed inverse umbrella surface aerator, and tested its performance in driving flow of an oxidation ditch. Results show that it has a better performance in driving the oxidation ditch than the original one with higher average velocity and more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. The improved momentum source term approach to simulate the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four kinds of turbulent models were investigated with the approach, including the standard k - ɛ model, RNG k - ɛ model, realizable k - ɛ model, and Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame approach (MRF) and sliding mesh approach (SM). Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF, close to SM. It is also found that the momentum source term approach has lower computational expenses, is simpler to preprocess, and is easier to use.
Classroom Strategies Coaching Model: Integration of Formative Assessment and Instructional Coaching
ERIC Educational Resources Information Center
Reddy, Linda A.; Dudek, Christopher M.; Lekwa, Adam
2017-01-01
This article describes the theory, key components, and empirical support for the Classroom Strategies Coaching (CSC) Model, a data-driven coaching approach that systematically integrates data from multiple observations to identify teacher practice needs and goals, design practice plans, and evaluate progress towards goals. The primary aim of the…
A Framework for Model-Based Inquiry through Agent-Based Programming
ERIC Educational Resources Information Center
Xiang, Lin; Passmore, Cynthia
2015-01-01
There has been increased recognition in the past decades that model-based inquiry (MBI) is a promising approach for cultivating deep understandings by helping students unite phenomena and underlying mechanisms. Although multiple technology tools have been used to improve the effectiveness of MBI, there are not enough detailed examinations of how…
"It's about Improving My Practice": The Learner Experience of Real-Time Coaching
ERIC Educational Resources Information Center
Sharplin, Erica J.; Stahl, Garth; Kehrwald, Ben
2016-01-01
This article reports on pre-service teachers' experience of the Real-Time Coaching model, an innovative technology-based approach to teacher training. The Real-Time Coaching model uses multiple feedback cycles via wireless technology to develop within pre-service teachers the specific skills and mindset toward continual improvement. Results of…
Spatial scaling and multi-model inference in landscape genetics: Martes americana in northern Idaho
Tzeidle N. Wasserman; Samuel A. Cushman; Michael K. Schwartz; David O. Wallin
2010-01-01
Individual-based analyses relating landscape structure to genetic distances across complex landscapes enable rigorous evaluation of multiple alternative hypotheses linking landscape structure to gene flow. We utilize two extensions to increase the rigor of the individual-based causal modeling approach to inferring relationships between landscape patterns and gene flow...
The weighted priors approach for combining expert opinions in logistic regression experiments
Quinlan, Kevin R.; Anderson-Cook, Christine M.; Myers, Kary L.
2017-04-24
When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal in this paper is to incorporate all information from the experts when choosing a design about which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where there are several potentially non-overlapping priors under consideration.more » While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust when compared to other reasonable design choices. Finally, we illustrate the method through multiple scenarios and a motivating example. Additional figures for this article are available in the online supplementary information.« less
A study of pilot modeling in multi-controller tasks
NASA Technical Reports Server (NTRS)
Whitbeck, R. F.; Knight, J. R.
1972-01-01
A modeling approach, which utilizes a matrix of transfer functions to describe the human pilot in multiple input, multiple output control situations, is studied. The approach used was to extend a well established scalar Wiener-Hopf minimization technique to the matrix case and then study, via a series of experiments, the data requirements when only finite record lengths are available. One of these experiments was a two-controller roll tracking experiment designed to force the pilot to use rudder in order to coordinate and reduce the effects of aileron yaw. One model was computed for the case where the signals used to generate the spectral matrix are error and bank angle while another model was computed for the case where error and yaw angle are the inputs. Several anomalies were observed to be present in the experimental data. These are defined by the descriptive terms roll up, break up, and roll down. Due to these algorithm induced anomalies, the frequency band over which reliable estimates of power spectra can be achieved is considerably less than predicted by the sampling theorem.
No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.
Liu, Tsung-Jung; Liu, Kuan-Hsien
2018-03-01
A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. The scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one. They turn out to have better performances than the original single-scale method. Because of having features from five different domains at multiple image scales and using the outputs (scores) from selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They also can be used on the evaluation of images with authentic distortions. The extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.
The weighted priors approach for combining expert opinions in logistic regression experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinlan, Kevin R.; Anderson-Cook, Christine M.; Myers, Kary L.
When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal in this paper is to incorporate all information from the experts when choosing a design about which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where there are several potentially non-overlapping priors under consideration.more » While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust when compared to other reasonable design choices. Finally, we illustrate the method through multiple scenarios and a motivating example. Additional figures for this article are available in the online supplementary information.« less
Yu, Guozhi; Hozé, Nathanaël; Rolff, Jens
2016-01-01
Antimicrobial peptides (AMPs) and antibiotics reduce the net growth rate of bacterial populations they target. It is relevant to understand if effects of multiple antimicrobials are synergistic or antagonistic, in particular for AMP responses, because naturally occurring responses involve multiple AMPs. There are several competing proposals describing how multiple types of antimicrobials add up when applied in combination, such as Loewe additivity or Bliss independence. These additivity terms are defined ad hoc from abstract principles explaining the supposed interaction between the antimicrobials. Here, we link these ad hoc combination terms to a mathematical model that represents the dynamics of antimicrobial molecules hitting targets on bacterial cells. In this multi-hit model, bacteria are killed when a certain number of targets are hit by antimicrobials. Using this bottom-up approach reveals that Bliss independence should be the model of choice if no interaction between antimicrobial molecules is expected. Loewe additivity, on the other hand, describes scenarios in which antimicrobials affect the same components of the cell, i.e. are not acting independently. While our approach idealizes the dynamics of antimicrobials, it provides a conceptual underpinning of the additivity terms. The choice of the additivity term is essential to determine synergy or antagonism of antimicrobials. This article is part of the themed issue ‘Evolutionary ecology of arthropod antimicrobial peptides’. PMID:27160596
NASA Astrophysics Data System (ADS)
Parker, L. N.; Zank, G. P.
2013-12-01
Successful forecasting of energetic particle events in space weather models require algorithms for correctly predicting the spectrum of ions accelerated from a background population of charged particles. We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models (Protheroe and Stanev, 1998; Moraal and Axford, 1983; Ball and Kirk, 1992; Drury et al., 1999) in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles outside the box (Melrose and Pope, 1993; Zank et al., 2000). We adiabatically decompress the accelerated particle distribution between each shock by either the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000) where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (Emax) appropriate for quasi-parallel and quasi-perpendicular shocks (Zank et al., 2000, 2006; Dosch and Shalchi, 2010) and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).
Systems thinking: what business modeling can do for public health.
Williams, Warren; Lyalin, David; Wingo, Phyllis A
2005-01-01
Today's public health programs are complex business systems with multiple levels of collaborating federal, state, and local entities. The use of proven systems engineering modeling techniques to analyze, align, and streamline public health operations is in the beginning stages. The authors review the initial business modeling efforts in immunization and cancer registries and present a case to broadly apply business modeling approaches to analyze and improve public health processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adam, J. C.; Stephens, J. C.; Chung, Serena
As managers of agricultural and natural resources are confronted with uncertainties in global change impacts, the complexities associated with the interconnected cycling of nitrogen, carbon, and water present daunting management challenges. Existing models provide detailed information on specific sub-systems (land, air, water, economics, etc). An increasing awareness of the unintended consequences of management decisions resulting from interconnectedness of these sub-systems, however, necessitates coupled regional earth system models (EaSMs). Decision makers’ needs and priorities can be integrated into the model design and development processes to enhance decision-making relevance and "usability" of EaSMs. BioEarth is a current research initiative with a focusmore » on the U.S. Pacific Northwest region that explores the coupling of multiple stand-alone EaSMs to generate usable information for resource decision-making. Direct engagement between model developers and non-academic stakeholders involved in resource and environmental management decisions throughout the model development process is a critical component of this effort. BioEarth utilizes a "bottom-up" approach, upscaling a catchment-scale model to basin and regional scales, as opposed to the "top-down" approach of downscaling global models utilized by most other EaSM efforts. This paper describes the BioEarth initiative and highlights opportunities and challenges associated with coupling multiple stand-alone models to generate usable information for agricultural and natural resource decision-making.« less
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
Experimental design matters for statistical analysis: how to handle blocking.
Jensen, Signe M; Schaarschmidt, Frank; Onofri, Andrea; Ritz, Christian
2018-03-01
Nowadays, evaluation of the effects of pesticides often relies on experimental designs that involve multiple concentrations of the pesticide of interest or multiple pesticides at specific comparable concentrations and, possibly, secondary factors of interest. Unfortunately, the experimental design is often more or less neglected when analysing data. Two data examples were analysed using different modelling strategies. First, in a randomized complete block design, mean heights of maize treated with a herbicide and one of several adjuvants were compared. Second, translocation of an insecticide applied to maize as a seed treatment was evaluated using incomplete data from an unbalanced design with several layers of hierarchical sampling. Extensive simulations were carried out to further substantiate the effects of different modelling strategies. It was shown that results from suboptimal approaches (two-sample t-tests and ordinary ANOVA assuming independent observations) may be both quantitatively and qualitatively different from the results obtained using an appropriate linear mixed model. The simulations demonstrated that the different approaches may lead to differences in coverage percentages of confidence intervals and type 1 error rates, confirming that misleading conclusions can easily happen when an inappropriate statistical approach is chosen. To ensure that experimental data are summarized appropriately, avoiding misleading conclusions, the experimental design should duly be reflected in the choice of statistical approaches and models. We recommend that author guidelines should explicitly point out that authors need to indicate how the statistical analysis reflects the experimental design. © 2017 Society of Chemical Industry. © 2017 Society of Chemical Industry.
Moran, Lyndsey; Lengua, Liliana J.; Zalewski, Maureen; Ruberry, Erika; Klien, Melanie; Thompson, Stephanie; Kiff, Cara
2016-01-01
Using both variable- and person-centered approaches, this study examined the role of temperament in relation to children's vulnerable or resilient responses to cumulative risk. Observed reactivity and regulation dimensions of temperament were tested as mediating and moderating the relation between family cumulative risk and teacher-reported adjustment problems in a sample of 259 preschool-age children. Further, latent profile analyses were used to examine whether profiles of temperament, accounting for multiple characteristics simultaneously, provided additional information about the role of temperament in children's responses to risk. Results support a diathesis-stress model in which high frustration, low fear, and low delay ability confer particular vulnerability for children in high-risk contexts. Benefits of multiple approaches are highlighted. PMID:28408769
Multiframe video coding for improved performance over wireless channels.
Budagavi, M; Gibson, J D
2001-01-01
We propose and evaluate a multi-frame extension to block motion compensation (BMC) coding of videoconferencing-type video signals for wireless channels. The multi-frame BMC (MF-BMC) coder makes use of the redundancy that exists across multiple frames in typical videoconferencing sequences to achieve additional compression over that obtained by using the single frame BMC (SF-BMC) approach, such as in the base-level H.263 codec. The MF-BMC approach also has an inherent ability of overcoming some transmission errors and is thus more robust when compared to the SF-BMC approach. We model the error propagation process in MF-BMC coding as a multiple Markov chain and use Markov chain analysis to infer that the use of multiple frames in motion compensation increases robustness. The Markov chain analysis is also used to devise a simple scheme which randomizes the selection of the frame (amongst the multiple previous frames) used in BMC to achieve additional robustness. The MF-BMC coders proposed are a multi-frame extension of the base level H.263 coder and are found to be more robust than the base level H.263 coder when subjected to simulated errors commonly encountered on wireless channels.
A Survey of Insider Attack Detection Research
2008-08-25
modeling of statistical features , such as the frequency of events, the duration of events, the co-occurrence of multiple events combined through...forms of attack that have been reported [Error! Reference source not found.]. For example: • Unauthorized extraction , duplication, or exfiltration...network level. Schultz pointed out that not one approach will work but solutions need to be based on multiple sensors to be able to find any combination
The multiple complex exponential model and its application to EEG analysis
NASA Astrophysics Data System (ADS)
Chen, Dao-Mu; Petzold, J.
The paper presents a novel approach to the analysis of the EEG signal, which is based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and the results, estimated on the basis of simulated data, are presented and compared with those obtained by the conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are closer to the recent supposition of the nonlinear character of the brain's dynamic behavior.
Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir
2014-01-01
Noise prediction is considered to be the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among acoustic features affecting noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, East of Iran. The main acoustic and embroidery process features that influence the noise were used to develop prediction models using MATLAB software. Multiple regression technique was also employed and its results were compared with those of fuzzy approaches. Prediction errors of all prediction models based on fuzzy approaches were within the acceptable level (lower than one dB). However, Neuro-fuzzy model (RMSE=0.53dB and R2=0.88) could slightly improve the accuracy of noise prediction compared with generate fuzzy model. Moreover, fuzzy approaches provided more accurate predictions than did regression technique. The developed models based on fuzzy approaches as useful prediction tools give professionals the opportunity to have an optimum decision about the effectiveness of acoustic treatment scenarios in embroidery workrooms.
Krafty, Robert T; Rosen, Ori; Stoffer, David S; Buysse, Daniel J; Hall, Martica H
2017-01-01
This article considers the problem of analyzing associations between power spectra of multiple time series and cross-sectional outcomes when data are observed from multiple subjects. The motivating application comes from sleep medicine, where researchers are able to non-invasively record physiological time series signals during sleep. The frequency patterns of these signals, which can be quantified through the power spectrum, contain interpretable information about biological processes. An important problem in sleep research is drawing connections between power spectra of time series signals and clinical characteristics; these connections are key to understanding biological pathways through which sleep affects, and can be treated to improve, health. Such analyses are challenging as they must overcome the complicated structure of a power spectrum from multiple time series as a complex positive-definite matrix-valued function. This article proposes a new approach to such analyses based on a tensor-product spline model of Cholesky components of outcome-dependent power spectra. The approach exibly models power spectra as nonparametric functions of frequency and outcome while preserving geometric constraints. Formulated in a fully Bayesian framework, a Whittle likelihood based Markov chain Monte Carlo (MCMC) algorithm is developed for automated model fitting and for conducting inference on associations between outcomes and spectral measures. The method is used to analyze data from a study of sleep in older adults and uncovers new insights into how stress and arousal are connected to the amount of time one spends in bed.
NASA Astrophysics Data System (ADS)
Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.
2017-12-01
Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely more applied in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions on model outcomes and thus ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at the both the field and watershed scales.
Chiu, Chung-Yi; Lynch, Ruth T; Chan, Fong; Berven, Norman L
2011-08-01
To evaluate the Health Action Process Approach (HAPA) as a motivational model for physical activity self-management for people with multiple sclerosis (MS). Quantitative descriptive research design using path analysis. One hundred ninety-five individuals with MS were recruited from the National Multiple Sclerosis Society and a neurology clinic at a university teaching hospital in the Midwest. Outcome was measured by the Physical Activity Stages of Change Instrument, along with measures for nine predictors (severity, action self-efficacy, outcome expectancy, risk perception, perceived barriers, intention, maintenance self-efficacy, action and coping planning, and recovery self-efficacy). The respecified HAPA physical activity model fit the data relatively well (goodness-of-fit index = .92, normed fit index = .91, and comparative fit index = .93) explaining 38% of the variance in physical activity. Recovery self-efficacy, action and coping planning, and perceived barriers directly contributed to the prediction of physical activity. Outcome expectancy significantly influenced intention and the relationship between intention and physical activity is mediated by action and coping planning. Action self-efficacy, maintenance self-efficacy, and recovery self-efficacy directly or indirectly affected physical activity. Severity of MS and action self-efficacy had an inverse relationship with perceived barriers and perceived barriers influenced physical activity. Empirical support was found for the proposed HAPA model of physical activity for people with MS. The HAPA model appears to provide useful information for clinical rehabilitation and health promotion interventions.
NASA Astrophysics Data System (ADS)
Millar, David J.; Ewers, Brent E.; Mackay, D. Scott; Peckham, Scott; Reed, David E.; Sekoni, Adewale
2017-09-01
Mountain pine beetle outbreaks in western North America have led to extensive forest mortality, justifiably generating interest in improving our understanding of how this type of ecological disturbance affects hydrological cycles. While observational studies and simulations have been used to elucidate the effects of mountain beetle mortality on hydrological fluxes, an ecologically mechanistic model of forest evapotranspiration (ET) evaluated against field data has yet to be developed. In this work, we use the Terrestrial Regional Ecosystem Exchange Simulator (TREES) to incorporate the ecohydrological impacts of mountain pine beetle disturbance on ET for a lodgepole pine-dominated forest equipped with an eddy covariance tower. An existing degree-day model was incorporated that predicted the life cycle of mountain pine beetles, along with an empirically derived submodel that allowed sap flux to decline as a function of temperature-dependent blue stain fungal growth. The eddy covariance footprint was divided into multiple cohorts for multiple growing seasons, including representations of recently attacked trees and the compensatory effects of regenerating understory, using two different spatial scaling methods. Our results showed that using a multiple cohort approach matched eddy covariance-measured ecosystem-scale ET fluxes well, and showed improved performance compared to model simulations assuming a binary framework of only areas of live and dead overstory. Cumulative growing season ecosystem-scale ET fluxes were 8 - 29% greater using the multicohort approach during years in which beetle attacks occurred, highlighting the importance of including compensatory ecological mechanism in ET models.
Automatic Generation of Analogy Questions for Student Assessment: An Ontology-Based Approach
ERIC Educational Resources Information Center
Alsubait, Tahani; Parsia, Bijan; Sattler, Uli
2012-01-01
Different computational models for generating analogies of the form "A is to B as C is to D" have been proposed over the past 35 years. However, analogy generation is a challenging problem that requires further research. In this article, we present a new approach for generating analogies in Multiple Choice Question (MCQ) format that can be used…
Erika L. Rowland; Jennifer E. Davison; Lisa J. Graumlich
2011-01-01
Assessing the impact of climate change on species and associated management objectives is a critical initial step for engaging in the adaptation planning process. Multiple approaches are available. While all possess limitations to their application associated with the uncertainties inherent in the data and models that inform their results, conducting and incorporating...
Predicting drug-target interactions using restricted Boltzmann machines.
Wang, Yuhao; Zeng, Jianyang
2013-07-01
In silico prediction of drug-target interactions plays an important role toward identifying and developing new uses of existing or abandoned drugs. Network-based approaches have recently become a popular tool for discovering new drug-target interactions (DTIs). Unfortunately, most of these network-based approaches can only predict binary interactions between drugs and targets, and information about different types of interactions has not been well exploited for DTI prediction in previous studies. On the other hand, incorporating additional information about drug-target relationships or drug modes of action can improve prediction of DTIs. Furthermore, the predicted types of DTIs can broaden our understanding about the molecular basis of drug action. We propose a first machine learning approach to integrate multiple types of DTIs and predict unknown drug-target relationships or drug modes of action. We cast the new DTI prediction problem into a two-layer graphical model, called restricted Boltzmann machine, and apply a practical learning algorithm to train our model and make predictions. Tests on two public databases show that our restricted Boltzmann machine model can effectively capture the latent features of a DTI network and achieve excellent performance on predicting different types of DTIs, with the area under precision-recall curve up to 89.6. In addition, we demonstrate that integrating multiple types of DTIs can significantly outperform other predictions either by simply mixing multiple types of interactions without distinction or using only a single interaction type. Further tests show that our approach can infer a high fraction of novel DTIs that has been validated by known experiments in the literature or other databases. These results indicate that our approach can have highly practical relevance to DTI prediction and drug repositioning, and hence advance the drug discovery process. Software and datasets are available on request. Supplementary data are available at Bioinformatics online.
Geng, Runzhe; Wang, Xiaoyan; Sharpley, Andrew N; Meng, Fande
2015-01-01
Best management practices (BMPs) for agricultural diffuse pollution control are implemented at the field or small-watershed scale. However, the benefits of BMP implementation on receiving water quality at multiple spatial is an ongoing challenge. In this paper, we introduce an integrated approach that combines risk assessment (i.e., Phosphorus (P) index), model simulation techniques (Hydrological Simulation Program-FORTRAN), and a BMP placement tool at various scales to identify the optimal location for implementing multiple BMPs and estimate BMP effectiveness after implementation. A statistically significant decrease in nutrient discharge from watersheds is proposed to evaluate the effectiveness of BMPs, strategically targeted within watersheds. Specifically, we estimate two types of cost-effectiveness curves (total pollution reduction and proportion of watersheds improved) for four allocation approaches. Selection of a ''best approach" depends on the relative importance of the two types of effectiveness, which involves a value judgment based on the random/aggregated degree of BMP distribution among and within sub-watersheds. A statistical optimization framework is developed and evaluated in Chaohe River Watershed located in the northern mountain area of Beijing. Results show that BMP implementation significantly (p >0.001) decrease P loss from the watershed. Remedial strategies where BMPs were targeted to areas of high risk of P loss, deceased P loads compared with strategies where BMPs were randomly located across watersheds. Sensitivity analysis indicated that aggregated BMP placement in particular watershed is the most cost-effective scenario to decrease P loss. The optimization approach outlined in this paper is a spatially hierarchical method for targeting nonpoint source controls across a range of scales from field to farm, to watersheds, to regions. Further, model estimates showed targeting at multiple scales is necessary to optimize program efficiency. The integrated model approach described that selects and places BMPs at varying levels of implementation, provides a new theoretical basis and technical guidance for diffuse pollution management in agricultural watersheds.
Shifting from the single- to the multitarget paradigm in drug discovery
Medina-Franco, José L.; Giulianotti, Marc A.; Welmaker, Gregory S.; Houghten, Richard A.
2013-01-01
Increasing evidence that several drug compounds exert their effects through interactions with multiple targets is boosting the development of research fields that challenge the data reductionism approach. In this article, we review and discuss the concepts of drug repurposing, polypharmacology, chemogenomics, phenotypic screening and highthroughput in vivo testing of mixture-based libraries in an integrated manner. These research fields offer alternatives to the current paradigm of drug discovery, from a one target–one drug model to a multiple-target approach. Furthermore, the goals of lead identification are being expanded accordingly to identify not only ‘key’ compounds that fit with a single-target ‘lock’, but also ‘master key’ compounds that favorably interact with multiple targets (i.e. operate a set of desired locks to gain access to the expected clinical effects). PMID:23340113
Density estimation in tiger populations: combining information for strong inference
Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.
2012-01-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
Density estimation in tiger populations: combining information for strong inference.
Gopalaswamy, Arjun M; Royle, J Andrew; Delampady, Mohan; Nichols, James D; Karanth, K Ullas; Macdonald, David W
2012-07-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture-recapture data. The model, which combined information, provided the most precise estimate of density (8.5 +/- 1.95 tigers/100 km2 [posterior mean +/- SD]) relative to a model that utilized only one data source (photographic, 12.02 +/- 3.02 tigers/100 km2 and fecal DNA, 6.65 +/- 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
Crown, Scott B; Kelleher, Joanne K; Rouf, Rosanne; Muoio, Deborah M; Antoniewicz, Maciek R
2016-10-01
In many forms of cardiomyopathy, alterations in energy substrate metabolism play a key role in disease pathogenesis. Stable isotope tracing in rodent heart perfusion systems can be used to determine cardiac metabolic fluxes, namely those relative fluxes that contribute to pyruvate, the acetyl-CoA pool, and pyruvate anaplerosis, which are critical to cardiac homeostasis. Methods have previously been developed to interrogate these relative fluxes using isotopomer enrichments of measured metabolites and algebraic equations to determine a predefined metabolic flux model. However, this approach is exquisitely sensitive to measurement error, thus precluding accurate relative flux parameter determination. In this study, we applied a novel mathematical approach to determine relative cardiac metabolic fluxes using 13 C-metabolic flux analysis ( 13 C-MFA) aided by multiple tracer experiments and integrated data analysis. Using 13 C-MFA, we validated a metabolic network model to explain myocardial energy substrate metabolism. Four different 13 C-labeled substrates were queried (i.e., glucose, lactate, pyruvate, and oleate) based on a previously published study. We integrated the analysis of the complete set of isotopomer data gathered from these mouse heart perfusion experiments into a single comprehensive network model that delineates substrate contributions to both pyruvate and acetyl-CoA pools at a greater resolution than that offered by traditional methods using algebraic equations. To our knowledge, this is the first rigorous application of 13 C-MFA to interrogate data from multiple tracer experiments in the perfused heart. We anticipate that this approach can be used widely to study energy substrate metabolism in this and other similar biological systems. Copyright © 2016 the American Physiological Society.
Kelleher, Joanne K.; Rouf, Rosanne; Muoio, Deborah M.; Antoniewicz, Maciek R.
2016-01-01
In many forms of cardiomyopathy, alterations in energy substrate metabolism play a key role in disease pathogenesis. Stable isotope tracing in rodent heart perfusion systems can be used to determine cardiac metabolic fluxes, namely those relative fluxes that contribute to pyruvate, the acetyl-CoA pool, and pyruvate anaplerosis, which are critical to cardiac homeostasis. Methods have previously been developed to interrogate these relative fluxes using isotopomer enrichments of measured metabolites and algebraic equations to determine a predefined metabolic flux model. However, this approach is exquisitely sensitive to measurement error, thus precluding accurate relative flux parameter determination. In this study, we applied a novel mathematical approach to determine relative cardiac metabolic fluxes using 13C-metabolic flux analysis (13C-MFA) aided by multiple tracer experiments and integrated data analysis. Using 13C-MFA, we validated a metabolic network model to explain myocardial energy substrate metabolism. Four different 13C-labeled substrates were queried (i.e., glucose, lactate, pyruvate, and oleate) based on a previously published study. We integrated the analysis of the complete set of isotopomer data gathered from these mouse heart perfusion experiments into a single comprehensive network model that delineates substrate contributions to both pyruvate and acetyl-CoA pools at a greater resolution than that offered by traditional methods using algebraic equations. To our knowledge, this is the first rigorous application of 13C-MFA to interrogate data from multiple tracer experiments in the perfused heart. We anticipate that this approach can be used widely to study energy substrate metabolism in this and other similar biological systems. PMID:27496880
NASA Astrophysics Data System (ADS)
Joyce, Hannah; Reaney, Sim
2015-04-01
Catchment systems provide multiple benefits for society, including: land for agriculture, climate regulation and recreational space. Yet, these systems also have undesirable externalities, such as flooding, and the benefits they create can be compromised through societal use. For example, agriculture, forestry and urban land use practices can increase the export of fine sediment and faecal indicator organisms (FIO) delivered to river systems. These diffuse landscape pressures are coupled with pressures on the in stream temperature environment from projected climate change. Such pressures can have detrimental impacts on water quality and ecological habitat and consequently the benefits they provide for society. These diffuse and in-stream pressures can be reduced through actions at the landscape scale but are commonly tackled individually. Any intervention may have benefits for other pressures and hence the challenge is to consider all of the different pressures simultaneously to find solutions with high levels of cross-pressure benefits. This research presents (1) a simple but spatially distributed model to predict the pattern of multiple pressures at the landscape scale, and (2) a method for spatially targeting the optimum location for riparian woodland planting as mitigation action against these pressures. The model follows a minimal information requirement approach along the lines of SCIMAP (www.scimap.org.uk). This approach defines the critical source areas of fine sediment diffuse pollution, rapid overland flow and FIOs, based on the analysis of the pattern of the pressure in the landscape and the connectivity from source areas to rivers. River temperature was modeled using a simple energy balance equation; focusing on temperature of inflowing and outflowing water across a catchment. The model has been calibrated using a long term observed temperature record. The modelling outcomes enabled the identification of the severity of each pressure in relative rather than absolute sense at the landscape scale. Riparian woodland planting is proposed as one mitigation action to address these pressures. This planting disconnects the transfer of material from the landscape to the river channel by promoting increased infiltration and also provides river shading and hence decreases the rate of water heating. To identify the optimal locations for riparian woodland planting, a Monte Carlo based approach was used to identify multiple mitigation options and their influence on the pressures identified. These results were integrated into a decision support tool, which allows the user to explore the implications of individual and a set of pressures. This is achieved by allowing the user to change the importance of different pressures to identify the optimal locations for a custom combination of pressures. For example, reductions in flood risk can be prioritized over reductions in fine sediment. This approach provides an innovative way of identifying and targeting multiple diffuse pressures at the catchment scale simultaneously, which has presented a challenge in previous management efforts. The approach has been tested in the River Ribble Catchment, North West England.
Numerical implementation of multiple peeling theory and its application to spider web anchorages.
Brely, Lucas; Bosia, Federico; Pugno, Nicola M
2015-02-06
Adhesion of spider web anchorages has been studied in recent years, including the specific functionalities achieved through different architectures. To better understand the delamination mechanisms of these and other biological or artificial fibrillar adhesives, and how their adhesion can be optimized, we develop a novel numerical model to simulate the multiple peeling of structures with arbitrary branching and adhesion angles, including complex architectures. The numerical model is based on a recently developed multiple peeling theory, which extends the energy-based single peeling theory of Kendall, and can be applied to arbitrarily complex structures. In particular, we numerically show that a multiple peeling problem can be treated as the superposition of single peeling configurations even for complex structures. Finally, we apply the developed numerical approach to study spider web anchorages, showing how their function is achieved through optimal geometrical configurations.
Numerical implementation of multiple peeling theory and its application to spider web anchorages
Brely, Lucas; Bosia, Federico; Pugno, Nicola M.
2015-01-01
Adhesion of spider web anchorages has been studied in recent years, including the specific functionalities achieved through different architectures. To better understand the delamination mechanisms of these and other biological or artificial fibrillar adhesives, and how their adhesion can be optimized, we develop a novel numerical model to simulate the multiple peeling of structures with arbitrary branching and adhesion angles, including complex architectures. The numerical model is based on a recently developed multiple peeling theory, which extends the energy-based single peeling theory of Kendall, and can be applied to arbitrarily complex structures. In particular, we numerically show that a multiple peeling problem can be treated as the superposition of single peeling configurations even for complex structures. Finally, we apply the developed numerical approach to study spider web anchorages, showing how their function is achieved through optimal geometrical configurations. PMID:25657835
Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras
Morris, Mark; Sellers, William I.
2015-01-01
Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models which have limited accuracy: geometric models with lengthy measuring procedures or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
Lee, Jimin; Hustad, Katherine C.; Weismer, Gary
2014-01-01
Purpose Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystem approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method Nine acoustic variables reflecting different subsystems, and speech intelligibility, were measured in 22 children with CP. These children included 13 with a clinical diagnosis of dysarthria (SMI), and nine judged to be free of dysarthria (NSMI). Data from children with CP were compared to data from age-matched typically developing children (TD). Results Multiple acoustic variables reflecting the articulatory subsystem were different in the SMI group, compared to the NSMI and TD groups. A significant speech intelligibility prediction model was obtained with all variables entered into the model (Adjusted R-squared = .801). The articulatory subsystem showed the most substantial independent contribution (58%) to speech intelligibility. Incremental R-squared analyses revealed that any single variable explained less than 9% of speech intelligibility variability. Conclusions Children in the SMI group have articulatory subsystem problems as indexed by acoustic measures. As in the adult literature, the articulatory subsystem makes the primary contribution to speech intelligibility variance in dysarthria, with minimal or no contribution from other systems. PMID:24824584
Lee, Jimin; Hustad, Katherine C; Weismer, Gary
2014-10-01
Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Nine acoustic variables reflecting different subsystems, and speech intelligibility, were measured in 22 children with CP. These children included 13 with a clinical diagnosis of dysarthria (speech motor impairment [SMI] group) and 9 judged to be free of dysarthria (no SMI [NSMI] group). Data from children with CP were compared to data from age-matched typically developing children. Multiple acoustic variables reflecting the articulatory subsystem were different in the SMI group, compared to the NSMI and typically developing groups. A significant speech intelligibility prediction model was obtained with all variables entered into the model (adjusted R2 = .801). The articulatory subsystem showed the most substantial independent contribution (58%) to speech intelligibility. Incremental R2 analyses revealed that any single variable explained less than 9% of speech intelligibility variability. Children in the SMI group had articulatory subsystem problems as indexed by acoustic measures. As in the adult literature, the articulatory subsystem makes the primary contribution to speech intelligibility variance in dysarthria, with minimal or no contribution from other systems.
Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.
Peyer, Kathrin E; Morris, Mark; Sellers, William I
2015-01-01
Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models which have limited accuracy: geometric models with lengthy measuring procedures or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.
Blackwood, Julie C; Hastings, Alan; Mumby, Peter J
2011-10-01
The interaction between multiple stressors on Caribbean coral reefs, namely, fishing effort and hurricane impacts, is a key element in the future sustainability of reefs. We develop an analytic model of coral-algal interactions and explicitly consider grazing by herbivorous reef fish. Further, we consider changes in structural complexity, or rugosity, in addition to the direct impacts of hurricanes, which are implemented as stochastic jump processes. The model simulations consider various levels of fishing effort corresponding to' several hurricane frequencies and impact levels dependent on geographic location. We focus on relatively short time scales so we do not explicitly include changes in ocean temperature, chemistry, or sea level rise. The general features of our approach would, however, apply to these other stressors and to the management of other systems in the face of multiple stressors. It is determined that the appropriate management policy, either local reef restoration or fisheries management, greatly depends on hurricane frequency and impact level. For sufficiently low hurricane impact and macroalgal growth rate, our results indicate that regions with lower-frequency hurricanes require stricter fishing regulations, whereas management in regions with higher-frequency hurricanes might be less concerned with enhancing grazing and instead consider whether local-scale restorative activities to increase vertical structure are cost-effective.
Pound, Michael P.; French, Andrew P.; Murchie, Erik H.; Pridmore, Tony P.
2014-01-01
Increased adoption of the systems approach to biological research has focused attention on the use of quantitative models of biological objects. This includes a need for realistic three-dimensional (3D) representations of plant shoots for quantification and modeling. Previous limitations in single-view or multiple-view stereo algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present a fully automatic approach to image-based 3D plant reconstruction that can be achieved using a single low-cost camera. The reconstructed plants are represented as a series of small planar sections that together model the more complex architecture of the leaf surfaces. The boundary of each leaf patch is refined using the level-set method, optimizing the model based on image information, curvature constraints, and the position of neighboring surfaces. The reconstruction process makes few assumptions about the nature of the plant material being reconstructed and, as such, is applicable to a wide variety of plant species and topologies and can be extended to canopy-scale imaging. We demonstrate the effectiveness of our approach on data sets of wheat (Triticum aestivum) and rice (Oryza sativa) plants as well as a unique virtual data set that allows us to compute quantitative measures of reconstruction accuracy. The output is a 3D mesh structure that is suitable for modeling applications in a format that can be imported in the majority of 3D graphics and software packages. PMID:25332504
Holistic versus monomeric strategies for hydrological modelling of human-modified hydrosystems
NASA Astrophysics Data System (ADS)
Nalbantis, I.; Efstratiadis, A.; Rozos, E.; Kopsiafti, M.; Koutsoyiannis, D.
2011-03-01
The modelling of human-modified basins that are inadequately measured constitutes a challenge for hydrological science. Often, models for such systems are detailed and hydraulics-based for only one part of the system while for other parts oversimplified models or rough assumptions are used. This is typically a bottom-up approach, which seeks to exploit knowledge of hydrological processes at the micro-scale at some components of the system. Also, it is a monomeric approach in two ways: first, essential interactions among system components may be poorly represented or even omitted; second, differences in the level of detail of process representation can lead to uncontrolled errors. Additionally, the calibration procedure merely accounts for the reproduction of the observed responses using typical fitting criteria. The paper aims to raise some critical issues, regarding the entire modelling approach for such hydrosystems. For this, two alternative modelling strategies are examined that reflect two modelling approaches or philosophies: a dominant bottom-up approach, which is also monomeric and, very often, based on output information, and a top-down and holistic approach based on generalized information. Critical options are examined, which codify the differences between the two strategies: the representation of surface, groundwater and water management processes, the schematization and parameterization concepts and the parameter estimation methodology. The first strategy is based on stand-alone models for surface and groundwater processes and for water management, which are employed sequentially. For each model, a different (detailed or coarse) parameterization is used, which is dictated by the hydrosystem schematization. The second strategy involves model integration for all processes, parsimonious parameterization and hybrid manual-automatic parameter optimization based on multiple objectives. A test case is examined in a hydrosystem in Greece with high complexities, such as extended surface-groundwater interactions, ill-defined boundaries, sinks to the sea and anthropogenic intervention with unmeasured abstractions both from surface water and aquifers. Criteria for comparison are the physical consistency of parameters, the reproduction of runoff hydrographs at multiple sites within the studied basin, the likelihood of uncontrolled model outputs, the required amount of computational effort and the performance within a stochastic simulation setting. Our work allows for investigating the deterioration of model performance in cases where no balanced attention is paid to all components of human-modified hydrosystems and the related information. Also, sources of errors are identified and their combined effect are evaluated.
Managing data from multiple disciplines, scales, and sites to support synthesis and modeling
Olson, R. J.; Briggs, J. M.; Porter, J.H.; Mah, Grant R.; Stafford, S.G.
1999-01-01
The synthesis and modeling of ecological processes at multiple spatial and temporal scales involves bringing together and sharing data from numerous sources. This article describes a data and information system model that facilitates assembling, managing, and sharing diverse data from multiple disciplines, scales, and sites to support integrated ecological studies. Cross-site scientific-domain working groups coordinate the development of data associated with their particular scientific working group, including decisions about data requirements, data to be compiled, data formats, derived data products, and schedules across the sites. The Web-based data and information system consists of nodes for each working group plus a central node that provides data access, project information, data query, and other functionality. The approach incorporates scientists and computer experts in the working groups and provides incentives for individuals to submit documented data to the data and information system.
Zou, W; Ouyang, H
2016-02-01
We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximal likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that model threshold selection bias. We observed that MEA effectively reduced MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
Krieger, Stephen C; Sumowski, James
2018-02-01
Clinical course in multiple sclerosis (MS) is difficult to predict on group and individual levels. We discuss the topographical model of MS as a new approach to characterizing the clinical course, with the potential to personalize disability progression based on each individual patient's pattern of disease burden (eg, lesion location) and reserve. The dynamic clinical threshold depicted in this visual model may help clinicians to educate patients about clinical phenotype and disease burden, and foster an understanding of the difference between relapses and pseudoexacerbations. There is an emphasis on building reserve against cognitive and physical decline, encouraging agency among patients. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Mainali, Kumar P; Warren, Dan L; Dhileepan, Kunjithapatham; McConnachie, Andrew; Strathie, Lorraine; Hassan, Gul; Karki, Debendra; Shrestha, Bharat B; Parmesan, Camille
2015-12-01
Modeling the distributions of species, especially of invasive species in non-native ranges, involves multiple challenges. Here, we developed some novel approaches to species distribution modeling aimed at reducing the influences of such challenges and improving the realism of projections. We estimated species-environment relationships for Parthenium hysterophorus L. (Asteraceae) with four modeling methods run with multiple scenarios of (i) sources of occurrences and geographically isolated background ranges for absences, (ii) approaches to drawing background (absence) points, and (iii) alternate sets of predictor variables. We further tested various quantitative metrics of model evaluation against biological insight. Model projections were very sensitive to the choice of training dataset. Model accuracy was much improved using a global dataset for model training, rather than restricting data input to the species' native range. AUC score was a poor metric for model evaluation and, if used alone, was not a useful criterion for assessing model performance. Projections away from the sampled space (i.e., into areas of potential future invasion) were very different depending on the modeling methods used, raising questions about the reliability of ensemble projections. Generalized linear models gave very unrealistic projections far away from the training region. Models that efficiently fit the dominant pattern, but exclude highly local patterns in the dataset and capture interactions as they appear in data (e.g., boosted regression trees), improved generalization of the models. Biological knowledge of the species and its distribution was important in refining choices about the best set of projections. A post hoc test conducted on a new Parthenium dataset from Nepal validated excellent predictive performance of our 'best' model. We showed that vast stretches of currently uninvaded geographic areas on multiple continents harbor highly suitable habitats for parthenium. However, discrepancies between model predictions and parthenium invasion in Australia indicate successful management for this globally significant weed. © 2015 John Wiley & Sons Ltd.
Ko, Junsu; Park, Hahnbeom; Seok, Chaok
2012-08-10
Protein structures can be reliably predicted by template-based modeling (TBM) when experimental structures of homologous proteins are available. However, it is challenging to obtain structures more accurate than the single best templates by either combining information from multiple templates or by modeling regions that vary among templates or are not covered by any templates. We introduce GalaxyTBM, a new TBM method in which the more reliable core region is modeled first from multiple templates and less reliable, variable local regions, such as loops or termini, are then detected and re-modeled by an ab initio method. This TBM method is based on "Seok-server," which was tested in CASP9 and assessed to be amongst the top TBM servers. The accuracy of the initial core modeling is enhanced by focusing on more conserved regions in the multiple-template selection and multiple sequence alignment stages. Additional improvement is achieved by ab initio modeling of up to 3 unreliable local regions in the fixed framework of the core structure. Overall, GalaxyTBM reproduced the performance of Seok-server, with GalaxyTBM and Seok-server resulting in average GDT-TS of 68.1 and 68.4, respectively, when tested on 68 single-domain CASP9 TBM targets. For application to multi-domain proteins, GalaxyTBM must be combined with domain-splitting methods. Application of GalaxyTBM to CASP9 targets demonstrates that accurate protein structure prediction is possible by use of a multiple-template-based approach, and ab initio modeling of variable regions can further enhance the model quality.
Kernel learning at the first level of inference.
Cawley, Gavin C; Talbot, Nicola L C
2014-05-01
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
Li, YuHui; Jin, FeiTeng
2017-01-01
The inversion design approach is a very useful tool for the complex multiple-input-multiple-output nonlinear systems to implement the decoupling control goal, such as the airplane model and spacecraft model. In this work, the flight control law is proposed using the neural-based inversion design method associated with the nonlinear compensation for a general longitudinal model of the airplane. First, the nonlinear mathematic model is converted to the equivalent linear model based on the feedback linearization theory. Then, the flight control law integrated with this inversion model is developed to stabilize the nonlinear system and relieve the coupling effect. Afterwards, the inversion control combined with the neural network and nonlinear portion is presented to improve the transient performance and attenuate the uncertain effects on both external disturbances and model errors. Finally, the simulation results demonstrate the effectiveness of this controller. PMID:29410680
A comparative study of turbulence models for overset grids
NASA Technical Reports Server (NTRS)
Renze, Kevin J.; Buning, Pieter G.; Rajagopalan, R. G.
1992-01-01
The implementation of two different types of turbulence models for a flow solver using the Chimera overset grid method is examined. Various turbulence model characteristics, such as length scale determination and transition modeling, are found to have a significant impact on the computed pressure distribution for a multielement airfoil case. No inherent problem is found with using either algebraic or one-equation turbulence models with an overset grid scheme, but simulation of turbulence for multiple-body or complex geometry flows is very difficult regardless of the gridding method. For complex geometry flowfields, modification of the Baldwin-Lomax turbulence model is necessary to select the appropriate length scale in wall-bounded regions. The overset grid approach presents no obstacle to use of a one- or two-equation turbulence model. Both Baldwin-Lomax and Baldwin-Barth models have problems providing accurate eddy viscosity levels for complex multiple-body flowfields such as those involving the Space Shuttle.
A Novel Joint Problem of Routing, Scheduling, and Variable-Width Channel Allocation in WMNs
Liu, Wan-Yu; Chou, Chun-Hung
2014-01-01
This paper investigates a novel joint problem of routing, scheduling, and channel allocation for single-radio multichannel wireless mesh networks in which multiple channel widths can be adjusted dynamically through a new software technology so that more concurrent transmissions and suppressed overlapping channel interference can be achieved. Although the previous works have studied this joint problem, their linear programming models for the problem were not incorporated with some delicate constraints. As a result, this paper first constructs a linear programming model with more practical concerns and then proposes a simulated annealing approach with a novel encoding mechanism, in which the configurations of multiple time slots are devised to characterize the dynamic transmission process. Experimental results show that our approach can find the same or similar solutions as the optimal solutions for smaller-scale problems and can efficiently find good-quality solutions for a variety of larger-scale problems. PMID:24982990
A Practical Approach to Address Uncertainty in Stakeholder Deliberations.
Gregory, Robin; Keeney, Ralph L
2017-03-01
This article addresses the difficulties of incorporating uncertainty about consequence estimates as part of stakeholder deliberations involving multiple alternatives. Although every prediction of future consequences necessarily involves uncertainty, a large gap exists between common practices for addressing uncertainty in stakeholder deliberations and the procedures of prescriptive decision-aiding models advanced by risk and decision analysts. We review the treatment of uncertainty at four main phases of the deliberative process: with experts asked to describe possible consequences of competing alternatives, with stakeholders who function both as individuals and as members of coalitions, with the stakeholder committee composed of all stakeholders, and with decisionmakers. We develop and recommend a model that uses certainty equivalents as a theoretically robust and practical approach for helping diverse stakeholders to incorporate uncertainties when evaluating multiple-objective alternatives as part of public policy decisions. © 2017 Society for Risk Analysis.
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain, phase perturbations into account and is based on the actual positions of the elements. It can be used in arbitrary planar geometry arrays. Second, a subspace model errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model errors estimation algorithm estimates unknown parameters of the array model, i. e., gain, phase perturbations, and positions of the elements, with high accuracy. The performance of this algorithm is improved with the increasing of SNR or number of snapshots. The W2D-MUSIC algorithm based on the improved array model is implemented to locate sound sources. These two algorithms compose the robust sound source approach. The more accurate steering vectors can be provided for further processing such as adaptive beamforming algorithm. Numerical examples confirm effectiveness of this proposed approach.
A New Variational Approach for Multiplicative Noise and Blur Removal
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad; Sun, HongGuang
2017-01-01
This paper proposes a new variational model for joint multiplicative denoising and deblurring. It combines a total generalized variation filter (which has been proved to be able to reduce the blocky-effects by being aware of high-order smoothness) and shearlet transform (that effectively preserves anisotropic image features such as sharp edges, curves and so on). The new model takes the advantage of both regularizers since it is able to minimize the staircase effects while preserving sharp edges, textures and other fine image details. The existence and uniqueness of a solution to the proposed variational model is also discussed. The resulting energy functional is then solved by using alternating direction method of multipliers. Numerical experiments showing that the proposed model achieves satisfactory restoration results, both visually and quantitatively in handling the blur (motion, Gaussian, disk, and Moffat) and multiplicative noise (Gaussian, Gamma, or Rayleigh) reduction. A comparison with other recent methods in this field is provided as well. The proposed model can also be applied for restoring both single and multi-channel images contaminated with multiplicative noise, and permit cross-channel blurs when the underlying image has more than one channel. Numerical tests on color images are conducted to demonstrate the effectiveness of the proposed model. PMID:28141802
Michael C. Dietze; Rodrigo Vargas; Andrew D. Richardson; Paul C. Stoy; Alan G. Barr; Ryan S. Anderson; M. Altaf Arain; Ian T. Baker; T. Andrew Black; Jing M. Chen; Philippe Ciais; Lawrence B. Flanagan; Christopher M. Gough; Robert F. Grant; David Hollinger; R. Cesar Izaurralde; Christopher J. Kucharik; Peter Lafleur; Shugang Liu; Erandathie Lokupitiya; Yiqi Luo; J. William Munger; Changhui Peng; Benjamin Poulter; David T. Price; Daniel M. Ricciuto; William J. Riley; Alok Kumar Sahoo; Kevin Schaefer; Andrew E. Suyker; Hanqin Tian; Christina Tonitto; Hans Verbeeck; Shashi B. Verma; Weifeng Wang; Ensheng Weng
2011-01-01
Ecosystem models are important tools for diagnosing the carbon cycle and projecting its behavior across space and time. Despite the fact that ecosystems respond to drivers at multiple time scales, most assessments of model performance do not discriminate different time scales. Spectral methods, such as wavelet analyses, present an alternative approach that enables the...
Multiple Detector Optimization for Hidden Radiation Source Detection
2015-03-26
important in achieving operationally useful methods for optimizing detector emplacement, the 2-D attenuation model approach promises to speed up the...process of hidden source detection significantly. The model focused on detection of the full energy peak of a radiation source. Methods to optimize... radioisotope identification is possible without using a computationally intensive stochastic model such as the Monte Carlo n-Particle (MCNP) code
ADDING REALISM TO NUCLEAR MATERIAL DISSOLVING ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, B.
2011-08-15
Two new criticality modeling approaches have greatly increased the efficiency of dissolver operations in H-Canyon. The first new approach takes credit for the linear, physical distribution of the mass throughout the entire length of the fuel assembly. This distribution of mass is referred to as the linear density. Crediting the linear density of the fuel bundles results in using lower fissile concentrations, which allows higher masses to be charged to the dissolver. Also, this approach takes credit for the fact that only part of the fissile mass is wetted at a time. There are multiple assemblies stacked on top ofmore » each other in a bundle. On average, only 50-75% of the mass (the bottom two or three assemblies) is wetted at a time. This means that only 50-75% (depending on operating level) of the mass is moderated and is contributing to the reactivity of the system. The second new approach takes credit for the progression of the dissolving process. Previously, dissolving analysis looked at a snapshot in time where the same fissile material existed both in the wells and in the bulk solution at the same time. The second new approach models multiple consecutive phases that simulate the fissile material moving from a high concentration in the wells to a low concentration in the bulk solution. This approach is more realistic and allows higher fissile masses to be charged to the dissolver.« less
NASA Astrophysics Data System (ADS)
Prahutama, Alan; Suparti; Wahyu Utami, Tiani
2018-03-01
Regression analysis is an analysis to model the relationship between response variables and predictor variables. The parametric approach to the regression model is very strict with the assumption, but nonparametric regression model isn’t need assumption of model. Time series data is the data of a variable that is observed based on a certain time, so if the time series data wanted to be modeled by regression, then we should determined the response and predictor variables first. Determination of the response variable in time series is variable in t-th (yt), while the predictor variable is a significant lag. In nonparametric regression modeling, one developing approach is to use the Fourier series approach. One of the advantages of nonparametric regression approach using Fourier series is able to overcome data having trigonometric distribution. In modeling using Fourier series needs parameter of K. To determine the number of K can be used Generalized Cross Validation method. In inflation modeling for the transportation sector, communication and financial services using Fourier series yields an optimal K of 120 parameters with R-square 99%. Whereas if it was modeled by multiple linear regression yield R-square 90%.
A longitudinal multilevel CFA-MTMM model for interchangeable and structurally different methods
Koch, Tobias; Schultze, Martin; Eid, Michael; Geiser, Christian
2014-01-01
One of the key interests in the social sciences is the investigation of change and stability of a given attribute. Although numerous models have been proposed in the past for analyzing longitudinal data including multilevel and/or latent variable modeling approaches, only few modeling approaches have been developed for studying the construct validity in longitudinal multitrait-multimethod (MTMM) measurement designs. The aim of the present study was to extend the spectrum of current longitudinal modeling approaches for MTMM analysis. Specifically, a new longitudinal multilevel CFA-MTMM model for measurement designs with structurally different and interchangeable methods (called Latent-State-Combination-Of-Methods model, LS-COM) is presented. Interchangeable methods are methods that are randomly sampled from a set of equivalent methods (e.g., multiple student ratings for teaching quality), whereas structurally different methods are methods that cannot be easily replaced by one another (e.g., teacher, self-ratings, principle ratings). Results of a simulation study indicate that the parameters and standard errors in the LS-COM model are well recovered even in conditions with only five observations per estimated model parameter. The advantages and limitations of the LS-COM model relative to other longitudinal MTMM modeling approaches are discussed. PMID:24860515
Ryser, Marc D.; Lee, Walter T.; Readyz, Neal E.; Leder, Kevin Z.; Foo, Jasmine
2017-01-01
High rates of local recurrence in tobacco-related head and neck squamous cell carcinoma (HNSCC) are commonly attributed to unresected fields of precancerous tissue. Since they are not easily detectable at the time of surgery without additional biopsies, there is a need for non-invasive methods to predict the extent and dynamics of these fields. Here we developed a spatial stochastic model of tobacco-related HNSCC at the tissue level and calibrated the model using a Bayesian framework and population-level incidence data from the Surveillance, Epidemiology, and End Results (SEER) registry. Probabilistic model analyses were performed to predict the field geometry at time of diagnosis, and model predictions of age-specific recurrence risks were tested against outcome data from SEER. The calibrated models predicted a strong dependence of the local field size on age at diagnosis, with a doubling of the expected field diameter between ages at diagnosis of 50 and 90 years, respectively. Similarly, the probability of harboring multiple, clonally unrelated fields at the time of diagnosis were found to increase substantially with patient age. Based on these findings, we hypothesized a higher recurrence risk in older compared to younger patients when treated by surgery alone; we successfully tested this hypothesis using age-stratified outcome data. Further clinical studies are needed to validate the model predictions in a patient-specific setting. This work highlights the importance of spatial structure in models of epithelial carcinogenesis, and suggests that patient age at diagnosis may be a critical predictor of the size and multiplicity of precancerous lesions. Major Findings Patient age at diagnosis was found to be a critical predictor of the size and multiplicity of precancerous lesions. This finding challenges the current one-size-fits-all approach to surgical excision margins. PMID:27913438
Deciphering Rashomon: an approach to verbal autopsies of maternal deaths.
Iyer, Aditi; Sen, Gita; Sreevathsa, Anuradha
2013-01-01
The paper discusses an approach to verbal autopsies that engages with the Rashomon phenomenon affecting ex post facto constructions of death and responds to the call for maternal safety. This method differs from other verbal autopsies in its approach to data collection and its framework of analysis. In our approach, data collection entails working with and triangulating multiple narratives, and minimising power inequalities in the investigation process. The framework of analysis focuses on the missed opportunities for death prevention as an alternative to (or deepening of) the Three Delays Model. This framework assesses the behavioural responses of health providers, as well as community and family members at each opportunity for death prevention and categorises them into four groups: non-actions, inadequate actions, inappropriate actions and unavoidably delayed actions. We demonstrate the application of this approach to show how verbal autopsies can delve beneath multiple narratives and rigorously identify health system, behavioural and cultural factors that contribute to avoidable maternal mortality.
Mixed raster content (MRC) model for compound image compression
NASA Astrophysics Data System (ADS)
de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming
1998-12-01
This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary test and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
Reduced rank models for travel time estimation of low order mode pulses.
Chandrayadula, Tarun K; Wage, Kathleen E; Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Howe, Bruce M
2013-10-01
Mode travel time estimation in the presence of internal waves (IWs) is a challenging problem. IWs perturb the sound speed, which results in travel time wander and mode scattering. A standard approach to travel time estimation is to pulse compress the broadband signal, pick the peak of the compressed time series, and average the peak time over multiple receptions to reduce variance. The peak-picking approach implicitly assumes there is a single strong arrival and does not perform well when there are multiple arrivals due to scattering. This article presents a statistical model for the scattered mode arrivals and uses the model to design improved travel time estimators. The model is based on an Empirical Orthogonal Function (EOF) analysis of the mode time series. Range-dependent simulations and data from the Long-range Ocean Acoustic Propagation Experiment (LOAPEX) indicate that the modes are represented by a small number of EOFs. The reduced-rank EOF model is used to construct a travel time estimator based on the Matched Subspace Detector (MSD). Analysis of simulation and experimental data show that the MSDs are more robust to IW scattering than peak picking. The simulation analysis also highlights how IWs affect the mode excitation by the source.
LATENT SPACE MODELS FOR MULTIVIEW NETWORK DATA
Salter-Townshend, Michael; McCormick, Tyler H.
2018-01-01
Social relationships consist of interactions along multiple dimensions. In social networks, this means that individuals form multiple types of relationships with the same person (e.g., an individual will not trust all of his/her acquaintances). Statistical models for these data require understanding two related types of dependence structure: (i) structure within each relationship type, or network view, and (ii) the association between views. In this paper, we propose a statistical framework that parsimoniously represents dependence between relationship types while also maintaining enough flexibility to allow individuals to serve different roles in different relationship types. Our approach builds on work on latent space models for networks [see, e.g., J. Amer. Statist. Assoc. 97 (2002) 1090–1098]. These models represent the propensity for two individuals to form edges as conditionally independent given the distance between the individuals in an unobserved social space. Our work departs from previous work in this area by representing dependence structure between network views through a multivariate Bernoulli likelihood, providing a representation of between-view association. This approach infers correlations between views not explained by the latent space model. Using our method, we explore 6 multiview network structures across 75 villages in rural southern Karnataka, India [Banerjee et al. (2013)]. PMID:29721127
LATENT SPACE MODELS FOR MULTIVIEW NETWORK DATA.
Salter-Townshend, Michael; McCormick, Tyler H
2017-09-01
Social relationships consist of interactions along multiple dimensions. In social networks, this means that individuals form multiple types of relationships with the same person (e.g., an individual will not trust all of his/her acquaintances). Statistical models for these data require understanding two related types of dependence structure: (i) structure within each relationship type, or network view, and (ii) the association between views. In this paper, we propose a statistical framework that parsimoniously represents dependence between relationship types while also maintaining enough flexibility to allow individuals to serve different roles in different relationship types. Our approach builds on work on latent space models for networks [see, e.g., J. Amer. Statist. Assoc. 97 (2002) 1090-1098]. These models represent the propensity for two individuals to form edges as conditionally independent given the distance between the individuals in an unobserved social space. Our work departs from previous work in this area by representing dependence structure between network views through a multivariate Bernoulli likelihood, providing a representation of between-view association. This approach infers correlations between views not explained by the latent space model. Using our method, we explore 6 multiview network structures across 75 villages in rural southern Karnataka, India [Banerjee et al. (2013)].
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention in reducing model errors is by combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights could be derived for each model so that the developed multimodel predictions will result in improved predictions. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, "abcd" model and variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare multimodel scheme MM-1 with optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (abcd model orVICmodel) with heteroscedastic error variance as well as from a hydrologic model that exhibits different structure than that of the candidate models (i.e., "abcd" model or VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally for all the models, whereas MM-O assigns higher weights for always the best-performing candidate model under the calibration period. Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
Stochastic Approaches Within a High Resolution Rapid Refresh Ensemble
NASA Astrophysics Data System (ADS)
Jankov, I.
2017-12-01
It is well known that global and regional numerical weather prediction (NWP) ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system is the use of stochastic physics. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), and Stochastic Perturbation of Physics Tendencies (SPPT). The focus of this study is to assess model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) using a variety of stochastic approaches. A single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model was utilized and ensemble members produced by employing stochastic methods. Parameter perturbations (using SPP) for select fields were employed in the Rapid Update Cycle (RUC) land surface model (LSM) and Mellor-Yamada-Nakanishi-Niino (MYNN) Planetary Boundary Layer (PBL) schemes. Within MYNN, SPP was applied to sub-grid cloud fraction, mixing length, roughness length, mass fluxes and Prandtl number. In the RUC LSM, SPP was applied to hydraulic conductivity and tested perturbing soil moisture at initial time. First iterative testing was conducted to assess the initial performance of several configuration settings (e.g. variety of spatial and temporal de-correlation lengths). Upon selection of the most promising candidate configurations using SPP, a 10-day time period was run and more robust statistics were gathered. SKEB and SPPT were included in additional retrospective tests to assess the impact of using all three stochastic approaches to address model uncertainty. Results from the stochastic perturbation testing were compared to a baseline multi-physics control ensemble. For probabilistic forecast performance the Model Evaluation Tools (MET) verification package was used.