Science.gov

Sample records for adjusted multivariate models

  1. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico-mathematical models

    NASA Astrophysics Data System (ADS)

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.

    2016-10-01

    The structure of multi-variant physical and mathematical models of a control system is proposed, along with its application to the adjustment of the automatic control system (ACS) of production facilities, using a coal processing plant as an example.

  2. Stress and Personal Resource as Predictors of the Adjustment of Parents to Autistic Children: A Multivariate Model

    ERIC Educational Resources Information Center

    Siman-Tov, Ayelet; Kaniel, Shlomo

    2011-01-01

    The research validates a multivariate model that predicts parental adjustment to coping successfully with an autistic child. The model comprises four elements: parental stress, parental resources, parental adjustment and the child's autism symptoms. 176 parents of children aged between 6 and 16 diagnosed with PDD answered several questionnaires…

  3. Multivariate Models of Parent-Late Adolescent Gender Dyads: The Importance of Parenting Processes in Predicting Adjustment

    ERIC Educational Resources Information Center

    McKinney, Cliff; Renk, Kimberly

    2008-01-01

    Although parent-adolescent interactions have been examined, relevant variables have not been integrated into a multivariate model. As a result, this study examined a multivariate model of parent-late adolescent gender dyads in an attempt to capture important predictors in late adolescents' important and unique transition to adulthood. The sample…

  4. Multivariate Model of Infant Competence.

    ERIC Educational Resources Information Center

    Kierscht, Marcia Selland; Vietze, Peter M.

    This paper describes a multivariate model of early infant competence formulated from variables representing infant-environment transaction including: birthweight, habituation index, personality ratings of infant social orientation and task orientation, ratings of maternal responsiveness to infant distress and social signals, and observational…

  5. Moment Adjusted Imputation for Multivariate Measurement Error Data with Applications to Logistic Regression

    PubMed Central

    Thomas, Laine; Stefanski, Leonard A.; Davidian, Marie

    2013-01-01

    In clinical studies, covariates are often measured with error due to biological fluctuations, device error and other sources. Summary statistics and regression models that are based on mismeasured data will differ from the corresponding analysis based on the “true” covariate. Statistical analysis can be adjusted for measurement error; however, various methods exhibit a tradeoff between convenience and performance. Moment Adjusted Imputation (MAI) is a method for measurement error in a scalar latent variable that is easy to implement and performs well in a variety of settings. In practice, multiple covariates may be similarly influenced by biological fluctuations, inducing correlated multivariate measurement error. The extension of MAI to the setting of multivariate latent variables involves unique challenges. Alternative strategies are described, including a computationally feasible option that is shown to perform well. PMID:24072947
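
The attenuation that motivates this kind of adjustment can be seen in a small simulation. Everything below is synthetic and illustrative, not the MAI procedure itself: with classical measurement error whose variance equals the true covariate's variance, the naive least-squares slope shrinks toward half its true value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(0.0, 1.0, n)        # "true" covariate
u = rng.normal(0.0, 1.0, n)        # classical measurement error, variance 1
w = x + u                          # mismeasured covariate
y = 2.0 * x + rng.normal(0.0, 0.5, n)

def slope(z, y):
    """Least-squares slope of y on z (both centered)."""
    zc = z - z.mean()
    return float(zc @ (y - y.mean()) / (zc @ zc))

b_true = slope(x, y)   # close to the true slope 2.0
b_naive = slope(w, y)  # attenuated by var(x)/(var(x)+var(u)) = 0.5, so near 1.0
```

This is the "differ from the corresponding analysis based on the true covariate" effect the abstract describes; MAI and related methods aim to undo it.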

  6. Multivariate pluvial flood damage models

    SciTech Connect

    Van Ootegem, Luc; Verhofstadt, Elsy; Van Herck, Kristine; Creten, Tom

    2015-09-15

    Depth–damage functions, relating the monetary flood damage to the depth of the inundation, are commonly used in the case of fluvial floods (floods caused by a river overflowing). We construct four multivariate damage models for pluvial floods (caused by extreme rainfall) by differentiating on the one hand between ground floor floods and basement floods and on the other hand between damage to residential buildings and damage to housing contents. We take into account not only the effect of flood depth on damage but also the effects of non-hazard indicators (building characteristics, behavioural indicators and socio-economic variables). By using a Tobit estimation technique on identified victims of pluvial floods in Flanders (Belgium), we take into account the effect of cases of reported zero damage. Our results show that flood depth is an important predictor of damage, but with a diverging impact between ground floor floods and basement floods. Non-hazard indicators also matter: for example, being aware of the risk just before the water enters the building reduces content damage considerably, underlining the importance of warning systems and policy in the case of pluvial floods.
    Highlights:
    • Prediction of damage of pluvial floods using also non-hazard information
    • We include ‘no damage cases’ using a Tobit model.
    • The effect of flood depth is stronger for ground floor than for basement floods.
    • Non-hazard indicators are especially important for content damage.
    • Potential gain of policies that increase awareness of flood risks.
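
A minimal sketch of the Tobit idea on synthetic data (not the authors' Flanders survey data): reported zero damage is treated as left-censoring of a latent damage variable, and the censored likelihood is maximized directly.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 5_000
x = rng.normal(size=n)
y_latent = 1.0 + 2.0 * x + rng.normal(size=n)  # latent damage
y = np.maximum(y_latent, 0.0)                  # zeros = reported no damage

def negloglik(theta):
    b0, b1, log_s = theta
    s = np.exp(log_s)                          # log-scale keeps sigma positive
    mu = b0 + b1 * x
    censored = y <= 0.0
    ll = np.where(censored,
                  norm.logcdf(-mu / s),        # P(latent damage <= 0)
                  norm.logpdf(y, loc=mu, scale=s))
    return -ll.sum()

fit = minimize(negloglik, x0=np.zeros(3), method="BFGS")
b0_hat, b1_hat, sigma_hat = fit.x[0], fit.x[1], float(np.exp(fit.x[2]))
```

A plain least-squares fit on the same data would be biased by the mass of zeros; the censored likelihood recovers the latent-scale coefficients.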

  7. A Multivariate Analysis of Emotional and Behavioral Adjustment and Preschool Educational Outcomes

    ERIC Educational Resources Information Center

    Fantuzzo, John; Bulotsky, Rebecca; McDermott, Paul; Mosca, Samuel; Lutz, Megan Noone

    2003-01-01

    The study examined the multivariate relationship between dimensions of preschool emotional and behavioral adjustment assessed at the beginning of the year by the Adjustment Scales for Preschool Intervention (ASPI) and multiple learning and social competencies at the end of the year with an urban Head Start sample. This study also examined the…

  8. Genetic and Environmental Components of Adolescent Adjustment and Parental Behavior: A Multivariate Analysis

    ERIC Educational Resources Information Center

    Loehlin, John C.; Neiderhiser, Jenae M.; Reiss, David

    2005-01-01

    Adolescent adjustment measures may be related to each other and to the social environment in various ways. Are these relationships similar in genetic and environmental sources of covariation, or different? A multivariate behavior-genetic analysis was made of 6 adjustment and 3 treatment composites from the study Nonshared Environment in Adolescent…

  9. The multivariate statistical structure of DRASTIC model

    NASA Astrophysics Data System (ADS)

    Pacheco, Fernando A. L.; Sanches Fernandes, Luís F.

    2013-01-01

    An assessment of aquifer intrinsic vulnerability was conducted in the Sordo river basin, a small watershed located in the northeast of Portugal that drains to a lake used as a public source of drinking water. The method adopted to calculate intrinsic vulnerability was the DRASTIC model, which hinges on a weighted addition of seven hydrogeologic features, combined here with a pioneering approach for feature reduction and adjustment of feature weights to local settings, based on a multivariate statistical method. Basically, with the adopted statistical technique, Correspondence Analysis, we identified and minimized redundancy between DRASTIC features, allowing for the calculation of a composite index based on just three of them: topography, recharge and aquifer material. The combined algorithm was coined vector-DRASTIC and proved to describe intrinsic vulnerability more realistically than DRASTIC. The proof resulted from a validation of DRASTIC and vector-DRASTIC against the results of a groundwater pollution risk assessment based on the spatial distribution of land uses and nitrate concentrations in groundwater, referred to as the [NO3-]-DRASTIC method. Vector-DRASTIC and [NO3-]-DRASTIC portray the Sordo river basin as an environment with a self-capability to neutralize contaminants, preventing their propagation downstream. This observation was confirmed by long-standing low nitrate concentrations in the lake water and constitutes additional validation of the vector-DRASTIC results. Nevertheless, some general recommendations are proposed in regard to agriculture management practices for water quality protection, as part of an overall watershed approach.
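
The weighted addition at the core of DRASTIC is simple to state in code. The weights below are the standard ones from the original DRASTIC method (Aller et al., 1987); the per-cell ratings are made up for illustration and are not the Sordo basin values or the locally adjusted weights the paper derives.

```python
# Standard DRASTIC feature weights (Aller et al., 1987): Depth to water,
# net Recharge, Aquifer media, Soil media, Topography, Impact of the
# vadose zone, hydraulic Conductivity.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """Weighted additive vulnerability index: sum over features of weight * rating."""
    return sum(WEIGHTS[f] * r for f, r in ratings.items())

# Hypothetical ratings (1-10) for one map cell -- illustrative only.
cell = {"D": 7, "R": 6, "A": 8, "S": 6, "T": 10, "I": 8, "C": 4}
index = drastic_index(cell)  # 35 + 24 + 24 + 12 + 10 + 40 + 12 = 157
```

The paper's vector-DRASTIC variant replaces this seven-term sum with a three-feature composite whose weights come from Correspondence Analysis.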

  10. Preliminary Multivariable Cost Model for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. Previously, the authors published two single-variable cost models based on 19 flight missions. The current paper presents the development of a multi-variable space telescope cost model. The validity of previously published models is tested, cost estimating relationships that are and are not significant cost drivers are identified, and interrelationships between variables are explored.

  11. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2016-02-01

    Building accurate predictive models of clinical multivariate time series is crucial for understanding the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive enough to reflect patient-specific temporal behaviors well, even when the available patient-specific data are sparse and cover only a short span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population-based and patient-specific time series prediction models in terms of prediction accuracy.

  12. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2016-01-01

    Building accurate predictive models of clinical multivariate time series is crucial for understanding the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive enough to reflect patient-specific temporal behaviors well, even when the available patient-specific data are sparse and cover only a short span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population-based and patient-specific time series prediction models in terms of prediction accuracy. PMID:27525189

  13. DUALITY IN MULTIVARIATE RECEPTOR MODEL. (R831078)

    EPA Science Inventory

    Multivariate receptor models are used for source apportionment of multiple observations of compositional data of air pollutants that obey mass conservation. Singular value decomposition of the data leads to two sets of eigenvectors. One set of eigenvectors spans a space in whi...

  14. Response Surface Modeling Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; DeLoach, Richard

    2001-01-01

    A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one factor at a time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
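
A sketch of the general approach on synthetic data (not the authors' wind tunnel data or their exact term-selection metric): candidate polynomial modeling functions are orthogonalized via QR, projections onto the orthogonal functions give the fit, and the coefficients of the original terms are recovered by back-substitution.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
a, b = rng.uniform(-1.0, 1.0, (2, n))   # two independent variables
y = 1.0 + 2.0 * a - 3.0 * b + 0.5 * a * b + rng.normal(0.0, 0.01, n)

# Candidate modeling functions: low-order polynomial terms
X = np.column_stack([np.ones(n), a, b, a * b, a ** 2, b ** 2])

# Orthogonalize the regressors (QR) and fit by least squares
Q, R = np.linalg.qr(X)
proj = Q.T @ y                   # projections onto the orthogonal functions
coef = np.linalg.solve(R, proj)  # coefficients of the original terms

resid = y - X @ coef
rms_pct = 100.0 * np.sqrt(np.mean(resid ** 2)) / abs(y.mean())
```

With orthogonal functions, each term's contribution to the fit can be assessed independently, which is what makes automatic term selection by a prediction error metric tractable.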

  15. A multivariate Bayesian model for embryonic growth.

    PubMed

    Willemsen, Sten P; Eilers, Paul H C; Steegers-Theunissen, Régine P M; Lesaffre, Emmanuel

    2015-04-15

    Most longitudinal growth curve models evaluate the evolution of each of the anthropometric measurements separately. When applied to a 'reference population', this exercise leads to univariate reference curves against which new individuals can be evaluated. However, growth should be evaluated in totality, that is, by evaluating all body characteristics jointly. Recently, Cole et al. suggested the Superimposition by Translation and Rotation (SITAR) model, which expresses individual growth curves by three subject-specific parameters indicating their deviation from a flexible overall growth curve. This model allows the characterization of normal growth in a flexible though compact manner. In this paper, we generalize the SITAR model in a Bayesian way to multiple dimensions. The multivariate SITAR model allows us to create multivariate reference regions, which is advantageous for prediction. The usefulness of the model is illustrated on longitudinal measurements of embryonic growth obtained in the first trimester of pregnancy, collected in the ongoing Rotterdam Predict study. Further, we demonstrate how the model can be used to find determinants of embryonic growth.

  16. Multivariate Markov chain modeling for stock markets

    NASA Astrophysics Data System (ADS)

    Maskawa, Jun-ichi

    2003-06-01

    We study a multivariate Markov chain model as a stochastic model of the price changes of portfolios in the framework of the mean field approximation. The time series of price changes are coded into sequences of up and down spins according to their signs. We start with a discussion of small portfolios consisting of two stock issues. The generalization of our model to a portfolio of arbitrary size is constructed by a recurrence relation. The resultant form of the joint probability of the stationary state coincides with the Gibbs measure assigned to each configuration of a spin glass model. Through the analysis of actual portfolios, it is shown that the synchronization of the direction of the price changes is well described by the model.
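
A toy illustration of the coding step and the synchronization it captures (synthetic returns, not the mean-field model itself): signs of correlated price changes are coded as ±1 spins, and the empirical joint distribution of the spin configurations measures how often the two issues move together.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 100_000
common = rng.normal(size=T)                 # shared market factor
r1 = common + rng.normal(size=T)            # price changes, issue 1
r2 = common + rng.normal(size=T)            # price changes, issue 2 (corr 0.5)
s1 = np.where(r1 >= 0, 1, -1)               # code signs as up/down spins
s2 = np.where(r2 >= 0, 1, -1)

# Empirical joint probability of the four spin configurations
joint = {(i, j): float(np.mean((s1 == i) & (s2 == j)))
         for i in (1, -1) for j in (1, -1)}

# Synchronization: probability that both issues move in the same direction
sync = joint[(1, 1)] + joint[(-1, -1)]
```

For jointly normal returns with correlation 0.5, the probability of matching signs is 1/2 + arcsin(0.5)/π ≈ 0.667, which the empirical estimate approaches.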

  17. Adaptable Multivariate Calibration Models for Spectral Applications

    SciTech Connect

    THOMAS,EDWARD V.

    1999-12-20

    Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.

  18. Flexible multivariate marginal models for analyzing multivariate longitudinal data, with applications in R.

    PubMed

    Asar, Ozgür; Ilk, Ozlem

    2014-07-01

    Most available multivariate statistical models require fitting separate parameters for the covariate effects on each of the multiple responses, which can be unnecessary and inefficient in some cases. In this article, we propose a modelling framework for multivariate marginal models to analyze multivariate longitudinal data that provides flexible model-building strategies. We show that the model handles several response families, such as binomial, count and continuous. We illustrate the model on the Kenya Morbidity data set. A simulation study is conducted to examine the parameter estimates. An R package, mmm2, is proposed to fit the model.

  19. Investigating College and Graduate Students' Multivariable Reasoning in Computational Modeling

    ERIC Educational Resources Information Center

    Wu, Hsin-Kai; Wu, Pai-Hsing; Zhang, Wen-Xin; Hsu, Ying-Shao

    2013-01-01

    Drawing upon the literature in computational modeling, multivariable reasoning, and causal attribution, this study aims at characterizing multivariable reasoning practices in computational modeling and revealing the nature of understanding about multivariable causality. We recruited two freshmen, two sophomores, two juniors, two seniors, four…

  20. mmm: an R package for analyzing multivariate longitudinal data with multivariate marginal models.

    PubMed

    Asar, Özgür; İlk, Özlem

    2013-12-01

    Modeling multivariate longitudinal data poses many challenges, both statistical and computational. Statistical challenges arise from complex dependence structures; computational challenges stem from complex algorithms, the use of numerical methods, and potential convergence problems. As a result, there is a lack of software for such data. This paper introduces mmm, an R package for marginal modeling of multivariate longitudinal data. Parameter estimation is carried out via the generalized estimating equations approach. A real-life data set is used to illustrate the core features of the package, and sample R code snippets are provided. It is shown that the multivariate marginal models considered in this paper and mmm are valid for binary, continuous and count multivariate longitudinal responses.
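
mmm itself is an R package; as a language-neutral sketch of the machinery it relies on, the generalized estimating equations reduce, for a single count response under an independence working correlation, to the Poisson GLM score equation, which Fisher scoring solves in a few lines. The data and coefficients below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2_000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(0.5 + 0.8 * x))   # one count response

# Fisher scoring for the estimating equation: sum_i X_i (y_i - exp(X_i' beta)) = 0
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    score = X.T @ (y - mu)
    info = X.T @ (X * mu[:, None])       # expected information matrix
    beta = beta + np.linalg.solve(info, score)
```

The full GEE adds a working correlation across the repeated and multiple responses, but the estimating-equation structure stays the same.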

  1. Multivariate Models of Adult Pacific Salmon Returns

    PubMed Central

    Burke, Brian J.; Peterson, William T.; Beckman, Brian R.; Morgan, Cheryl; Daly, Elizabeth A.; Litz, Marisa

    2013-01-01

    Most modeling and statistical approaches encourage simplicity, yet ecological processes are often complex, as they are influenced by numerous dynamic environmental and biological factors. Pacific salmon abundance has been highly variable over the last few decades and most forecasting models have proven inadequate, primarily because of a lack of understanding of the processes affecting variability in survival. Better methods and data for predicting the abundance of returning adults are therefore required to effectively manage the species. We combined 31 distinct indicators of the marine environment collected over an 11-year period into a multivariate analysis to summarize and predict adult spring Chinook salmon returns to the Columbia River in 2012. In addition to forecasts, this tool quantifies the strength of the relationship between various ecological indicators and salmon returns, allowing interpretation of ecosystem processes. The relative importance of indicators varied, but a few trends emerged. Adult returns of spring Chinook salmon were best described using indicators of bottom-up ecological processes such as composition and abundance of zooplankton and fish prey as well as measures of individual fish, such as growth and condition. Local indicators of temperature or coastal upwelling did not contribute as much as large-scale indicators of temperature variability, matching the spatial scale over which salmon spend the majority of their ocean residence. Results suggest that effective management of Pacific salmon requires multiple types of data and that no single indicator can represent the complex early-ocean ecology of salmon. PMID:23326586

  2. MULTIVARIATE RECEPTOR MODELS AND MODEL UNCERTAINTY. (R825173)

    EPA Science Inventory

    Estimation of the number of major pollution sources, the source composition profiles, and the source contributions are the main interests in multivariate receptor modeling. Due to lack of identifiability of the receptor model, however, the estimation cannot be...

  3. Modeling Baseline Shifts in Multivariate Disease Outbreak Detection

    PubMed Central

    Que, Jialan; Tsui, Fu-Chiang

    2013-01-01

    Objective Outbreak detection algorithms monitoring only disease-relevant data streams may be prone to false alarms due to baseline shifts. In this paper, we propose a Multinomial-Generalized-Dirichlet (MGD) model to adjust for baseline shifts. Introduction Population surges or large events may cause shifts in the data collected by biosurveillance systems [1]. For example, the Cherry Blossom Festival brings hundreds of thousands of people to DC every year, which results in simultaneous elevations in multiple data streams (Fig. 1). In this paper, we propose an MGD model to deal with baseline shifts. Methods Existing multivariate algorithms model only disease-relevant data streams (e.g., anti-fever medication sales or patient visits with constitutional syndrome for detection of a flu outbreak). In contrast, we also incorporate a non-disease-relevant data stream as a control factor. We assume that the counts from all data streams follow a Multinomial distribution. Under this distribution, the expected value of the distribution parameter is not subject to change during a baseline shift; however, it has to change in order to model an outbreak. Therefore, the distribution inherently adjusts for baseline shifts. In addition, we use the generalized Dirichlet (GD) distribution to model the parameter, since the GD distribution is one of the conjugate priors of the Multinomial [2]. We call this model the Multinomial-Generalized-Dirichlet (MGD) model. Results We applied the MGD model in our previously proposed Rank-Based Spatial Clustering (MRSC) algorithm [3]. We simulated both outbreak cases and baseline shift phenomena. The experiment includes two groups of data sets: the first includes data sets injected only with outbreak cases, and the second includes data sets with both outbreak cases and baseline shifts. We apply the MRSC algorithm and a reference method, the Multivariate Bayesian Scan Statistic (MBSS) algorithm (which only analyzes the disease
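
The intuition behind adding a control stream to a Multinomial model can be shown with made-up daily counts: a uniform population surge scales every stream and leaves the Multinomial proportion parameter unchanged, while an outbreak elevates one stream and shifts it.

```python
import numpy as np

# Made-up counts: two disease-relevant streams plus one control stream
baseline = np.array([500.0, 300.0, 200.0])
surge = 2.4 * baseline                             # uniform population surge
outbreak = baseline + np.array([400.0, 0.0, 0.0])  # one stream elevated

def proportions(counts):
    """Multinomial proportion parameter implied by the counts."""
    return counts / counts.sum()

p_base, p_surge, p_outbreak = (proportions(c) for c in (baseline, surge, outbreak))
same_under_surge = np.allclose(p_base, p_surge)            # True
changed_by_outbreak = not np.allclose(p_base, p_outbreak)  # True
```

This is why a model on the proportions "inherently adjusts" for baseline shifts: only a disproportionate elevation moves the parameter.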

  4. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    ERIC Educational Resources Information Center

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  5. Comparison between Mothers and Fathers in Coping with Autistic Children: A Multivariate Model

    ERIC Educational Resources Information Center

    Kaniel, Shlomo; Siman-Tov, Ayelet

    2011-01-01

    The main purpose of this research is to compare the differences between how mothers and fathers cope with autistic children based on a multivariate model that describes the relationships between parental psychological resources, parental stress appraisal and parental adjustment. 176 parents who lived in Israel (88 mothers and 88 fathers) of…

  6. Bayesian Analysis of Multivariate Probit Models with Surrogate Outcome Data

    ERIC Educational Resources Information Center

    Poon, Wai-Yin; Wang, Hai-Bin

    2010-01-01

    A new class of parametric models that generalize the multivariate probit model and the errors-in-variables model is developed to model and analyze ordinal data. A general model structure is assumed to accommodate the information that is obtained via surrogate variables. A hybrid Gibbs sampler is developed to estimate the model parameters. To…

  7. A unifying modeling framework for highly multivariate disease mapping.

    PubMed

    Botella-Rocamora, P; Martinez-Beneito, M A; Banerjee, S

    2015-04-30

    Multivariate disease mapping refers to the joint mapping of multiple diseases from regionally aggregated data and continues to be the subject of considerable attention for biostatisticians and spatial epidemiologists. The key issue is to map multiple diseases accounting for any correlations among themselves. Recently, Martinez-Beneito (2013) provided a unifying framework for multivariate disease mapping. While attractive in that it colligates a variety of existing statistical models for mapping multiple diseases, this and other existing approaches are computationally burdensome and preclude the multivariate analysis of moderate to large numbers of diseases. Here, we propose an alternative reformulation that accrues substantial computational benefits enabling the joint mapping of tens of diseases. Furthermore, the approach subsumes almost all existing classes of multivariate disease mapping models and offers substantial insight into the properties of statistical disease mapping models.

  8. A multivariate heuristic model for fuzzy time-series forecasting.

    PubMed

    Huarng, Kun-Huang; Yu, Tiffany Hui-Kuang; Hsu, Yu Wei

    2007-08-01

    Fuzzy time-series models have been widely applied due to their ability to handle nonlinear data directly and because no rigid assumptions for the data are needed. In addition, many such models have been shown to provide better forecasting results than their conventional counterparts. However, since most of these models require complicated matrix computations, this paper proposes the adoption of a multivariate heuristic function that can be integrated with univariate fuzzy time-series models into multivariate models. Such a multivariate heuristic function can easily be extended and integrated with various univariate models. Furthermore, the integrated model can handle multiple variables to improve forecasting results and, at the same time, avoid complicated computations due to the inclusion of multiple variables.

  9. Preliminary Multi-Variable Parametric Cost Model for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    This slide presentation reviews the creation of a preliminary multi-variable cost model for the contract costs of building a space telescope. It discusses the methodology for collecting the data, the definition of the statistical analysis methodology, single-variable model results, the testing of historical models, and an introduction of the multi-variable models.

  10. A Multivariate Model of Conceptual Change

    ERIC Educational Resources Information Center

    Taasoobshirazi, Gita; Heddy, Benjamin; Bailey, MarLynn; Farley, John

    2016-01-01

    The present study used the Cognitive Reconstruction of Knowledge Model (CRKM) model of conceptual change as a framework for developing and testing how key cognitive, motivational, and emotional variables are linked to conceptual change in physics. This study extends an earlier study developed by Taasoobshirazi and Sinatra ("J Res Sci…

  11. A Multivariate Model of Physics Problem Solving

    ERIC Educational Resources Information Center

    Taasoobshirazi, Gita; Farley, John

    2013-01-01

    A model of expertise in physics problem solving was tested on undergraduate science, physics, and engineering majors enrolled in an introductory-level physics course. Structural equation modeling was used to test hypothesized relationships among variables linked to expertise in physics problem solving including motivation, metacognitive planning,…

  12. A Multivariate Model of Achievement in Geometry

    ERIC Educational Resources Information Center

    Bailey, MarLynn; Taasoobshirazi, Gita; Carr, Martha

    2014-01-01

    Previous studies have shown that several key variables influence student achievement in geometry, but no research has been conducted to determine how these variables interact. A model of achievement in geometry was tested on a sample of 102 high school students. Structural equation modeling was used to test hypothesized relationships among…

  13. Robust Multivariable Controller Design via Implicit Model-Following Methods.

    DTIC Science & Technology

    1983-12-01

    Robust Multivariable Controller Design via Implicit Model-Following Methods. Thesis, AFIT/GE/EE/83D-48, William G. Miller, Capt, USAF. Air Force Institute of Technology, Wright-Patterson AFB, OH. Approved for public release; distribution unlimited.

  14. MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)

    EPA Science Inventory

    We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes and allows a global test of the impact of ...

  15. Multivariate Models of Mothers' and Fathers' Aggression toward Their Children

    ERIC Educational Resources Information Center

    Smith Slep, Amy M.; O'Leary, Susan G.

    2007-01-01

    Multivariate, biopsychosocial, explanatory models of mothers' and fathers' psychological and physical aggression toward their 3- to 7-year-old children were fitted and cross-validated in 453 representatively sampled families. Models explaining mothers' and fathers' aggression were substantially similar. Surprisingly, many variables identified as…

  16. Calibrated predictions for multivariate competing risks models.

    PubMed

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.
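
The calibration problem the abstract describes can be reproduced with two exponential competing hazards (synthetic rates, no frailty or family history): treating the competing event as independent right censoring targets the "net" quantity 1 − exp(−λ₁τ), which overstates the cumulative incidence of events actually observable by time τ.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
lam1, lam2, tau = 0.02, 0.03, 30.0
t1 = rng.exponential(1.0 / lam1, n)   # time to disease (cause 1)
t2 = rng.exponential(1.0 / lam2, n)   # time to competing event (cause 2)

# Cumulative incidence of disease by tau: disease occurs first, and by tau
ci = float(np.mean((t1 < t2) & (t1 <= tau)))

# Naive "net" estimate obtained by treating the competing event
# as independent censoring
naive = 1.0 - np.exp(-lam1 * tau)
```

Here ci ≈ (λ₁/(λ₁+λ₂))·(1 − exp(−(λ₁+λ₂)τ)) ≈ 0.31 while the naive estimate is ≈ 0.45, matching the paper's point that ignoring competing risks overestimates the expected number of events.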

  17. A generalized multivariate regression model for modelling ocean wave heights

    NASA Astrophysics Data System (ADS)

    Wang, X. L.; Feng, Y.; Swail, V. R.

    2012-04-01

    In this study, a generalized multivariate linear regression model is developed to represent the relationship between 6-hourly ocean significant wave heights (Hs) and the corresponding 6-hourly mean sea level pressure (MSLP) fields. The model is calibrated using the ERA-Interim reanalysis of Hs and MSLP fields for 1981-2000, and is validated using the ERA-Interim reanalysis for 2001-2010 and ERA40 reanalysis of Hs and MSLP for 1958-2001. The performance of the fitted model is evaluated in terms of Peirce skill score, frequency bias index, and correlation skill score. Because wave heights are not normally distributed, they are subjected to a data-adaptive Box-Cox transformation before being used in the model fitting. Also, since 6-hourly data are being modelled, lag-1 autocorrelation is accounted for. The models with and without Box-Cox transformation, and with and without accounting for autocorrelation, are inter-compared in terms of their prediction skills. The fitted MSLP-Hs relationship is then used to reconstruct historical wave height climate from the 6-hourly MSLP fields taken from the Twentieth Century Reanalysis (20CR, Compo et al. 2011), and to project possible future wave height climates using CMIP5 model simulations of MSLP fields. The reconstructed and projected wave heights, both seasonal means and maxima, are subject to a trend analysis that allows for non-linear (polynomial) trends.
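
    The data-adaptive Box-Cox step can be sketched as a one-parameter profile-likelihood search. This is a simplified stand-in for the paper's procedure; the data here are synthetic:

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform; the log transform is the lam -> 0 limit."""
    return np.log(x) if abs(lam) < 1e-8 else (x**lam - 1.0) / lam

def profile_loglik(x, lam):
    """Profile log-likelihood of lam, assuming Gaussian errors after transform."""
    y = boxcox(x, lam)
    return -0.5 * len(x) * np.log(y.var()) + (lam - 1.0) * np.log(x).sum()

def fit_lambda(x, grid=np.linspace(-2.0, 2.0, 81)):
    """Data-adaptive lambda: maximise the profile likelihood over a grid."""
    return max(grid, key=lambda lam: profile_loglik(x, lam))

rng = np.random.default_rng(1)
hs = rng.lognormal(mean=0.5, sigma=0.4, size=500)   # skewed synthetic "Hs"
lam = fit_lambda(hs)
y = boxcox(hs, lam)                                 # roughly symmetric now
```

    To handle the lag-1 autocorrelation the abstract mentions, the transformed series would additionally be pre-whitened (e.g. a Cochrane-Orcutt style correction) before fitting the regression.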

  18. FACTOR ANALYTIC MODELS OF CLUSTERED MULTIVARIATE DATA WITH INFORMATIVE CENSORING

    EPA Science Inventory

    This paper describes a general class of factor analytic models for the analysis of clustered multivariate data in the presence of informative missingness. We assume that there are distinct sets of cluster-level latent variables related to the primary outcomes and to the censorin...

  19. A Multivariate Descriptive Model of Motivation for Orthodontic Treatment.

    ERIC Educational Resources Information Center

    Hackett, Paul M. W.; And Others

    1993-01-01

    Motivation for receiving orthodontic treatment was studied among 109 young adults, and a multivariate model of the process is proposed. The combination of smallest scale analysis and Partial Order Scalogram Analysis by base Coordinates (POSAC) illustrates an interesting methodology for health treatment studies and explores motivation for dental…

  20. Multivariate Linear Models of the Multitrait-Multimethod Matrix.

    ERIC Educational Resources Information Center

    Wothke, Werner

    Several multivariate statistical methodologies have been proposed to ensure objective and quantitative evaluation of the multitrait-multimethod matrix. The paper examines the performance of confirmatory factor analysis and covariance component models. It is shown, both empirically and formally, that confirmatory factor analysis is not a reliable…

  1. Coercively Adjusted Auto Regression Model for Forecasting in Epilepsy EEG

    PubMed Central

    Kim, Sun-Hee; Faloutsos, Christos; Yang, Hyung-Jeong

    2013-01-01

    Recently, data with complex characteristics such as epilepsy electroencephalography (EEG) time series has emerged. Epilepsy EEG data has special characteristics including nonlinearity, nonnormality, and nonperiodicity. Therefore, it is important to find a suitable forecasting method that covers these special characteristics. In this paper, we propose a coercively adjusted autoregression (CA-AR) method that forecasts future values from a multivariable epilepsy EEG time series. We use the technique of random coefficients, which forcefully adjusts the coefficients with −1 and 1. The fractal dimension is used to determine the order of the CA-AR model. We applied the CA-AR method reflecting special characteristics of data to forecast the future value of epilepsy EEG data. Experimental results show that when compared to previous methods, the proposed method can forecast faster and more accurately. PMID:23710252
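
    The autoregressive core can be sketched as follows. The coercive-adjustment and fractal-dimension steps of CA-AR are specific to the paper; here the adjustment is loosely rendered as constraining the fitted coefficients to [-1, 1], and the data are a synthetic AR(1) series:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: x_t ~ sum_k a_k * x_{t-k}."""
    n = len(x)
    X = np.column_stack([x[p - 1 - k : n - 1 - k] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

rng = np.random.default_rng(5)
e = rng.normal(size=3000)
x = np.zeros(3000)
for t in range(1, 3000):                 # synthetic AR(1) "EEG" series
    x[t] = 0.8 * x[t - 1] + e[t]

coef = fit_ar(x, p=2)
# one loose reading of the "coercive adjustment": keep each coefficient
# inside [-1, 1] so the fitted model stays stable
coef = np.clip(coef, -1.0, 1.0)
forecast = coef @ x[-1:-3:-1]            # next-step forecast from last p values
```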

  2. Studying Resist Stochastics with the Multivariate Poisson Propagation Model

    DOE PAGES

    Naulleau, Patrick; Anderson, Christopher; Chao, Weilun; ...

    2014-01-01

    Progress in the ultimate performance of extreme ultraviolet resists has arguably decelerated in recent years, suggesting an approach to stochastic limits both in photon counts and material parameters. Here we report on the performance of a variety of leading extreme ultraviolet resists, both with and without chemical amplification. The measured performance is compared to stochastic modeling results using the Multivariate Poisson Propagation Model. The results show that the best materials are indeed nearing modeled performance limits.

  3. The multivariate Dirichlet-multinomial distribution and its application in forensic genetics to adjust for subpopulation effects using the θ-correction.

    PubMed

    Tvedebrink, Torben; Eriksen, Poul Svante; Morling, Niels

    2015-11-01

    In this paper, we discuss the construction of a multivariate generalisation of the Dirichlet-multinomial distribution. An example from forensic genetics in the statistical analysis of DNA mixtures motivates the study of this multivariate extension. In forensic genetics, adjustment of the match probabilities due to remote ancestry in the population is often done using the so-called θ-correction. This correction increases the probability of observing multiple copies of rare alleles in a subpopulation and thereby reduces the weight of the evidence for rare genotypes. A recent publication by Cowell et al. (2015) showed elegantly how to use Bayesian networks for efficient computations of likelihood ratios in a forensic genetic context. However, their underlying population genetic model assumed independence of alleles, which is not realistic in real populations. We demonstrate how the so-called θ-correction can be incorporated in Bayesian networks to make efficient computations by modifying the Markov structure of Cowell et al. (2015). By numerical examples, we show how the θ-correction incorporated in the multivariate Dirichlet-multinomial distribution affects the weight of evidence.
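
    The θ-correction the authors build into the Dirichlet-multinomial can be illustrated with the classical Balding-Nichols sampling formula, which raises match probabilities for rare genotypes in a structured population:

```python
def theta_corrected(m, n, p, theta):
    """Balding-Nichols sampling formula: probability that the next allele
    sampled is A, given m copies of A among n alleles already seen in the
    subpopulation, allele frequency p, and coancestry coefficient theta."""
    return (m * theta + (1 - theta) * p) / (1 + (n - 1) * theta)

p, theta = 0.02, 0.03                # rare allele, typical theta value
p_AA_indep = p * p                   # Hardy-Weinberg, no correction
# sample the two alleles of an AA homozygote sequentially
p_AA_theta = theta_corrected(0, 0, p, theta) * theta_corrected(1, 1, p, theta)
```

    With θ = 0 the formula reduces to the allele frequency itself, recovering Hardy-Weinberg genotype probabilities; with θ > 0 the second copy of a rare allele becomes more probable, which is exactly what reduces the weight of evidence for rare genotypes.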

  4. Bias and Precision of Measures of Association for a Fixed-Effect Multivariate Analysis of Variance Model

    ERIC Educational Resources Information Center

    Kim, Soyoung; Olejnik, Stephen

    2005-01-01

    The sampling distributions of five popular measures of association with and without two bias adjusting methods were examined for the single factor fixed-effects multivariate analysis of variance model. The number of groups, sample sizes, number of outcomes, and the strength of association were manipulated. The results indicate that all five…

  5. A pairwise interaction model for multivariate functional and longitudinal data.

    PubMed

    Chiou, Jeng-Min; Müller, Hans-Georg

    2016-06-01

    Functional data vectors consisting of samples of multivariate data where each component is a random function are encountered increasingly often but have not yet been comprehensively investigated. We introduce a simple pairwise interaction model that leads to an interpretable and straightforward decomposition of multivariate functional data and of their variation into component-specific processes and pairwise interaction processes. The latter quantify the degree of pairwise interactions between the components of the functional data vectors, while the component-specific processes reflect the functional variation of a particular functional vector component that cannot be explained by the other components. Thus the proposed model provides an extension of the usual notion of a covariance or correlation matrix for multivariate vector data to functional data vectors and generates an interpretable functional interaction map. The decomposition provided by the model can also serve as a basis for subsequent analysis, such as study of the network structure of functional data vectors. The decomposition of the total variance into componentwise and interaction contributions can be quantified by an [Formula: see text]-like decomposition. We provide consistency results for the proposed methods and illustrate the model by applying it to sparsely sampled longitudinal data from the Baltimore Longitudinal Study of Aging, examining the relationships between body mass index and blood fats.

  6. Analysis of Forest Foliage Using a Multivariate Mixture Model

    NASA Technical Reports Server (NTRS)

    Hlavka, C. A.; Peterson, David L.; Johnson, L. F.; Ganapol, B.

    1997-01-01

    Data with wet chemical measurements and near infrared spectra of ground leaf samples were analyzed to test a multivariate regression technique for estimating component spectra which is based on a linear mixture model for absorbance. The resulting unmixed spectra for carbohydrates, lignin, and protein resemble the spectra of extracted plant starches, cellulose, lignin, and protein. The unmixed protein spectrum has prominent absorption spectra at wavelengths which have been associated with nitrogen bonds.
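
    The linear mixture idea (absorbance as a concentration-weighted sum of component spectra) reduces to a least-squares unmixing problem. A synthetic sketch, with dimensions and data invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_bands, n_comp = 40, 60, 3
S_true = np.abs(rng.normal(size=(n_comp, n_bands)))   # component spectra
C = rng.dirichlet(np.ones(n_comp), size=n_samples)    # fractions, as from wet chemistry
A = C @ S_true + 0.01 * rng.normal(size=(n_samples, n_bands))  # absorbance

# least-squares unmixing: solve C @ S = A for the component spectra S
S_hat, *_ = np.linalg.lstsq(C, A, rcond=None)
```

    The rows of `S_hat` play the role of the "unmixed" carbohydrate, lignin, and protein spectra in the abstract.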

  7. Various forms of indexing HDMR for modelling multivariate classification problems

    SciTech Connect

    Aksu, Çağrı; Tunga, M. Alper

    2014-12-10

    The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy in partitioning the given multivariate data set into less variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems having real-world data. Mostly, we do not know all possible class values in the domain of the given problem, that is, we have a non-orthogonal data structure. However, Plain HDMR needs an orthogonal data structure in the problem to be modelled. In this sense, the main idea of this work is to offer various forms of Indexing HDMR to successfully model these real-life classification problems. To test these different forms, several well-known multivariate classification problems given in the UCI Machine Learning Repository were used, and it was observed that the accuracy results lie between 80% and 95%, which is very satisfactory.

  8. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2005-01-01

    A parametric cost model for ground-based telescopes is developed using multivariable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature are examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter are derived.

  9. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature were examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e. multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter were derived.

  10. Usual Dietary Intakes: SAS Macros for Fitting Multivariate Measurement Error Models & Estimating Multivariate Usual Intake Distributions

    Cancer.gov

    The following SAS macros can be used to create a multivariate usual intake distribution for multiple dietary components that are consumed nearly every day or episodically. A SAS macro for performing balanced repeated replication (BRR) variance estimation is also included.

  11. Enhancing scientific reasoning by refining students' models of multivariable causality

    NASA Astrophysics Data System (ADS)

    Keselman, Alla

    Inquiry learning as an educational method is gaining increasing support among elementary and middle school educators. In inquiry activities at the middle school level, students are typically asked to conduct investigations and infer causal relationships about multivariable causal systems. In these activities, students usually demonstrate significant strategic weaknesses and insufficient metastrategic understanding of task demands. Present work suggests that these weaknesses arise from students' deficient mental models of multivariable causality, in which effects of individual features are neither additive, nor constant. This study is an attempt to develop an intervention aimed at enhancing scientific reasoning by refining students' models of multivariable causality. Three groups of students engaged in a scientific investigation activity over seven weekly sessions. By creating unique combinations of five features potentially involved in earthquake mechanism and observing associated risk meter readings, students had to find out which of the features were causal, and to learn to predict earthquake risk. Additionally, students in the instructional and practice groups engaged in self-directed practice in making scientific predictions. The instructional group also participated in weekly instructional sessions on making predictions based on multivariable causality. Students in the practice and instructional conditions showed small to moderate improvement in their attention to the evidence and in their metastrategic ability to recognize effective investigative strategies in the work of other students. They also demonstrated a trend towards making a greater number of valid inferences than the control group students. Additionally, students in the instructional condition showed significant improvement in their ability to draw inferences based on multiple records. They also developed more accurate knowledge about non-causal features of the system. These gains were maintained

  12. Adjusting power for a baseline covariate in linear models

    PubMed Central

    Glueck, Deborah H.; Muller, Keith E.

    2009-01-01

    The analysis of covariance provides a common approach to adjusting for a baseline covariate in medical research. With Gaussian errors, adding random covariates does not change either the theory or the computations of general linear model data analysis. However, adding random covariates does change the theory and computation of power analysis. Many data analysts fail to fully account for this complication in planning a study. We present our results in five parts. (i) A review of published results helps document the importance of the problem and the limitations of available methods. (ii) A taxonomy for general linear multivariate models and hypotheses allows identifying a particular problem. (iii) We describe how random covariates introduce the need to consider quantiles and conditional values of power. (iv) We provide new exact and approximate methods for power analysis of a range of multivariate models with a Gaussian baseline covariate, for both small and large samples. The new results apply to the Hotelling-Lawley test and the four tests in the “univariate” approach to repeated measures (unadjusted, Huynh-Feldt, Geisser-Greenhouse, Box). The techniques allow rapid calculation and an interactive, graphical approach to sample size choice. (v) Calculating power for a clinical trial of a treatment for increasing bone density illustrates the new methods. We particularly recommend using quantile power with a new Satterthwaite-style approximation. PMID:12898543

  13. Multivariate data assimilation in an integrated hydrological modelling system

    NASA Astrophysics Data System (ADS)

    Madsen, Henrik; Zhang, Donghua; Ridler, Marc; Refsgaard, Jens Christian; Høgh Jensen, Karsten

    2016-04-01

    The immensely increasing availability of in-situ and remotely sensed hydrological data has offered new opportunities for monitoring and forecasting water resources by combining observation data with hydrological modelling. Efficient multivariate data assimilation in integrated groundwater-surface water hydrological modelling systems is required to fully utilize and optimally combine the different types of observation data. A particular challenge is the assimilation of observation data of different hydrological variables from different monitoring instruments, representing a wide range of spatial and temporal scales and different levels of uncertainty. A multivariate data assimilation framework has been implemented in the MIKE SHE integrated hydrological modelling system by linking the MIKE SHE code with a generic data assimilation library. The data assimilation library supports different state-of-the-art ensemble-based Kalman filter methods, and includes procedures for localisation, joint state, parameter and model error estimation, and bias-aware filtering. Furthermore, it supports use of different stochastic error models to describe model and measurement errors. Results are presented that demonstrate the use of the data assimilation framework for assimilation of different data types in a catchment-scale MIKE SHE model.

  14. Multivariate Statistical Modelling of Drought and Heat Wave Events

    NASA Astrophysics Data System (ADS)

    Manning, Colin; Widmann, Martin; Vrac, Mathieu; Maraun, Douglas; Bevaqua, Emanuele

    2016-04-01

    Compound extreme events are a combination of two or more contributing events which in themselves may not be extreme but through their joint occurrence produce an extreme impact. Compound events are noted in the latest IPCC report as an important type of extreme event that has been given little attention so far. As part of the CE:LLO project (Compound Events: muLtivariate statisticaL mOdelling) we are developing a multivariate statistical model to gain an understanding of the dependence structure of certain compound events. One focus of this project is the interaction between drought and heat wave events. Soil moisture has both a local and non-local effect on the occurrence of heat waves, where it strongly controls the latent heat flux affecting the transfer of sensible heat to the atmosphere. These processes can create a feedback whereby a heat wave may be amplified or suppressed by the soil moisture preconditioning and, vice versa, the heat wave may in turn have an effect on soil conditions. An aim of this project is to capture this dependence in order to correctly describe the joint probabilities of these conditions and the resulting probability of their compound impact. We will show an application of Pair Copula Constructions (PCCs) to study the aforementioned compound event. PCCs allow in theory for the formulation of multivariate dependence structures in any dimension, where the PCC is a decomposition of a multivariate distribution into a product of bivariate components modelled using copulas.
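
    The copula idea behind the PCC approach can be illustrated in the bivariate case: dependence between the drought and heat-wave margins inflates the probability of the compound event well above what independence would imply. A synthetic Gaussian-copula sketch (the correlation and thresholds are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho = 100_000, 0.7
# bivariate Gaussian copula sample: correlated normals -> uniform margins
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = (np.argsort(np.argsort(z[:, 0])) + 0.5) / n   # rank transform to (0, 1)
v = (np.argsort(np.argsort(z[:, 1])) + 0.5) / n

# probability both "drought" and "heat" indices exceed their 90th percentile
p_joint = np.mean((u > 0.9) & (v > 0.9))
p_indep = 0.1 * 0.1          # what an independence assumption would give
```

    With this dependence strength, `p_joint` comes out several times larger than the 1% implied by independence, which is the essence of a compound event.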

  15. Multivariate moment closure techniques for stochastic kinetic models

    SciTech Connect

    Lakatos, Eszter; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H.

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, it comes to an interplay between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to the previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinases signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.

  16. Multivariate moment closure techniques for stochastic kinetic models.

    PubMed

    Lakatos, Eszter; Ale, Angelique; Kirk, Paul D W; Stumpf, Michael P H

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, it comes to an interplay between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to the previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinases signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.
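
    A minimal univariate version of the idea (the paper's closures are multivariate): for a birth-death process with birth rate k1*x and death rate k2*x^2, the exact moment equations couple each moment to the next; a Gaussian closure truncates the hierarchy by expressing the third moment through the first two. All rates below are invented for illustration:

```python
import numpy as np

# Birth-death process: births at rate k1*x, deaths at rate k2*x^2.
# Exact moment equations: d<x>/dt couples to <x^2>, d<x^2>/dt to <x^3>.
k1, k2 = 10.0, 0.1
mu, m2 = 50.0, 2500.0            # initial mean and second moment
dt = 1e-3
for _ in range(20_000):          # Euler integration of the closed system
    var = m2 - mu**2
    m3 = 3.0 * mu * var + mu**3                    # Gaussian closure for <x^3>
    dmu = k1 * mu - k2 * m2
    dm2 = k1 * (2.0 * m2 + mu) + k2 * (-2.0 * m3 + m2)
    mu += dt * dmu
    m2 += dt * dm2
```

    The closed system settles slightly below the deterministic carrying capacity k1/k2 = 100, because fluctuations (var > 0) increase the mean death rate; capturing exactly this interplay between nonlinearity and noise is what motivates moment closure.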

  17. Quality Reporting of Multivariable Regression Models in Observational Studies: Review of a Representative Sample of Articles Published in Biomedical Journals.

    PubMed

    Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M

    2016-05-01

    Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, the validation of the assumptions of the multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study is to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) that were applied in analytical observational studies published between 2003 and 2014 by journals indexed in MEDLINE. Review of a representative sample of articles indexed in MEDLINE (n = 428) with observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting about: model assumptions and goodness-of-fit, interactions, sensitivity analysis, crude and adjusted effect estimate, and specification of more than 1 adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0-30.3) of the articles and 18.5% (95% CI: 14.8-22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the model selected as an adjustment method. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required in reporting the use of MRMs in the scientific literature.

  18. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    PubMed

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies.
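
    The likelihood of such a bivariate (continuous plus binary) response sequence under a hidden Markov model can be computed with the standard scaled forward algorithm, assuming the two responses are conditionally independent given the hidden state. The paper's correlated-random-effects layer is omitted in this sketch, and the parameters are invented:

```python
import numpy as np

def gauss_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def forward_loglik(cont, binary, pi, P, means, sds, probs):
    """Scaled forward algorithm for a bivariate (Gaussian, Bernoulli) HMM."""
    emit = gauss_pdf(cont[0], means, sds) * np.where(binary[0], probs, 1 - probs)
    alpha = pi * emit
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, len(cont)):
        emit = gauss_pdf(cont[t], means, sds) * np.where(binary[t], probs, 1 - probs)
        alpha = (alpha @ P) * emit            # predict, then weight by emission
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()                  # rescale to avoid underflow
    return ll

# two hidden states, loosely "smoking" vs "abstinent"
cont = np.array([0.0, 0.1, -0.2, 5.1, 4.9, 5.0])   # continuous response
binary = np.array([0, 0, 0, 1, 1, 1])              # binary response
pi = np.array([0.5, 0.5])
P = np.array([[0.9, 0.1], [0.1, 0.9]])
ll = forward_loglik(cont, binary, pi, P,
                    means=np.array([0.0, 5.0]), sds=np.ones(2),
                    probs=np.array([0.1, 0.9]))
```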

  19. Modeling longitudinal data, I: principles of multivariate analysis.

    PubMed

    Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick

    2009-01-01

    Statistical models are used to study the relationship between exposure and disease while accounting for the potential impact of other factors on outcomes. This adjustment is useful to obtain unbiased estimates of true effects or to predict future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error element of the model represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).
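
    The decomposition into a systematic component (coefficients) and an error component (used for confidence intervals) is concrete in ordinary least squares. A small synthetic example:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=200)  # systematic + error

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)         # systematic component
resid = y - X @ beta                                  # error component
sigma2 = resid @ resid / (len(y) - 2)
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
ci = np.column_stack([beta - 1.96 * se, beta + 1.96 * se])
```

    The 1.96 multiplier gives approximate 95% confidence intervals under Gaussian errors, illustrating how the error component drives the precision of the effect estimates.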

  20. Estimating Polygenic Models for Multivariate Data on Large Pedigrees

    PubMed Central

    Thompson, E. A.; Shaw, R. G.

    1992-01-01

    We have developed algorithms for the likelihood estimation of additive genetic models for quantitative traits on large pedigrees. The approach uses the expectation-maximization (EM) algorithm, but avoids intensive computation. In this paper, we focus on extensions of previous work to the case of multivariate data. We exemplify the approach by analyses of bivariate data on a four-generation, 949-member pedigree of the snail Lymnaea elodes, and on a three-generation pedigree of the guppy Poecilia reticulata containing about 400 individuals. PMID:1516823

  1. Multivariate Markovian modeling of tuberculosis: forecast for the United States.

    PubMed Central

    Debanne, S. M.; Bielefeld, R. A.; Cauthen, G. M.; Daniel, T. M.; Rowland, D. Y.

    2000-01-01

    We have developed a computer-implemented, multivariate Markov chain model to project tuberculosis (TB) incidence in the United States from 1980 to 2010 in disaggregated demographic groups. Uncertainty in model parameters and in the projections is represented by fuzzy numbers. Projections are made under the assumption that current TB control measures will remain unchanged for the projection period. The projections of the model demonstrate an intermediate increase in national TB incidence (similar to that which actually occurred) followed by continuing decline. The rate of decline depends strongly on geographic, racial, and ethnic characteristics. The model predicts that the rate of decline in the number of cases among Hispanics will be slower than among white non-Hispanics and black non-Hispanics, a prediction supported by the most recent data. PMID:10756148
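
    The disaggregated projection machinery reduces to repeated application of a transition matrix to a group-indexed state vector. Groups, rates, and counts below are invented, and the paper's fuzzy-number uncertainty handling is omitted:

```python
import numpy as np

# hypothetical 3-group example: project case counts forward by repeatedly
# applying a transition/decline matrix, as in a discrete Markov chain
groups = ["group_A", "group_B", "group_C"]
cases = np.array([1000.0, 800.0, 600.0])       # incident cases, year 0
# annual multiplicative decline differs by group; off-diagonal entries
# could model flows between demographic strata (zero here for simplicity)
M = np.diag([0.95, 0.97, 0.90])

trajectory = [cases]
for _ in range(30):                            # project 30 years ahead
    cases = M @ cases
    trajectory.append(cases)
trajectory = np.array(trajectory)
```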

  2. Optimal model-free prediction from multivariate time series

    NASA Astrophysics Data System (ADS)

    Runge, Jakob; Donner, Reik V.; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.

  3. Optimal model-free prediction from multivariate time series.

    PubMed

    Runge, Jakob; Donner, Reik V; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.
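
    The two-stage scheme (preselect the most predictive drivers, then forecast model-free in the reduced space) can be sketched as follows. The correlation-based screening here is a simple stand-in for the paper's information-theoretic causal preselection, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
X = rng.normal(size=(n, 5))                 # five candidate predictors
y = np.zeros(n)
# the target is driven (at lag 1) by predictors 0 and 3 only
y[1:] = 0.8 * X[:-1, 0] - 0.6 * X[:-1, 3] + 0.3 * rng.normal(size=n - 1)

# preselection: score candidates by absolute lagged correlation with y
scores = [abs(np.corrcoef(X[:-1, j], y[1:])[0, 1]) for j in range(5)]
keep = np.argsort(scores)[-2:]              # most predictive subset

F, T = X[:-1][:, keep], y[1:]               # aligned predictor/target pairs
F_train, T_train = F[:1500], T[:1500]
F_test, T_test = F[1500:], T[1500:]

def knn_forecast(q, k=10):
    """Model-free nearest-neighbour forecast in the reduced space."""
    d = np.linalg.norm(F_train - q, axis=1)
    return T_train[np.argsort(d)[:k]].mean()

preds = np.array([knn_forecast(q) for q in F_test])
mse = np.mean((preds - T_test) ** 2)
```

    Screening down to two predictors keeps the nearest-neighbour step away from the curse of dimensionality that the abstract describes for larger predictor sets.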

  4. Mark-specific hazard ratio model with missing multivariate marks.

    PubMed

    Juraska, Michal; Gilbert, Peter B

    2016-10-01

    An objective of randomized placebo-controlled preventive HIV vaccine efficacy (VE) trials is to assess the relationship between vaccine effects to prevent HIV acquisition and continuous genetic distances of the exposing HIVs to multiple HIV strains represented in the vaccine. The set of genetic distances, only observed in failures, is collectively termed the 'mark.' The objective has motivated a recent study of a multivariate mark-specific hazard ratio model in the competing risks failure time analysis framework. Marks of interest, however, are commonly subject to substantial missingness, largely due to rapid post-acquisition viral evolution. In this article, we investigate the mark-specific hazard ratio model with missing multivariate marks and develop two inferential procedures based on (i) inverse probability weighting (IPW) of the complete cases, and (ii) augmentation of the IPW estimating functions by leveraging auxiliary data predictive of the mark. Asymptotic properties and finite-sample performance of the inferential procedures are presented. This research also provides general inferential methods for semiparametric density ratio/biased sampling models with missing data. We apply the developed procedures to data from the HVTN 502 'Step' HIV VE trial.
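
    The inverse probability weighting (IPW) idea behind procedure (i) can be sketched in a simplified setting (a hypothetical scalar "mark" and known observation probabilities; the actual procedures handle estimated probabilities and the hazard ratio model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
w = rng.normal(size=n)                       # auxiliary covariate, fully observed
mark = 2.0 + w + rng.normal(size=n)          # mark, observed only when r is True
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + w)))     # observation probability (known here;
                                             # in practice estimated, e.g. by logistic regression)
r = rng.uniform(size=n) < p_obs              # complete-case indicator

naive = mark[r].mean()                       # complete-case mean: biased upward,
                                             # since high-w subjects are observed more often
ipw = np.sum(mark[r] / p_obs[r]) / np.sum(1.0 / p_obs[r])   # IPW (Hajek) estimate
```

    Weighting each complete case by the inverse of its observation probability restores an (approximately) unbiased estimate of the population mean mark, here 2.0.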

  5. Multivariate Models for Normal and Binary Responses in Intervention Studies

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen

    2016-01-01

    Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…

  6. Multivariable frequency weighted model order reduction for control synthesis

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.

    1989-01-01

    Quantitative criteria are presented for model simplification, or order reduction, such that the reduced order model may be used to synthesize and evaluate a control law, and the stability robustness obtained using the reduced order model will be preserved when controlling the full-order system. The error introduced due to model simplification is treated as modeling uncertainty, and some of the results from multivariate robustness theory are brought to bear on the model simplification problem. A numerical procedure developed previously is shown to lead to results that meet the necessary criteria. The procedure is applied to reduce the model of a flexible aircraft. Also, the importance of the control law itself, in meeting the modeling criteria, is underscored. An example is included that demonstrates that an apparently robust control law actually amplifies modest modeling errors in the critical frequency region, and leads to undesirable results. The cause of this problem is associated with the canceling of lightly damped transmission zeroes in the plant. An attempt is made to expand on some of the earlier results and to further clarify the theoretical basis behind the proposed methodology.

  7. Implementation of a multivariate regional index-flood model

    NASA Astrophysics Data System (ADS)

    Requena, Ana Isabel; Chebana, Fateh; Mediero, Luis; Garrote, Luis

    2014-05-01

    A multivariate flood frequency approach is required to obtain appropriate estimates of the design flood associated with a given return period, as the nature of floods is multivariate. A regional frequency analysis is usually conducted to procure estimates or reduce the corresponding uncertainty when no information is available at ungauged sites or a short record is observed at gauged sites. In the present study a multivariate regional methodology based on the index-flood model is presented, seeking to enrich and complete the existing methods by i) considering more general two-parameter copulas for simulating synthetic homogeneous regions to test homogeneity; ii) using the latest definitions of bivariate return periods for quantile estimation; and iii) applying recent procedures for the selection of a subset of bivariate design events from the wider quantile curves. A complete description of the selection processes of both marginal distributions and copula is also included. The proposed methodology provides an entire procedure focused on its practical application. The proposed methodology was applied to a case study located in the Ebro basin in the north of Spain. Series of annual maximum flow peaks (Q) and their associated hydrograph volumes (V) were selected as flood variables. The initial region was divided into two homogeneous sub-regions by a cluster analysis and a multivariate homogeneity test. The Gumbel and Generalised Extreme Value distributions were selected as marginal distributions to fit the two flood variables. The BB1 copula was found to be the best regional copula for characterising the dependence relation between variables. The OR bivariate joint return period related to the (non-exceedance) probability of the event {Q ≤ q ∧ V ≤ v} was considered for quantile estimation. The index flood was based on the mean of the flood variables. Multiple linear regressions were used to estimate the index flood at ungauged sites. Basin concentration time…

  8. A Gaussian Copula Model for Multivariate Survival Data

    PubMed Central

    Othus, Megan; Li, Yi

    2011-01-01

    We consider a Gaussian copula model for multivariate survival times. Estimation of the copula association parameter is easily implemented with existing software using a two-stage estimation procedure. Using the Gaussian copula, we are able to test whether the association parameter is equal to zero. When the association term is positive, the model can be extended to incorporate cluster-level frailty terms. Asymptotic properties are derived under the two-stage estimation scheme. Simulation studies verify finite sample utility. We apply the method to a Children’s Oncology Group multi-center study of acute lymphoblastic leukemia. The analysis estimates marginal treatment effects and examines potential clustering within treatment institution. PMID:22162742
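
    The two-stage estimation idea can be sketched for uncensored data (a simplification: the paper's setting involves censored survival times; data and seed here are invented). Stage 1 estimates the margins, stage 2 estimates the Gaussian copula association from the normal scores:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, rho_true = 5000, 0.6
# Bivariate data with a Gaussian copula and exponential margins:
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho_true], [rho_true, 1.0]], size=n)
u = norm.cdf(z)
t1, t2 = -np.log(1.0 - u[:, 0]), -np.log(1.0 - u[:, 1])   # 'survival times'

# Stage 1: estimate the margins (empirical CDFs via ranks).
u1 = (np.argsort(np.argsort(t1)) + 0.5) / n
u2 = (np.argsort(np.argsort(t2)) + 0.5) / n
# Stage 2: the association parameter is the correlation of the normal scores.
rho_hat = np.corrcoef(norm.ppf(u1), norm.ppf(u2))[0, 1]
```

    Testing whether the association parameter equals zero then reduces to a test on this correlation.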

  9. A multivariate variational objective analysis-assimilation method. Part 1: Development of the basic model

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.; Ochs, Harry T., III

    1988-01-01

    The variational method of undetermined multipliers is used to derive a multivariate model for objective analysis. The model is intended for the assimilation of 3-D fields of rawinsonde height, temperature and wind, and mean level temperature observed by satellite into a dynamically consistent data set. Relative measurement errors are taken into account. The dynamic equations are the two nonlinear horizontal momentum equations, the hydrostatic equation, and an integrated continuity equation. The model Euler-Lagrange equations are eleven linear and/or nonlinear partial differential and/or algebraic equations. A cyclical solution sequence is described. Other model features include a nonlinear terrain-following vertical coordinate that eliminates truncation error in the pressure gradient terms of the horizontal momentum equations and easily accommodates satellite observed mean layer temperatures in the middle and upper troposphere. A projection of the pressure gradient onto equivalent pressure surfaces removes most of the adverse impacts of the lower coordinate surface on the variational adjustment.

  10. Multivariate screening in food adulteration: untargeted versus targeted modelling.

    PubMed

    López, M Isabel; Trullols, Esther; Callao, M Pilar; Ruisánchez, Itziar

    2014-03-15

    Two multivariate screening strategies (untargeted and targeted modelling) have been developed to compare their ability to detect food fraud. As a case study, possible adulteration of hazelnut paste is considered. Two different adulterants were studied, almond paste and chickpea flour. The models were developed from near-infrared (NIR) data coupled with soft independent modelling of class analogy (SIMCA) as a classification technique. Regarding the untargeted strategy, only unadulterated samples were modelled, obtaining 96.3% correct classification. The prediction of adulterated samples gave errors between 2% and 5.5%. Regarding targeted modelling, two classes were modelled: Class 1 (unadulterated samples) and Class 2 (almond-adulterated samples). Samples adulterated with chickpea were predicted to prove the model's ability to deal with non-modelled adulterants. The results show that samples adulterated with almond were mainly classified in their own class (90.9%) and samples with chickpea were classified in Class 2 (67.3%) or not in any class (30.9%), but none were classified solely as unadulterated.
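
    The untargeted (one-class) strategy can be sketched as a SIMCA-style model: fit principal components to the unadulterated class only, then flag new samples whose residual distance from that subspace exceeds a limit set on the training data. All data here are synthetic stand-ins for NIR spectra:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical 'spectra': 100 unadulterated samples on 50 wavelengths,
# lying near a 3-dimensional subspace plus noise.
basis = rng.normal(size=(3, 50))
train = rng.normal(size=(100, 3)) @ basis + 0.05 * rng.normal(size=(100, 50))

mean = train.mean(axis=0)
# Principal components of the target class via SVD:
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = vt[:3]                                # retain 3 components

def residual_distance(x):
    """Norm of the part of x not explained by the class model's PCs."""
    c = x - mean
    return np.linalg.norm(c - (c @ pcs.T) @ pcs)

# Critical limit from the training residuals (95th percentile):
limit = np.quantile([residual_distance(x) for x in train], 0.95)

ok = rng.normal(size=3) @ basis + 0.05 * rng.normal(size=50)   # in-class sample
adulterated = ok + 0.5 * rng.normal(size=50)                   # off-model sample
```

    Samples beyond the limit are flagged as suspect without ever modelling the adulterant itself, which is what lets the untargeted strategy catch adulterants it has never seen.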

  11. Hidden Markov latent variable models with multivariate longitudinal data.

    PubMed

    Song, Xinyuan; Xia, Yemao; Zhu, Hongtu

    2017-03-01

    Cocaine addiction is chronic and persistent, and has become a major social and health problem in many countries. Existing studies have shown that cocaine addicts often undergo episodic periods of addiction to, moderate dependence on, or swearing off cocaine. Given its reversible feature, cocaine use can be formulated as a stochastic process that transits from one state to another, while the impacts of various factors, such as treatment received and individuals' psychological problems on cocaine use, may vary across states. This article develops a hidden Markov latent variable model to study multivariate longitudinal data concerning cocaine use from a California Civil Addict Program. The proposed model generalizes conventional latent variable models to allow bidirectional transition between cocaine-addiction states and conventional hidden Markov models to allow latent variables and their dynamic interrelationship. We develop a maximum-likelihood approach, along with a Monte Carlo expectation conditional maximization (MCECM) algorithm, to conduct parameter estimation. The asymptotic properties of the parameter estimates and statistics for testing the heterogeneity of model parameters are investigated. The finite sample performance of the proposed methodology is demonstrated by simulation studies. The application to cocaine use study provides insights into the prevention of cocaine use.
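
    The hidden-Markov backbone of such a model (state transitions plus state-dependent observations) can be sketched with a two-state chain and Gaussian emissions; the likelihood is computed by the standard scaled forward algorithm. States, parameters, and data are invented for illustration, and the paper's latent-variable layer and MCECM estimation are omitted:

```python
import numpy as np

# Two latent states (e.g. 'addicted', 'abstinent'), Gaussian emissions.
A = np.array([[0.9, 0.1],      # state transition probabilities
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])      # initial state distribution
means, sd = np.array([2.0, -2.0]), 1.0

def gauss(y, mu):
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def log_likelihood(ys):
    """Forward algorithm with per-step normalisation for numerical stability."""
    alpha = pi * gauss(ys[0], means)
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for y in ys[1:]:
        alpha = (alpha @ A) * gauss(y, means)
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

ys = np.array([2.1, 1.8, -1.9, -2.2, 2.0])
ll = log_likelihood(ys)
```

    The forward pass marginalises over all latent state paths in linear time, which is the computational core of any EM-type estimation for such models.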

  12. Sparse multivariate autoregressive modeling for mild cognitive impairment classification.

    PubMed

    Li, Yang; Wee, Chong-Yaw; Jie, Biao; Peng, Ziwen; Shen, Dinggang

    2014-07-01

    Brain connectivity networks derived from functional magnetic resonance imaging (fMRI) are becoming increasingly prevalent in research related to cognitive and perceptual processes. The capability to detect causal or effective connectivity is highly desirable for understanding the cooperative nature of the brain network, particularly when the ultimate goal is to obtain good performance of control-patient classification with biologically meaningful interpretations. Understanding directed functional interactions between brain regions via a brain connectivity network is a challenging task. Since many genetic and biomedical networks are intrinsically sparse, incorporating the sparsity property into connectivity modeling can make the derived models more biologically plausible. Accordingly, we propose an effective connectivity modeling of resting-state fMRI data based on the multivariate autoregressive (MAR) modeling technique, which is widely used to characterize temporal information of dynamic systems. This MAR modeling technique allows for the identification of effective connectivity using the Granger causality concept and reduces spurious causal connectivity in the assessment of directed functional interaction from fMRI data. A forward orthogonal least squares (OLS) regression algorithm is further used to construct a sparse MAR model. By applying the proposed modeling to mild cognitive impairment (MCI) classification, we identify several of the most discriminative regions, including middle cingulate gyrus, posterior cingulate gyrus, lingual gyrus and caudate regions, in line with results reported in previous findings. A relatively high classification accuracy of 91.89% is also achieved, with an increment of 5.4% compared to the fully-connected, non-directional Pearson-correlation-based functional connectivity approach.

  13. Sparse Multivariate Autoregressive Modeling for Mild Cognitive Impairment Classification

    PubMed Central

    Li, Yang; Wee, Chong-Yaw; Jie, Biao; Peng, Ziwen

    2014-01-01

    Brain connectivity networks derived from functional magnetic resonance imaging (fMRI) are becoming increasingly prevalent in research related to cognitive and perceptual processes. The capability to detect causal or effective connectivity is highly desirable for understanding the cooperative nature of the brain network, particularly when the ultimate goal is to obtain good performance of control-patient classification with biologically meaningful interpretations. Understanding directed functional interactions between brain regions via a brain connectivity network is a challenging task. Since many genetic and biomedical networks are intrinsically sparse, incorporating the sparsity property into connectivity modeling can make the derived models more biologically plausible. Accordingly, we propose an effective connectivity modeling of resting-state fMRI data based on the multivariate autoregressive (MAR) modeling technique, which is widely used to characterize temporal information of dynamic systems. This MAR modeling technique allows for the identification of effective connectivity using the Granger causality concept and reduces spurious causal connectivity in the assessment of directed functional interaction from fMRI data. A forward orthogonal least squares (OLS) regression algorithm is further used to construct a sparse MAR model. By applying the proposed modeling to mild cognitive impairment (MCI) classification, we identify several of the most discriminative regions, including middle cingulate gyrus, posterior cingulate gyrus, lingual gyrus and caudate regions, in line with results reported in previous findings. A relatively high classification accuracy of 91.89% is also achieved, with an increment of 5.4% compared to the fully-connected, non-directional Pearson-correlation-based functional connectivity approach. PMID:24595922
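
    The MAR backbone of this approach can be sketched on synthetic data: simulate a sparse first-order MAR (VAR(1)) process, fit it by least squares, and sparsify. The paper's forward OLS selection is replaced here by simple coefficient thresholding for brevity; data, seed, and threshold are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
# Sparse 3-variable first-order MAR (VAR(1)) process with two causal links:
A_true = np.array([[0.5, 0.0, 0.0],
                   [0.6, 0.4, 0.0],
                   [0.0, 0.0, 0.3]])
T = 4000
x = np.zeros((T, 3))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + 0.5 * rng.normal(size=3)

# Ordinary least-squares MAR fit: regress x_t on x_{t-1}.
X, Y = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Crude sparsification: keep only coefficients clearly above the noise level.
A_sparse = np.where(np.abs(A_hat) > 0.1, A_hat, 0.0)
```

    The nonzero pattern of `A_sparse` is the estimated directed (Granger-causal) connectivity graph: entry (i, j) nonzero means past values of variable j help predict variable i.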

  14. Testing a theoretical model for examining the relationship between family adjustment and expatriates' work adjustment.

    PubMed

    Caligiuri, P M; Hyland, M M; Joshi, A; Bross, A S

    1998-08-01

    Based on theoretical perspectives from the work/family literature, this study tested a model for examining expatriate families' adjustment while on global assignments as an antecedent to expatriates' adjustment to working in a host country. Data were collected from 110 families that had been relocated for global assignments. Longitudinal data, assessing family characteristics before the assignment and cross-cultural adjustment approximately 6 months into the assignment, were coded. This study found that family characteristics (family support, family communication, family adaptability) were related to expatriates' adjustment to working in the host country. As hypothesized, the families' cross-cultural adjustment mediated the effect of family characteristics on expatriates' host-country work adjustment.

  15. Multivariate models of inter-subject anatomical variability

    PubMed Central

    Ashburner, John; Klöppel, Stefan

    2011-01-01

    This paper presents a very selective review of some of the approaches for multivariate modelling of inter-subject variability among brain images. It focusses on applying probabilistic kernel-based pattern recognition approaches to pre-processed anatomical MRI, with the aim of most accurately modelling the difference between populations of subjects. Some of the principles underlying the pattern recognition approaches of Gaussian process classification and regression are briefly described, although the reader is advised to look elsewhere for full implementational details. Kernel pattern recognition methods require matrices that encode the degree of similarity between the images of each pair of subjects. This review focusses on similarity measures derived from the relative shapes of the subjects' brains. Pre-processing is viewed as generative modelling of anatomical variability, and there is a special emphasis on the diffeomorphic image registration framework, which provides a very parsimonious representation of relative shapes. Although the review is largely methodological, excessive mathematical notation is avoided as far as possible, as the paper attempts to convey a more intuitive understanding of various concepts. The paper should be of interest to readers wishing to apply pattern recognition methods to MRI data, with the aim of clinical diagnosis or biomarker development. It also tries to explain that the best models are those that most accurately predict, so similar approaches should also be relevant to basic science. Knowledge of some basic linear algebra and probability theory should make the review easier to follow, although it may still have something to offer to those readers whose mathematics may be more limited. PMID:20347998

  16. Design and tuning of standard additive model based fuzzy PID controllers for multivariable process systems.

    PubMed

    Harinath, Eranda; Mann, George K I

    2008-06-01

    This paper describes a design and two-level tuning method for fuzzy proportional-integral-derivative (FPID) controllers for a multivariable process, where the fuzzy inference uses the standard additive model. The proposed method can be used for any n x n multi-input-multi-output process and guarantees closed-loop stability. In the two-level tuning scheme, the tuning follows two steps: low-level tuning followed by high-level tuning. The low-level tuning adjusts apparent linear gains, whereas the high-level tuning changes the nonlinearity in the normalized fuzzy output. In this paper, two types of FPID configurations are considered, and their performances are evaluated by using a real-time multizone temperature control problem having a 3 x 3 process system.
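
    The standard additive model (SAM) inference at the core of such controllers can be sketched in one dimension: the crisp output is the firing-strength-weighted centroid of the rule consequents. Membership centers and consequents below are invented, not taken from the paper:

```python
import numpy as np

# Standard additive model (SAM) fuzzy inference in one dimension.
centers = np.array([-1.0, 0.0, 1.0])       # antecedent membership centers
consequents = np.array([-2.0, 0.0, 2.0])   # rule consequent centroids

def sam(x, width=1.0):
    """Crisp output: weighted centroid of consequents, weights = firing strengths."""
    w = np.exp(-0.5 * ((x - centers) / width) ** 2)   # Gaussian memberships
    return float(np.sum(w * consequents) / np.sum(w))
```

    In a two-level tuning scheme of the kind described, a low-level step would scale the apparent linear gain of this input-output map, while a high-level step would reshape its nonlinearity by moving the consequent centroids.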

  17. Multivariate dynamical modelling of structural change during development.

    PubMed

    Ziegler, Gabriel; Ridgway, Gerard R; Blakemore, Sarah-Jayne; Ashburner, John; Penny, Will

    2017-02-15

    Here we introduce a multivariate framework for characterising longitudinal changes in structural MRI using dynamical systems. The general approach enables modelling changes of states in multiple imaging biomarkers typically observed during brain development, plasticity, ageing and degeneration, e.g. regional gray matter volume of multiple regions of interest (ROIs). Structural brain states follow intrinsic dynamics according to a linear system with additional inputs accounting for potential driving forces of brain development. In particular, the inputs to the system are specified to account for known or latent developmental growth/decline factors, e.g. due to effects of growth hormones, puberty, or sudden behavioural changes etc. Because effects of developmental factors might be region-specific, the sensitivity of each ROI to contributions of each factor is explicitly modelled. In addition to the external effects of developmental factors on regional change, the framework enables modelling and inference about directed (potentially reciprocal) interactions between brain regions, due to competition for space, or structural connectivity, and suchlike. This approach accounts for repeated measures in typical MRI studies of development and aging. Model inversion and posterior distributions are obtained using earlier established variational methods enabling Bayesian evidence-based comparisons between various models of structural change. Using this approach we demonstrate dynamic cortical changes during brain maturation between 6 and 22 years of age using a large openly available longitudinal paediatric dataset with 637 scans from 289 individuals. In particular, we model volumetric changes in 26 bilateral ROIs, which cover large portions of cortical and subcortical gray matter. We account for (1) puberty-related effects on gray matter regions; (2) effects of an early transient growth process with additional time-lag parameter; (3) sexual dimorphism by modelling parameter

  18. Effect of flux adjustments on temperature variability in climate models

    NASA Astrophysics Data System (ADS)

    CMIP investigators; Duffy, P. B.; Bell, J.; Covey, C.; Sloan, L.

    2000-03-01

    It has been suggested that “flux adjustments” in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.

  19. Comparison of Multidimensional Item Response Models: Multivariate Normal Ability Distributions versus Multivariate Polytomous Ability Distributions. Research Report. ETS RR-08-45

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; von Davier, Matthias; Lee, Yi-Hsuan

    2008-01-01

    Multidimensional item response models can be based on multivariate normal ability distributions or on multivariate polytomous ability distributions. For the case of simple structure in which each item corresponds to a unique dimension of the ability vector, some applications of the two-parameter logistic model to empirical data are employed to…

  20. Modeling multivariate survival data by a semiparametric random effects proportional odds model.

    PubMed

    Lam, K F; Lee, Y W; Leung, T L

    2002-06-01

    In this article, the focus is on the analysis of multivariate survival time data with various types of dependence structures. Examples of multivariate survival data include clustered data and repeated measurements from the same subject, such as the interrecurrence times of cancer tumors. A random effect semiparametric proportional odds model is proposed as an alternative to the proportional hazards model. The distribution of the random effects is assumed to be multivariate normal and the random effect is assumed to act additively to the baseline log-odds function. This class of models, which includes the usual shared random effects model, the additive variance components model, and the dynamic random effects model as special cases, is highly flexible and is capable of modeling a wide range of multivariate survival data. A unified estimation procedure is proposed to estimate the regression and dependence parameters simultaneously by means of a marginal-likelihood approach. Unlike the fully parametric case, the regression parameter estimate is not sensitive to the choice of correlation structure of the random effects. The marginal likelihood is approximated by the Monte Carlo method. Simulation studies are carried out to investigate the performance of the proposed method. The proposed method is applied to two well-known data sets, including clustered data and recurrent event times data.
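
    The Monte Carlo marginal-likelihood idea can be sketched in the simplest random-effects setting (a single cluster with a Normal random intercept and Normal errors, not the paper's proportional odds model; all numbers are invented). The conditional likelihood is averaged over draws of the random effect:

```python
import numpy as np

rng = np.random.default_rng(5)
# Marginal likelihood of one cluster: integrate the conditional likelihood over
# a Normal(0, sigma_b^2) random effect b by plain Monte Carlo.
y = np.array([1.2, 0.8, 1.5])            # observations in one cluster
sigma_b, sigma_e = 1.0, 0.5

def cond_lik(b):
    """Conditional likelihood of the cluster given random effect b."""
    return np.prod(np.exp(-0.5 * ((y - b) / sigma_e) ** 2) /
                   (sigma_e * np.sqrt(2.0 * np.pi)))

draws = rng.normal(0.0, sigma_b, size=200000)
mc_marginal = np.mean([cond_lik(b) for b in draws])
```

    In this Gaussian toy case the marginal likelihood is available in closed form (a multivariate normal density with compound-symmetric covariance), which makes it a convenient check on the Monte Carlo approximation; in the semiparametric proportional odds setting no such closed form exists, hence the Monte Carlo approach.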

  1. Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models

    PubMed Central

    Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong

    2015-01-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955

  2. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    PubMed

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case.
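
    The Pillai-Bartlett trace used by these tests can be sketched directly from the hypothesis and error sums-of-squares-and-cross-products (SSCP) matrices of a multivariate regression of several traits on several predictors. Data and effect sizes below are invented:

```python
import numpy as np

rng = np.random.default_rng(6)
# Pillai-Bartlett trace for association between q traits Y and p predictors X:
# trace of H (H + E)^{-1}, from hypothesis (H) and error (E) SSCP matrices.
n, p, q = 200, 2, 3                      # samples, predictors, traits
X = rng.normal(size=(n, p))
B = np.array([[0.5, 0.0, 0.3],
              [0.0, 0.4, 0.0]])
Y = X @ B + rng.normal(size=(n, q))

Xd = np.column_stack([np.ones(n), X])    # design matrix with intercept
beta = np.linalg.lstsq(Xd, Y, rcond=None)[0]
resid = Y - Xd @ beta
E = resid.T @ resid                      # error SSCP
Yc = Y - Y.mean(axis=0)
H = Yc.T @ Yc - E                        # hypothesis SSCP for the X effect
pillai = np.trace(H @ np.linalg.inv(H + E))
```

    The statistic is bounded between 0 and min(p, q); the approximate F-tests in the abstract transform it (and the Hotelling-Lawley and Wilks statistics) to an F reference distribution.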

  3. A multivariate spatial crash frequency model for identifying sites with promise based on crash types.

    PubMed

    Aguero-Valverde, Jonathan; Wu, Kun-Feng Ken; Donnell, Eric T

    2016-02-01

    Many studies have proposed the use of a systemic approach to identify sites with promise (SWiPs). Proponents of the systemic approach to road safety management suggest that it is more effective in reducing crash frequency than the traditional hot spot approach. The systemic approach aims to identify SWiPs by crash type(s) and, therefore, effectively connects crashes to their corresponding countermeasures. Nevertheless, a major challenge to implementing this approach is the low precision of crash frequency models, which results from the systemic approach considering subsets (crash types) of total crashes leading to higher variability in modeling outcomes. This study responds to the need for more precise statistical output and proposes a multivariate spatial model for simultaneously modeling crash frequencies for different crash types. The multivariate spatial model not only induces a multivariate correlation structure between crash types at the same site, but also spatial correlation among adjacent sites to enhance model precision. This study utilized crash, traffic, and roadway inventory data on rural two-lane highways in Pennsylvania to construct and test the multivariate spatial model. Four models with and without the multivariate and spatial correlations were tested and compared. The results show that the model that considers both multivariate and spatial correlation has the best fit. Moreover, it was found that the multivariate correlation plays a stronger role than the spatial correlation when modeling crash frequencies in terms of different crash types.

  4. Multivariate Effect Size Estimation: Confidence Interval Construction via Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2010-01-01

    A latent variable modeling method is outlined for constructing a confidence interval (CI) of a popular multivariate effect size measure. The procedure uses the conventional multivariate analysis of variance (MANOVA) setup and is applicable with large samples. The approach provides a population range of plausible values for the proportion of…

  5. Multi-Variable Model-Based Parameter Estimation Model for Antenna Radiation Pattern Prediction

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar D.; Cravey, Robin L.

    2002-01-01

    A new procedure is presented to develop a multi-variable model-based parameter estimation (MBPE) model to predict the far-field intensity of an antenna. By performing the MBPE model development procedure on a single variable at a time, the present method requires the solution of smaller matrices. The utility of the present method is demonstrated by determining the far-field intensity due to a dipole antenna over a frequency range of 100-1000 MHz and an elevation angle range of 0-90 degrees.
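
    MBPE typically fits a rational function (a ratio of low-order polynomials) to a few samples of a smooth response, which can then be evaluated cheaply anywhere in the band. A single-variable sketch (the response function and degrees are invented; the linearized least-squares formulation P(x) - y·Q(x) = 0 with Q's constant term fixed to 1 is a common choice, not necessarily the paper's):

```python
import numpy as np

def fit_rational(x, y, dp, dq):
    """Least-squares fit of y ≈ P(x)/Q(x), with Q's constant term fixed at 1."""
    Vp = np.vander(x, dp + 1, increasing=True)
    Vq = np.vander(x, dq + 1, increasing=True)[:, 1:]   # Q minus its constant 1
    # Linearized system: P(x) - y*(Q(x) - 1) = y
    coef = np.linalg.lstsq(np.hstack([Vp, -y[:, None] * Vq]), y, rcond=None)[0]
    p, q = coef[:dp + 1], np.concatenate([[1.0], coef[dp + 1:]])
    def model(xx):
        xx = np.atleast_1d(np.asarray(xx, dtype=float))
        return (np.vander(xx, dp + 1, increasing=True) @ p) / \
               (np.vander(xx, dq + 1, increasing=True) @ q)
    return model

x = np.linspace(0.1, 1.0, 15)
y = 1.0 / (1.0 + x ** 2)        # hypothetical smooth frequency response
model = fit_rational(x, y, 2, 2)
```

    Doing this one variable at a time, as the abstract describes, keeps each least-squares system small even when the final model spans several variables.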

  6. Multivariate Search of the Standard Model Higgs Boson at LHC

    SciTech Connect

    Mjahed, Mostafa

    2007-01-12

    We present an attempt to identify the SM Higgs boson at LHC in the channel (pp-bar → HX → W+W-X → l+ν l-ν X). We use multivariate processing of data as a tool for better discrimination between signal and background (via Principal Components Analysis, Genetic Algorithms and Neural Networks). Events were produced at LHC energies (MH = 140-200 GeV), using the Lund Monte Carlo generator PYTHIA 6.1. Higgs boson events (pp-bar → HX → W+W-X → l+ν l-ν X) and the most relevant backgrounds are considered.

  7. Multivariate emulation of computer simulators: model selection and diagnostics with application to a humanitarian relief model.

    PubMed

    Overstall, Antony M; Woods, David C

    2016-08-01

    We present a common framework for Bayesian emulation methodologies for multivariate output simulators, or computer models, that employ either parametric linear models or non-parametric Gaussian processes. Novel diagnostics suitable for multivariate covariance separable emulators are developed and techniques to improve the adequacy of an emulator are discussed and implemented. A variety of emulators are compared for a humanitarian relief simulator, modelling aid missions to Sicily after a volcanic eruption and earthquake, and a sensitivity analysis is conducted to determine the sensitivity of the simulator output to changes in the input variables. The results from parametric and non-parametric emulators are compared in terms of prediction accuracy, uncertainty quantification and scientific interpretability.

  8. Modelling world gold prices and USD foreign exchange relationship using multivariate GARCH model

    NASA Astrophysics Data System (ADS)

    Ping, Pung Yean; Ahmad, Maizah Hura Binti

    2014-12-01

World gold price is a popular investment commodity. The series has often been modeled using univariate models. The objective of this paper is to show that there is a co-movement between the gold price and the USD foreign exchange rate. Using the effect of the USD foreign exchange rate on the gold price, a model that can be used to forecast future gold prices is developed. For this purpose, the current paper proposes a multivariate (bivariate) GARCH model. Using daily prices of both series from 01.01.2000 to 05.05.2014, a causal relation between the two series under study is found and a bivariate GARCH model is produced.
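A bivariate GARCH process of the kind used in such studies can be sketched via simulation; the constant-conditional-correlation form and all parameter values below are illustrative assumptions, not the model fitted to the gold/USD data:

```python
import numpy as np

def simulate_ccc_bigarch(n=2000, omega=(0.05, 0.05), alpha=(0.08, 0.08),
                         beta=(0.90, 0.90), rho=0.4, seed=0):
    """Simulate a constant-conditional-correlation bivariate GARCH(1,1).

    Returns an (n, 2) array of returns, e.g. gold-price and exchange-rate
    log-returns in the spirit of the study (parameters are illustrative).
    """
    rng = np.random.default_rng(seed)
    # Start each conditional variance at its unconditional value.
    h = np.array([omega[0] / (1 - alpha[0] - beta[0]),
                  omega[1] / (1 - alpha[1] - beta[1])])
    chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    r = np.empty((n, 2))
    for t in range(n):
        z = chol @ rng.standard_normal(2)   # correlated standard shocks
        r[t] = np.sqrt(h) * z               # returns with conditional variance h
        h = np.asarray(omega) + np.asarray(alpha) * r[t] ** 2 + np.asarray(beta) * h
    return r

returns = simulate_ccc_bigarch()
print(np.corrcoef(returns.T)[0, 1])  # should be near the imposed rho = 0.4
```

The simulated pair shows the volatility clustering and cross-series correlation that motivate the bivariate GARCH specification over two separate univariate fits.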

  9. Multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in heat production, or energy expenditure (EE). Multivariate adaptive regression splines (MARS), is a nonparametric method that estimates complex nonlinear relationships by a seri...
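The hinge functions at the core of MARS, max(0, x − t) and max(0, t − x), can be illustrated with a least-squares fit at a single known knot; this omits the forward/backward knot-selection passes of the full MARS algorithm, and the kinked data below are synthetic:

```python
import numpy as np

def hinge_design(x, knots):
    """Design matrix of MARS-style hinge bases: max(0, x-t) and max(0, t-x)."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0.0, x - t))
        cols.append(np.maximum(0.0, t - x))
    return np.column_stack(cols)

# Illustrative data: a piecewise-linear relationship with a kink at x = 5.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)
y = np.where(x < 5, 2 * x, 10 + 0.5 * (x - 5)) + rng.normal(0, 0.1, 300)

X = hinge_design(x, knots=[5.0])            # knot placed at the true kink
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - y) ** 2))
print(rmse)                                  # close to the noise level 0.1
```

Because the true function lies in the span of the two hinges plus an intercept, the fit recovers the kink exactly up to noise; full MARS would also search over candidate knot locations and interactions.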

  10. Multivariate test power approximations for balanced linear mixed models in studies with missing data.

    PubMed

    Ringham, Brandy M; Kreidler, Sarah M; Muller, Keith E; Glueck, Deborah H

    2016-07-30

    Multilevel and longitudinal studies are frequently subject to missing data. For example, biomarker studies for oral cancer may involve multiple assays for each participant. Assays may fail, resulting in missing data values that can be assumed to be missing completely at random. Catellier and Muller proposed a data analytic technique to account for data missing at random in multilevel and longitudinal studies. They suggested modifying the degrees of freedom for both the Hotelling-Lawley trace F statistic and its null case reference distribution. We propose parallel adjustments to approximate power for this multivariate test in studies with missing data. The power approximations use a modified non-central F statistic, which is a function of (i) the expected number of complete cases, (ii) the expected number of non-missing pairs of responses, or (iii) the trimmed sample size, which is the planned sample size reduced by the anticipated proportion of missing data. The accuracy of the method is assessed by comparing the theoretical results to the Monte Carlo simulated power for the Catellier and Muller multivariate test. Over all experimental conditions, the closest approximation to the empirical power of the Catellier and Muller multivariate test is obtained by adjusting power calculations with the expected number of complete cases. The utility of the method is demonstrated with a multivariate power analysis for a hypothetical oral cancer biomarkers study. We describe how to implement the method using standard, commercially available software products and give example code. Copyright © 2015 John Wiley & Sons, Ltd.
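The trimmed-sample-size idea, reducing the planned sample size by the anticipated missing-data fraction before computing power, can be sketched for a simple noncentral-F calculation; the degrees-of-freedom and noncentrality formulas below are simplified illustrations, not the Hotelling-Lawley trace adjustment of Catellier and Muller:

```python
from scipy.stats import f, ncf

def f_test_power(n, df1, effect, alpha=0.05):
    """Approximate power of an F test whose noncentrality grows with n.

    Simplified sketch: df2 and the noncentrality lambda = n * effect are
    illustrative choices, not the paper's multivariate formulas.
    """
    df2 = n - df1 - 1
    lam = n * effect
    fcrit = f.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(fcrit, df1, df2, lam)

planned_n, missing_frac = 60, 0.2
full_power = f_test_power(planned_n, df1=3, effect=0.25)
trimmed_power = f_test_power(int(planned_n * (1 - missing_frac)), df1=3, effect=0.25)
print(full_power, trimmed_power)  # trimmed power is lower, as expected
```

Planning with the trimmed sample size deliberately understates power, so the study remains adequately powered even after the anticipated data loss.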

  11. Multivariate multiparameter extreme value models and return periods: A copula approach

    NASA Astrophysics Data System (ADS)

    Salvadori, G.; de Michele, C.

    2010-10-01

    Multivariate extreme value models are a fundamental tool in order to assess potentially dangerous events. The target of this paper is two-fold. On the one hand we outline how, exploiting recent theoretical developments in the theory of copulas, new multivariate extreme value distributions can be easily constructed; in particular, we show how a suitable number of parameters can be introduced, a feature not shared by traditional extreme value models. On the other hand, we introduce a proper new definition of multivariate return period and show the differences with (and the advantages over) the definition presently used in literature. An illustration involving flood data is presented and discussed, and a generalization of the well-known multivariate logistic Gumbel model is also given.
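The distinction between joint and marginal return periods can be illustrated with a Gumbel copula (related to the multivariate logistic Gumbel model mentioned above); the thresholds and dependence parameter below are illustrative assumptions:

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel extreme-value copula C(u, v); theta >= 1 controls dependence."""
    return math.exp(-((-math.log(u)) ** theta + (-math.log(v)) ** theta) ** (1 / theta))

def return_periods(u, v, theta, mu=1.0):
    """'OR' and 'AND' joint return periods (mu = mean interarrival time, years)."""
    c = gumbel_copula(u, v, theta)
    t_or = mu / (1 - c)             # at least one variable exceeds its threshold
    t_and = mu / (1 - u - v + c)    # both variables exceed their thresholds
    return t_or, t_and

u = v = 0.99                        # 100-year marginal events
t_or, t_and = return_periods(u, v, theta=2.0)
print(t_or, t_and)                  # t_or < 100 < t_and
```

For two 100-year marginal events, the "OR" event recurs more often than 100 years and the "AND" event less often, which is why a univariate return period can misstate the risk of a compound event such as a flood driven by both peak flow and volume.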

  12. Semiparametric Bayesian inference on skew-normal joint modeling of multivariate longitudinal and survival data.

    PubMed

    Tang, An-Min; Tang, Nian-Sheng

    2015-02-28

    We propose a semiparametric multivariate skew-normal joint model for multivariate longitudinal and multivariate survival data. One main feature of the posited model is that we relax the commonly used normality assumption for random effects and within-subject error by using a centered Dirichlet process prior to specify the random effects distribution and using a multivariate skew-normal distribution to specify the within-subject error distribution and model trajectory functions of longitudinal responses semiparametrically. A Bayesian approach is proposed to simultaneously obtain Bayesian estimates of unknown parameters, random effects and nonparametric functions by combining the Gibbs sampler and the Metropolis-Hastings algorithm. Particularly, a Bayesian local influence approach is developed to assess the effect of minor perturbations to within-subject measurement error and random effects. Several simulation studies and an example are presented to illustrate the proposed methodologies.

  13. Multivariate spatial models of excess crash frequency at area level: case of Costa Rica.

    PubMed

    Aguero-Valverde, Jonathan

    2013-10-01

Recently, areal models of crash frequency have been used in the analysis of various area-wide factors affecting road crashes. On the other hand, disease mapping methods are commonly used in epidemiology to assess the relative risk of the population at different spatial units. A natural next step is to combine these two approaches to estimate the excess crash frequency at the area level as a measure of absolute crash risk. Furthermore, multivariate spatial models of crash severity are explored in order to account for both frequency and severity of crashes and to control for the spatial correlation frequently found in crash data. This paper aims to extend the concept of safety performance functions to areal models of crash frequency. A multivariate spatial model is used for that purpose and compared to its univariate counterpart. A full Bayes hierarchical approach is used to estimate the models of crash frequency at the canton level for Costa Rica. An intrinsic multivariate conditional autoregressive model is used for modeling spatial random effects. The results show that the multivariate spatial model performs better than its univariate counterpart in terms of the penalized goodness-of-fit measure, the deviance information criterion. Additionally, the effects of the spatial smoothing due to the multivariate spatial random effects are evident in the estimation of excess equivalent-property-damage-only crashes.

  14. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  15. Preliminary Multi-Variable Cost Model for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.

  16. Shape adjustment of cable mesh reflector antennas considering modeling uncertainties

    NASA Astrophysics Data System (ADS)

    Du, Jingli; Bao, Hong; Cui, Chuanzhen

    2014-04-01

    Cable mesh antennas are the most important implement to construct large space antennas nowadays. Reflector surface of cable mesh antennas has to be carefully adjusted to achieve required accuracy, which is an effective way to compensate manufacturing and assembly errors or other imperfections. In this paper shape adjustment of cable mesh antennas is addressed. The required displacement of the reflector surface is determined with respect to a modified paraboloid whose axial vertex offset is also considered as a variable. Then the adjustment problem is solved by minimizing the RMS error with respect to the desired paraboloid using minimal norm least squares method. To deal with the modeling uncertainties, the adjustment is achieved by solving a simple worst-case optimization problem instead of directly using the least squares method. A numerical example demonstrates the worst-case method is of good convergence and accuracy, and is robust to perturbations.

  17. Modeling the Pineapple Express phenomenon via Multivariate Extreme Value Theory

    NASA Astrophysics Data System (ADS)

    Weller, G.; Cooley, D. S.

    2011-12-01

The pineapple express (PE) phenomenon is responsible for producing extreme winter precipitation events in the coastal and mountainous regions of the western United States. Because the PE phenomenon is also associated with warm temperatures, the heavy precipitation and associated snowmelt can cause destructive flooding. In order to study impacts, it is important that regional climate models from NARCCAP are able to reproduce extreme precipitation events produced by PE. We define a daily precipitation quantity which captures the spatial extent and intensity of precipitation events produced by the PE phenomenon. We then use statistical extreme value theory to model the tail dependence of this quantity as seen in an observational data set and each of the six NARCCAP regional models driven by NCEP reanalysis. We find that most NCEP-driven NARCCAP models do exhibit tail dependence between daily model output and observations. Furthermore, we find that not all extreme precipitation events are pineapple express events, as identified by Dettinger et al. (2011). The synoptic-scale atmospheric processes that drive extreme precipitation events produced by PE have only recently begun to be examined. Much of the current work has focused on pattern recognition, rather than quantitative analysis. We use daily mean sea-level pressure (MSLP) fields from NCEP to develop a "pineapple express index" for extreme precipitation, which exhibits tail dependence with our observed precipitation quantity for pineapple express events. We build a statistical model that connects daily precipitation output from the WRFG model, daily MSLP fields from NCEP, and daily observed precipitation in the western US. Finally, we use this model to simulate future observed precipitation based on WRFG output driven by the CCSM model, and our pineapple express index derived from future CCSM output. Our aim is to use this model to develop a better understanding of the frequency and intensity of extreme…
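Tail dependence between two series can be checked empirically with the conditional exceedance probability χ(u) = P(Y > u | X > u) at a high quantile u; the simulated series below are illustrative stand-ins, not NARCCAP output:

```python
import numpy as np

def chi_hat(x, y, q=0.95):
    """Empirical tail-dependence measure: P(Y > its q-quantile | X > its q-quantile)."""
    xq, yq = np.quantile(x, q), np.quantile(y, q)
    above_x = x > xq
    return np.mean(y[above_x] > yq)

rng = np.random.default_rng(2)
z = rng.standard_normal(100_000)
x = z + 0.3 * rng.standard_normal(100_000)   # two noisy copies of a shared signal,
y = z + 0.3 * rng.standard_normal(100_000)   # e.g. model output vs. observations
indep = rng.standard_normal(100_000)
print(chi_hat(x, y), chi_hat(x, indep))      # dependent pair gives the larger value
```

For an independent pair, χ̂(0.95) is near 0.05 (the unconditional exceedance rate); values well above that, as for the dependent pair here, indicate that extremes in one series tend to co-occur with extremes in the other.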

  18. Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)

    EPA Science Inventory

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...

  19. A Multivariate Model for Coastal Water Quality Mapping Using Satellite Remote Sensing Images

    PubMed Central

    Su, Yuan-Fong; Liou, Jun-Jih; Hou, Ju-Chen; Hung, Wei-Chun; Hsu, Shu-Mei; Lien, Yi-Ting; Su, Ming-Daw; Cheng, Ke-Sheng; Wang, Yeng-Fung

    2008-01-01

    This study demonstrates the feasibility of coastal water quality mapping using satellite remote sensing images. Water quality sampling campaigns were conducted over a coastal area in northern Taiwan for measurements of three water quality variables including Secchi disk depth, turbidity, and total suspended solids. SPOT satellite images nearly concurrent with the water quality sampling campaigns were also acquired. A spectral reflectance estimation scheme proposed in this study was applied to SPOT multispectral images for estimation of the sea surface reflectance. Two models, univariate and multivariate, for water quality estimation using the sea surface reflectance derived from SPOT images were established. The multivariate model takes into consideration the wavelength-dependent combined effect of individual seawater constituents on the sea surface reflectance and is superior over the univariate model. Finally, quantitative coastal water quality mapping was accomplished by substituting the pixel-specific spectral reflectance into the multivariate water quality estimation model. PMID:27873872

  20. A Multivariate Model for Coastal Water Quality Mapping Using Satellite Remote Sensing Images.

    PubMed

    Su, Yuan-Fong; Liou, Jun-Jih; Hou, Ju-Chen; Hung, Wei-Chun; Hsu, Shu-Mei; Lien, Yi-Ting; Su, Ming-Daw; Cheng, Ke-Sheng; Wang, Yeng-Fung

    2008-10-10

This study demonstrates the feasibility of coastal water quality mapping using satellite remote sensing images. Water quality sampling campaigns were conducted over a coastal area in northern Taiwan for measurements of three water quality variables including Secchi disk depth, turbidity, and total suspended solids. SPOT satellite images nearly concurrent with the water quality sampling campaigns were also acquired. A spectral reflectance estimation scheme proposed in this study was applied to SPOT multispectral images for estimation of the sea surface reflectance. Two models, univariate and multivariate, for water quality estimation using the sea surface reflectance derived from SPOT images were established. The multivariate model takes into consideration the wavelength-dependent combined effect of individual seawater constituents on the sea surface reflectance and is superior over the univariate model. Finally, quantitative coastal water quality mapping was accomplished by substituting the pixel-specific spectral reflectance into the multivariate water quality estimation model.

  1. Modeling a multivariable reactor and on-line model predictive control.

    PubMed

    Yu, D W; Yu, D L

    2005-10-01

A nonlinear first-principles model is developed for a laboratory-scale multivariable chemical reactor rig in this paper, and on-line model predictive control (MPC) is implemented on the rig. The reactor has three variables (temperature, pH, and dissolved oxygen) with nonlinear dynamics and is therefore used as a pilot system for the biochemical industry. A nonlinear discrete-time model is derived for each of the three output variables, and their model parameters are estimated from real data using an adaptive optimization method. The developed model is used in a nonlinear MPC scheme. An accurate multistep-ahead prediction is obtained for MPC, where the extended Kalman filter is used to estimate unknown system states. The on-line control is implemented and a satisfactory tracking performance is achieved. The MPC is compared with three decentralized PID controllers and the advantage of the nonlinear MPC over the PID is clearly shown.

  2. A Multivariate Model of Stakeholder Preference for Lethal Cat Management

    PubMed Central

    Wald, Dara M.; Jacobson, Susan K.

    2014-01-01

    Identifying stakeholder beliefs and attitudes is critical for resolving management conflicts. Debate over outdoor cat management is often described as a conflict between two groups, environmental advocates and animal welfare advocates, but little is known about the variables predicting differences among these critical stakeholder groups. We administered a mail survey to randomly selected stakeholders representing both of these groups (n = 1,596) in Florida, where contention over the management of outdoor cats has been widespread. We used a structural equation model to evaluate stakeholder intention to support non-lethal management. The cognitive hierarchy model predicted that values influenced beliefs, which predicted general and specific attitudes, which in turn, influenced behavioral intentions. We posited that specific attitudes would mediate the effect of general attitudes, beliefs, and values on management support. Model fit statistics suggested that the final model fit the data well (CFI = 0.94, RMSEA = 0.062). The final model explained 74% of the variance in management support, and positive attitudes toward lethal management (humaneness) had the largest direct effect on management support. Specific attitudes toward lethal management and general attitudes toward outdoor cats mediated the relationship between positive (p<0.05) and negative cat-related impact beliefs (p<0.05) and support for management. These results supported the specificity hypothesis and the use of the cognitive hierarchy to assess stakeholder intention to support non-lethal cat management. Our findings suggest that stakeholders can simultaneously perceive both positive and negative beliefs about outdoor cats, which influence attitudes toward and support for non-lethal management. PMID:24736744

  3. Multivariate crash modeling for motor vehicle and non-motorized modes at the macroscopic level.

    PubMed

    Lee, Jaeyoung; Abdel-Aty, Mohamed; Jiang, Ximiao

    2015-05-01

    Macroscopic traffic crash analyses have been conducted to incorporate traffic safety into long-term transportation planning. This study aims at developing a multivariate Poisson lognormal conditional autoregressive model at the macroscopic level for crashes by different transportation modes such as motor vehicle, bicycle, and pedestrian crashes. Many previous studies have shown the presence of common unobserved factors across different crash types. Thus, it was expected that adopting multivariate model structure would show a better modeling performance since it can capture shared unobserved features across various types. The multivariate model and univariate model were estimated based on traffic analysis zones (TAZs) and compared. It was found that the multivariate model significantly outperforms the univariate model. It is expected that the findings from this study can contribute to more reliable traffic crash modeling, especially when focusing on different modes. Also, variables that are found significant for each mode can be used to guide traffic safety policy decision makers to allocate resources more efficiently for the zones with higher risk of a particular transportation mode.

  4. IRT-ZIP Modeling for Multivariate Zero-Inflated Count Data

    ERIC Educational Resources Information Center

    Wang, Lijuan

    2010-01-01

    This study introduces an item response theory-zero-inflated Poisson (IRT-ZIP) model to investigate psychometric properties of multiple items and predict individuals' latent trait scores for multivariate zero-inflated count data. In the model, two link functions are used to capture two processes of the zero-inflated count data. Item parameters are…

  5. MULTIVARIATE RECEPTOR MODELS-CURRENT PRACTICE AND FUTURE TRENDS. (R826238)

    EPA Science Inventory

    Multivariate receptor models have been applied to the analysis of air quality data for sometime. However, solving the general mixture problem is important in several other fields. This paper looks at the panoply of these models with a view of identifying common challenges and ...

  6. An Examination of the Domain of Multivariable Functions Using the Pirie-Kieren Model

    ERIC Educational Resources Information Center

    Sengul, Sare; Yildiz, Sevda Goktepe

    2016-01-01

    The aim of this study is to employ the Pirie-Kieren model so as to examine the understandings relating to the domain of multivariable functions held by primary school mathematics preservice teachers. The data obtained was categorized according to Pirie-Kieren model and demonstrated visually in tables and bar charts. The study group consisted of…

  7. Mixture of normal distributions in multivariate null intercept measurement error model.

    PubMed

    Aoki, Reiko; Pinto Júnior, Dorival Leão; Achcar, Jorge Alberto; Bolfarine, Heleno

    2006-01-01

    In this paper we propose the use of a multivariate null intercept measurement error model, where the true unobserved value of the covariate follows a mixture of two normal distributions. The proposed model is applied to a dental clinical trial presented in Hadgu and Koch (1999). A Bayesian approach is considered and a Gibbs Sampler is used to perform the computations.

  8. A multivariate random-parameters Tobit model for analyzing highway crash rates by injury severity.

    PubMed

    Zeng, Qiang; Wen, Huiying; Huang, Helai; Pei, Xin; Wong, S C

    2017-02-01

In this study, a multivariate random-parameters Tobit model is proposed for the analysis of crash rates by injury severity. In the model, both correlation across injury severity and unobserved heterogeneity across road-segment observations are accommodated. The proposed model is compared with a multivariate (fixed-parameters) Tobit model in the Bayesian context, by using a crash dataset collected from the Traffic Information System of Hong Kong. The dataset contains crash, road geometric and traffic information on 224 directional road segments for a five-year period (2002-2006). The multivariate random-parameters Tobit model provides a much better fit than its fixed-parameters counterpart, according to the deviance information criterion and Bayesian R², while it reveals a higher correlation between crash rates at different severity levels. The parameter estimates show that a few risk factors (bus stop, lane changing opportunity and lane width) have heterogeneous effects on crash-injury-severity rates. For the other factors, the variances of their random parameters are insignificant at the 95% credibility level, so the random parameters are set to be fixed across observations. Nevertheless, most of these fixed coefficients are estimated with higher precision (i.e., smaller variances) in the random-parameters model. Thus, the random-parameters Tobit model, which provides a more comprehensive understanding of the factors' effects on crash rates by injury severity, is superior to the multivariate Tobit model and should be considered a good alternative for traffic safety analysis.
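The censoring mechanism that motivates a Tobit specification can be sketched with a univariate, fixed-parameters Tobit fitted by maximum likelihood; this is a deliberately simplified stand-in for the paper's multivariate random-parameters Bayesian model, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, x, y):
    """Negative log-likelihood of a Tobit model left-censored at zero."""
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * x
    cens = y <= 0
    ll = np.where(cens,
                  norm.logcdf(-mu / sigma),    # censored observations
                  norm.logpdf(y, mu, sigma))   # uncensored observations
    return -np.sum(ll)

# Synthetic crash-rate-style data: a latent linear response censored at zero.
rng = np.random.default_rng(3)
x = rng.uniform(0, 2, 500)
y_star = -0.5 + 1.2 * x + rng.normal(0, 0.5, 500)
y = np.maximum(y_star, 0.0)

fit = minimize(tobit_negloglik, x0=[0.0, 0.0, 0.0], args=(x, y), method="Nelder-Mead")
print(fit.x[:2])  # estimates near the true (-0.5, 1.2)
```

Treating the censored zeros as ordinary observations would bias the slope toward zero; the Tobit likelihood handles the point mass at zero explicitly, which is the property the multivariate version extends across injury-severity levels.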

  9. Multivariate modeling of settling depth of apple fruit (Red Delicious variety) in water.

    PubMed

    Kheiralipour, Kamran; Marzbani, Farshid

    2016-03-01

The settling depth of apple fruit was determined using a water column and a digital camera. The depth was experimentally modeled with multivariate regression using a program coded in MATLAB. The best models were based on density, dropping height, and volume/mass, with a coefficient of determination and mean square error of 0.90 and 4.08, respectively.

  10. Computer-Aided Decisions in Human Services: Expert Systems and Multivariate Models.

    ERIC Educational Resources Information Center

    Sicoly, Fiore

    1989-01-01

    This comparison of two approaches to the development of computerized supports for decision making--expert systems and multivariate models--focuses on computerized systems that assist professionals with tasks related to diagnosis or classification in human services. Validation of both expert systems and statistical models is emphasized. (39…

  11. A Multivariate Model for the Meta-Analysis of Study Level Survival Data at Multiple Times

    ERIC Educational Resources Information Center

    Jackson, Dan; Rollins, Katie; Coughlin, Patrick

    2014-01-01

    Motivated by our meta-analytic dataset involving survival rates after treatment for critical leg ischemia, we develop and apply a new multivariate model for the meta-analysis of study level survival data at multiple times. Our data set involves 50 studies that provide mortality rates at up to seven time points, which we model simultaneously, and…

  12. Multivariate Models for Prediction of Human Skin Sensitization ...

    EPA Pesticide Factsheets

One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays (the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay), six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine…

  13. Multivariable parametric cost model for space and ground telescopes

    NASA Astrophysics Data System (ADS)

    Stahl, H. Philip; Henrichs, Todd

    2016-09-01

Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes: OTA Cost ∝ D^(1.75 ± 0.05) λ^(-0.5 ± 0.25) T^(-0.25) e^(-0.04 Y). Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advances and process improvements); and, for space telescopes, cost associated with wavelength performance is balanced by cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e., multiple-aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost savings does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than for a monolithic mirror).
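A scaling law of this form, cost ∝ D^1.75 · λ^-0.5 · T^-0.25 · e^(-0.04 Y), can be turned into a relative-cost calculator; the reference values (d0, lam0, temp0, year0) below are arbitrary normalizations introduced for illustration, and only central exponent values are used:

```python
import math

def relative_ota_cost(d, lam, temp, year,
                      d0=1.0, lam0=1.0, temp0=300.0, year0=2000):
    """Relative OTA cost under a power law of the form
    cost ∝ D^1.75 · λ^-0.5 · T^-0.25 · e^(-0.04 Y)   (central exponent values).

    d: aperture diameter (m), lam: wavelength, temp: operating temperature (K),
    year: year of development. Reference values are illustrative normalizations.
    """
    return ((d / d0) ** 1.75) * ((lam / lam0) ** -0.5) \
        * ((temp / temp0) ** -0.25) * math.exp(-0.04 * (year - year0))

# Doubling the aperture at fixed wavelength, temperature and year:
print(relative_ota_cost(2.0, 1.0, 300.0, 2000))  # ≈ 2^1.75 ≈ 3.36
```

The diameter exponent below 2 is what makes cost per square meter of collecting aperture fall as telescopes grow, and the e^(-0.04 Y) term encodes the roughly 50% cost reduction every 20 years noted in the abstract.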

  14. Estimation and model selection of semiparametric multivariate survival functions under general censorship.

    PubMed

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2010-07-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.

  15. An Investigation of Multivariate Adaptive Regression Splines for Modeling and Analysis of Univariate and Semi-Multivariate Time Series Systems

    DTIC Science & Technology

    1991-09-01

GRAFSTAT from IBM Research; I am grateful to Dr. Peter Welch for supplying GRAFSTAT. To P.A.W. Lewis: thank you for your support, confidence and… "Multivariate Adaptive Regression Splines", Annals of Statistics, v. 19, no. 2, pp. 1-142, 1991. Gelb, A., Applied Optimal Estimation, M.I.T. Press, Cambridge

  16. Modelling household finances: A Bayesian approach to a multivariate two-part model.

    PubMed

    Brown, Sarah; Ghosh, Pulak; Su, Li; Taylor, Karl

    2015-09-01

    We contribute to the empirical literature on household finances by introducing a Bayesian multivariate two-part model, which has been developed to further our understanding of household finances. Our flexible approach allows for the potential interdependence between the holding of assets and liabilities at the household level and also encompasses a two-part process to allow for differences in the influences on asset or liability holding and on the respective amounts held. Furthermore, the framework is dynamic in order to allow for persistence in household finances over time. Our findings endorse the joint modelling approach and provide evidence supporting the importance of dynamics. In addition, we find that certain independent variables exert different influences on the binary and continuous parts of the model thereby highlighting the flexibility of our framework and revealing a detailed picture of the nature of household finances.

  17. Modelling household finances: A Bayesian approach to a multivariate two-part model

    PubMed Central

    Brown, Sarah; Ghosh, Pulak; Su, Li; Taylor, Karl

    2016-01-01

    We contribute to the empirical literature on household finances by introducing a Bayesian multivariate two-part model, which has been developed to further our understanding of household finances. Our flexible approach allows for the potential interdependence between the holding of assets and liabilities at the household level and also encompasses a two-part process to allow for differences in the influences on asset or liability holding and on the respective amounts held. Furthermore, the framework is dynamic in order to allow for persistence in household finances over time. Our findings endorse the joint modelling approach and provide evidence supporting the importance of dynamics. In addition, we find that certain independent variables exert different influences on the binary and continuous parts of the model thereby highlighting the flexibility of our framework and revealing a detailed picture of the nature of household finances. PMID:27212801

  18. Applications of multivariate modeling to neuroimaging group analysis: A comprehensive alternative to univariate general linear model

    PubMed Central

    Chen, Gang; Adleman, Nancy E.; Saad, Ziad S.; Leibenluft, Ellen; Cox, Robert W.

    2014-01-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance–covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse–Geisser and Huynh–Feldt) with MVT-WS. PMID:24954281
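
    The sphericity corrections referenced in this abstract can be illustrated numerically. The sketch below is not code from the MVM package; `helmert` and `gg_epsilon` are hypothetical helper names. It computes the Greenhouse–Geisser epsilon from a within-subject covariance matrix using normalized Helmert contrasts: a spherical covariance yields epsilon = 1 (no correction needed), and violations pull epsilon toward its lower bound 1/(k-1).

```python
import numpy as np

def helmert(k):
    """Normalized (k-1) x k Helmert contrast matrix; rows are orthonormal."""
    C = np.zeros((k - 1, k))
    for i in range(1, k):
        C[i - 1, :i] = 1.0
        C[i - 1, i] = -float(i)
        C[i - 1] /= np.sqrt(i * (i + 1))
    return C

def gg_epsilon(S):
    """Greenhouse-Geisser epsilon for a k x k within-subject covariance S."""
    k = S.shape[0]
    M = helmert(k) @ S @ helmert(k).T
    return np.trace(M) ** 2 / ((k - 1) * np.trace(M @ M))

# Spherical covariance: epsilon = 1; strong non-sphericity: epsilon < 1.
eps_spherical = gg_epsilon(np.eye(4))
eps_violated = gg_epsilon(np.diag([1.0, 1.0, 1.0, 20.0]))
```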

  19. Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models.

    PubMed

    Nortey, Ezekiel Nn; Ngoh, Delali D; Doku-Amponsah, Kwabena; Ofori-Boateng, Kenneth

    2015-01-01

    This paper investigated the volatility of, and conditional relationships among, inflation rates, exchange rates and interest rates in Ghana, constructing multivariate GARCH DCC and BEKK models from data covering January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi to the US dollar from 1990 to 2013 was 7,010.2% and the yearly weighted depreciation of the cedi to the US dollar for the period was 20.4%. There was evidence that a stable inflation rate does not imply that exchange rates and interest rates will also be stable; rather, when the cedi performs well on the forex market, inflation rates and interest rates react positively and become stable in the long run. The BEKK model is robust for modelling and forecasting the volatility of inflation rates, exchange rates and interest rates, while the DCC model is robust for modelling the conditional and unconditional correlations among them. The BEKK model, which forecasted high exchange rate volatility for 2014, is well suited to modelling exchange rates in Ghana, and the mean equation of the DCC model is also robust for forecasting inflation rates in Ghana.
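
    To make the DCC mechanics concrete, here is a minimal sketch of the conditional-correlation recursion on simulated standardized residuals. The parameters a and b are assumed values for illustration; in practice they are estimated by quasi-maximum likelihood, and the residuals would come from univariate GARCH fits to each series.

```python
import numpy as np

rng = np.random.default_rng(42)
eps = rng.standard_normal((500, 2))   # standardized residuals, as if from univariate GARCH fits
S = np.corrcoef(eps.T)                # unconditional correlation (the DCC target matrix)
a, b = 0.05, 0.90                     # assumed DCC parameters (a + b < 1 for stationarity)

Q = S.copy()
R_last = None
for e in eps:
    d = np.sqrt(np.diag(Q))
    R_last = Q / np.outer(d, d)       # conditional correlation matrix R_t
    # DCC update: Q_t = (1 - a - b) S + a e_{t-1} e_{t-1}' + b Q_{t-1}
    Q = (1 - a - b) * S + a * np.outer(e, e) + b * Q
```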

  20. Modeling of turbulent supersonic H2-air combustion with a multivariate beta PDF

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Hassan, H. A.

    1993-01-01

    Recent calculations of turbulent supersonic reacting shear flows using an assumed multivariate beta PDF (probability density function) resulted in reduced production rates and a delay in the onset of combustion. This result is not consistent with available measurements. The present research explores two possible reasons for this behavior: use of PDFs that do not yield Favre averaged quantities, and the gradient diffusion assumption. A new multivariate beta PDF involving species densities is introduced which makes it possible to compute Favre averaged mass fractions. However, using this PDF did not improve comparisons with experiment. A countergradient diffusion model is then introduced. Preliminary calculations suggest this to be the cause of the discrepancy.
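
    The multivariate beta distribution used in this line of work is the Dirichlet, whose samples are non-negative fractions summing to one, which is what makes it a natural assumed PDF for species mass fractions. A quick sanity-check sketch (the shape parameters are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([4.0, 2.0, 2.0])        # hypothetical shape parameters for three species
Y = rng.dirichlet(alpha, size=200_000)   # each row: mass fractions that sum to 1
sample_mean = Y.mean(axis=0)             # converges to alpha / alpha.sum()
```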

  1. Multivariate Bayesian modeling of known and unknown causes of events--an application to biosurveillance.

    PubMed

    Shen, Yanna; Cooper, Gregory F

    2012-09-01

    This paper investigates Bayesian modeling of known and unknown causes of events in the context of disease-outbreak detection. We introduce a multivariate Bayesian approach that models multiple evidential features of every person in the population. This approach models and detects (1) known diseases (e.g., influenza and anthrax) by using informative prior probabilities and (2) unknown diseases (e.g., a new, highly contagious respiratory virus that has never been seen before) by using relatively non-informative prior probabilities. We report the results of simulation experiments which support that this modeling method can improve the detection of new disease outbreaks in a population. A contribution of this paper is that it introduces a multivariate Bayesian approach for jointly modeling both known and unknown causes of events. Such modeling has general applicability in domains where the space of known causes is incomplete.

  2. A Multivariate Model of Parent-Adolescent Relationship Variables in Early Adolescence

    ERIC Educational Resources Information Center

    McKinney, Cliff; Renk, Kimberly

    2011-01-01

    Given the importance of predicting outcomes for early adolescents, this study examines a multivariate model of parent-adolescent relationship variables, including parenting, family environment, and conflict. Participants, who completed measures assessing these variables, included 710 culturally diverse 11-14-year-olds who were attending a middle…

  3. Modeling Associations among Multivariate Longitudinal Categorical Variables in Survey Data: A Semiparametric Bayesian Approach

    ERIC Educational Resources Information Center

    Tchumtchoua, Sylvie; Dey, Dipak K.

    2012-01-01

    This paper proposes a semiparametric Bayesian framework for the analysis of associations among multivariate longitudinal categorical variables in high-dimensional data settings. This type of data is frequent, especially in the social and behavioral sciences. A semiparametric hierarchical factor analysis model is developed in which the…

  4. Web-Based Tools for Modelling and Analysis of Multivariate Data: California Ozone Pollution Activity

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Christou, Nicolas

    2011-01-01

    This article presents a hands-on web-based activity motivated by the relation between human health and ozone pollution in California. This case study is based on multivariate data collected monthly at 20 locations in California between 1980 and 2006. Several strategies and tools for data interrogation and exploratory data analysis, model fitting…

  5. A Multivariate Multilevel Approach to the Modeling of Accuracy and Speed of Test Takers

    ERIC Educational Resources Information Center

    Klein Entink, R. H.; Fox, J. P.; van der Linden, W. J.

    2009-01-01

    Response times on test items are easily collected in modern computerized testing. When collecting both (binary) responses and (continuous) response times on test items, it is possible to measure the accuracy and speed of test takers. To study the relationships between these two constructs, the model is extended with a multivariate multilevel…

  6. Tracking Problem Solving by Multivariate Pattern Analysis and Hidden Markov Model Algorithms

    ERIC Educational Resources Information Center

    Anderson, John R.

    2012-01-01

    Multivariate pattern analysis can be combined with Hidden Markov Model algorithms to track the second-by-second thinking as people solve complex problems. Two applications of this methodology are illustrated with a data set taken from children as they interacted with an intelligent tutoring system for algebra. The first "mind reading" application…

  7. Analyzing Multiple Multivariate Time Series Data Using Multilevel Dynamic Factor Models.

    PubMed

    Song, Hairong; Zhang, Zhiyong

    2014-01-01

    Multivariate time series data offer researchers opportunities to study dynamics of various systems in social and behavioral sciences. Dynamic factor model (DFM), as an idiographic approach for studying intraindividual variability and dynamics, has typically been applied to time series data obtained from a single unit. When multivariate time series data are collected from multiple units, how to synchronize dynamical information becomes a salient issue. To address this issue, the current study presented a multilevel dynamic factor model (MDFM) that analyzes multiple multivariate time series in multilevel SEM frameworks. MDFM not only disentangles within- and between-person variability but also models dynamics of the intraindividual processes. To illustrate the uses of MDFMs, we applied lag0, lag1, and lag2 MDFMs to empirical data on affect collected from 205 dating couples who had at least 50 consecutive days of observations. We also considered a model extension where the dynamical coefficients were allowed to be randomly varying in the population. The empirical analysis yielded interesting findings regarding affect regulation and coregulation within couples, demonstrating promising uses of MDFMs in analyzing multiple multivariate time series. In the end, we discussed a number of methodological issues in the applications of MDFMs and pointed out possible directions for future research.

  8. A General Multivariate Latent Growth Model with Applications to Student Achievement

    ERIC Educational Resources Information Center

    Bianconcini, Silvia; Cagnone, Silvia

    2012-01-01

    The evaluation of the formative process in the University system has been assuming an ever increasing importance in the European countries. Within this context, the analysis of student performance and capabilities plays a fundamental role. In this work, the authors propose a multivariate latent growth model for studying the performances of a…

  9. Rotation in the Dynamic Factor Modeling of Multivariate Stationary Time Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    2001-01-01

    Proposes a special rotation procedure for the exploratory dynamic factor model for stationary multivariate time series. The rotation procedure applies separately to each univariate component series of a q-variate latent factor series and transforms such a component, initially represented as white noise, into a univariate moving-average.…

  10. Multivariate-t nonlinear mixed models with application to censored multi-outcome AIDS studies.

    PubMed

    Lin, Tsung-I; Wang, Wan-Lun

    2017-03-20

    In multivariate longitudinal HIV/AIDS studies, multi-outcome repeated measures on each patient over time may contain outliers, and the viral loads are often subject to an upper or lower limit of detection depending on the quantification assays. In this article, we consider an extension of the multivariate nonlinear mixed-effects model by adopting a joint multivariate-t distribution for random effects and within-subject errors and taking the censoring information of multiple responses into account. The proposed model is called the multivariate-t nonlinear mixed-effects model with censored responses (MtNLMMC), allowing for analyzing multi-outcome longitudinal data exhibiting nonlinear growth patterns with censorship and fat-tailed behavior. Utilizing the Taylor-series linearization method, a pseudo-data version of the expectation conditional maximization either (ECME) algorithm is developed for iteratively carrying out maximum likelihood estimation. We illustrate our techniques with two data examples from HIV/AIDS studies. Experimental results signify that the MtNLMMC performs favorably compared to its Gaussian analogue and some existing approaches.

  11. Multivariable Model for Time to First Treatment in Patients With Chronic Lymphocytic Leukemia

    PubMed Central

    Wierda, William G.; O'Brien, Susan; Wang, Xuemei; Faderl, Stefan; Ferrajoli, Alessandra; Do, Kim-Anh; Garcia-Manero, Guillermo; Cortes, Jorge; Thomas, Deborah; Koller, Charles A.; Burger, Jan A.; Lerner, Susan; Schlette, Ellen; Abruzzo, Lynne; Kantarjian, Hagop M.; Keating, Michael J.

    2011-01-01

    Purpose The clinical course for patients with chronic lymphocytic leukemia (CLL) is diverse; some patients have indolent disease, never needing treatment, whereas others have aggressive disease requiring early treatment. We continue to use criteria for active disease to initiate therapy. Multivariable analysis was performed to identify prognostic factors independently associated with time to first treatment for patients with CLL. Patients and Methods Traditional laboratory, clinical prognostic, and newer prognostic factors such as fluorescent in situ hybridization (FISH), IGHV mutation status, and ZAP-70 expression evaluated at first patient visit to MD Anderson Cancer Center were correlated by multivariable analysis with time to first treatment. This multivariable model was used to develop a nomogram—a weighted tool to calculate 2- and 4-year probability of treatment and estimate median time to first treatment. Results There were 930 previously untreated patients who had traditional and new prognostic factors evaluated; they did not have active CLL requiring initiation of treatment within 3 months of first visit and were observed for time to first treatment. The following were independently associated with shorter time to first treatment: three involved lymph node sites, increased size of cervical lymph nodes, presence of 17p deletion or 11q deletion by FISH, increased serum lactate dehydrogenase, and unmutated IGHV mutation status. Conclusion We developed a multivariable model that incorporates traditional and newer prognostic factors to identify patients at high risk for progression to treatment. This model may be useful to identify patients for early interventional trials. PMID:21969505

  12. Adjustment in mothers of children with Asperger syndrome: an application of the double ABCX model of family adjustment.

    PubMed

    Pakenham, Kenneth I; Samios, Christina; Sofronoff, Kate

    2005-05-01

    The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between 10 and 12 years. Results of correlations showed that each of the model components was related to one or more domains of maternal adjustment in the direction predicted, with the exception of problem-focused coping. Hierarchical regression analyses demonstrated that, after controlling for the effects of relevant demographics, stressor severity, pile-up of demands and coping were related to adjustment. Findings indicate the utility of the double ABCX model in guiding research into parental adjustment when caring for a child with Asperger syndrome. Limitations of the study and clinical implications are discussed.

  13. Applying the multivariate time-rescaling theorem to neural population models

    PubMed Central

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-01-01

    Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem, and provide a practical step-by-step procedure for applying it toward testing the sufficiency of neural population models. Using several simple analytically tractable models and also more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
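
    For the univariate (single-neuron) case that the population test extends, the time-rescaling procedure can be sketched in a few lines; the constant intensity below is an assumed toy model rather than anything fit to data.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
lam = 5.0                                    # assumed known intensity (spikes/s)
isi = rng.exponential(1.0 / lam, size=1000)  # inter-spike intervals of a Poisson neuron
tau = lam * isi                              # rescaled intervals: integral of lambda over each ISI
u = 1.0 - np.exp(-tau)                       # Uniform(0, 1) if the model is correct
stat, p = kstest(u, "uniform")               # Kolmogorov-Smirnov goodness-of-fit test
```

Under a correct model the rescaled variables are uniform, so a small KS statistic is expected; the multivariate extension applies this logic jointly across neurons.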

  14. Predicting coliform concentrations in upland impoundments: design and calibration of a multivariate model.

    PubMed Central

    Kay, D; McDonald, A

    1983-01-01

    This paper reports on the calibration and use of a multiple regression model designed to predict concentrations of Escherichia coli and total coliforms in two upland British impoundments. The multivariate approach has improved predictive capability over previous univariate linear models because it includes predictor variables for the timing and magnitude of hydrological input to the reservoirs and physicochemical parameters of water quality. The significance of these results for catchment management research is considered. PMID:6639016

  15. Efficient multivariate linear mixed model algorithms for genome-wide association studies.

    PubMed

    Zhou, Xiang; Stephens, Matthew

    2014-04-01

    Multivariate linear mixed models (mvLMMs) are powerful tools for testing associations between single-nucleotide polymorphisms and multiple correlated phenotypes while controlling for population stratification in genome-wide association studies. We present efficient algorithms in the genome-wide efficient mixed model association (GEMMA) software for fitting mvLMMs and computing likelihood ratio tests. These algorithms offer improved computation speed, power and P-value calibration over existing methods, and can deal with more than two phenotypes.

  16. Multivariate Calibration Models for Sorghum Composition using Near-Infrared Spectroscopy

    SciTech Connect

    Wolfrum, E.; Payne, C.; Stefaniak, T.; Rooney, W.; Dighe, N.; Bean, B.; Dahlberg, J.

    2013-03-01

    NREL developed calibration models based on near-infrared (NIR) spectroscopy coupled with multivariate statistics to predict compositional properties relevant to cellulosic biofuels production for a variety of sorghum cultivars. A robust calibration population was developed in an iterative fashion. The quality of models developed using the same sample geometry on two different types of NIR spectrometers and two different sample geometries on the same spectrometer did not vary greatly.

  17. A Joint Modeling Approach for Right Censored High Dimensional Multivariate Longitudinal Data

    PubMed Central

    Jaffa, Miran A.; Gebregziabher, Mulugeta; Jaffa, Ayad A.

    2015-01-01

    Analysis of multivariate longitudinal data becomes complicated when the outcomes are of high dimension and informative right censoring is prevailing. Here, we propose a likelihood-based approach for high dimensional outcomes wherein we jointly model the censoring process along with the slopes of the multivariate outcomes in the same likelihood function. We utilized a pseudo-likelihood function to generate parameter estimates for the population slopes and empirical Bayes estimates for the individual slopes. The proposed approach was applied to jointly model longitudinal measures of blood urea nitrogen, plasma creatinine, and estimated glomerular filtration rate, which are key markers of kidney function, in a cohort of renal transplant patients followed from kidney transplant to kidney failure. Feasibility of the proposed joint model for high dimensional multivariate outcomes was successfully demonstrated and its performance was compared to that of a pairwise bivariate model. Our simulation study results suggested that there was a significant reduction in bias and mean squared errors associated with the joint model compared to the pairwise bivariate model. PMID:25688330

  18. What works in offender profiling? A comparison of typological, thematic, and multivariate models.

    PubMed

    Goodwill, Alasdair M; Alison, Laurence J; Beech, Anthony R

    2009-01-01

    Utilizing a sample of 85 stranger rapists, three models (Hazelwood's (1987) Power and Anger FBI model, the Behavioral Thematic evaluation of Canter, Bennell, Alison, and Reddy (2003), and the Massachusetts Treatment Center: Rape classification system revision 3 (MTC:R3, Knight & Prentky, 1990)) were contrasted with a multivariate regression approach to assess their ability to predict an offender's previous convictions from crime scene information. With respect to the three aforementioned models, logistic regression and AUC analysis indicated that the Power and Anger FBI model was the most effective, followed by the MTC:R3, and then the Behavioral Thematic evaluation. However, predictive analyses based on a multivariate approach using a mixture of crime scene behaviors, as opposed to grouping behaviors into themes or types as in the three models, far exceeded the predictive ability of the three models under AUC analysis. The results suggest that emphasis should be placed on further exploration of the predictive validity of each of the individual behaviors that comprise existing thematic, typological, and multivariate classification systems, especially those that are subject to inter-situational variation.

  19. A multivariate spatial mixture model for areal data: examining regional differences in standardized test scores

    PubMed Central

    Neelon, Brian; Gelfand, Alan E.; Miranda, Marie Lynn

    2013-01-01

    Researchers in the health and social sciences often wish to examine joint spatial patterns for two or more related outcomes. Examples include infant birth weight and gestational length, psychosocial and behavioral indices, and educational test scores from different cognitive domains. We propose a multivariate spatial mixture model for the joint analysis of continuous individual-level outcomes that are referenced to areal units. The responses are modeled as a finite mixture of multivariate normals, which accommodates a wide range of marginal response distributions and allows investigators to examine covariate effects within subpopulations of interest. The model has a hierarchical structure built at the individual level (i.e., individuals are nested within areal units), and thus incorporates both individual- and areal-level predictors as well as spatial random effects for each mixture component. Conditional autoregressive (CAR) priors on the random effects provide spatial smoothing and allow the shape of the multivariate distribution to vary flexibly across geographic regions. We adopt a Bayesian modeling approach and develop an efficient Markov chain Monte Carlo model fitting algorithm that relies primarily on closed-form full conditionals. We use the model to explore geographic patterns in end-of-grade math and reading test scores among school-age children in North Carolina. PMID:26401059

  20. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Since membership functions act as interpolation kernels, the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. Results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
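
    As a toy illustration of membership functions acting as interpolation kernels (a conceptual sketch, not the FLHI implementation), triangular memberships on a one-dimensional grid reproduce linear interpolation between knots:

```python
import numpy as np

def tri_memberships(x, knots):
    """Triangular membership degrees at x; at most two are nonzero and they sum to 1."""
    mu = np.zeros(len(knots))
    for j in range(len(knots) - 1):
        lo, hi = knots[j], knots[j + 1]
        if lo <= x <= hi:
            t = (x - lo) / (hi - lo)
            mu[j], mu[j + 1] = 1.0 - t, t
            break
    return mu

knots = np.array([0.0, 1.0, 2.0])
outputs = np.array([0.0, 10.0, 4.0])  # hypothetical process outputs at the knots

def interp(x):
    # Weighted sum of knot outputs, with memberships as interpolation weights
    return tri_memberships(x, knots) @ outputs
```

Replacing the triangular kernels with cubic or Lanczos kernels changes the interpolation characteristics, which is the flexibility described in the abstract.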

  1. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    PubMed

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso does not possess the oracle property, whereby an estimator asymptotically performs as though the true underlying model were given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess oracle properties; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which takes the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso to deal with multicollinearity in nonlinear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
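
    The weighting idea can be sketched on a toy linear-regression problem with scikit-learn (a hedged illustration of adaptive-Lasso mechanics in general, not the authors' nonlinear mixed-effects implementation; the AALasso-style weight se/|coef| is applied here in a plain linear model):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p = 200, 5
X = rng.standard_normal((n, p))
y = 3.0 * X[:, 0] + rng.normal(0.0, 0.5, n)   # only the first covariate matters

# Initial ML (here: OLS) fit gives coefficients and standard errors
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - p)
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))

# AALasso-style weights: ratio of standard error to coefficient magnitude
w = se / np.abs(beta)
model = Lasso(alpha=0.1).fit(X / w, y)        # rescale columns, then run ordinary Lasso
beta_adaptive = model.coef_ / w               # map coefficients back to the original scale
```

Covariates with small, noisy initial estimates get large weights and are shrunk hard, while well-supported covariates are barely penalized.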

  2. Comparative evaluation of spectroscopic models using different multivariate statistical tools in a multicancer scenario

    NASA Astrophysics Data System (ADS)

    Ghanate, A. D.; Kothiwale, S.; Singh, S. P.; Bertrand, Dominique; Krishna, C. Murali

    2011-02-01

    Cancer is now recognized as one of the major causes of morbidity and mortality. Histopathological diagnosis, the gold standard, is shown to be subjective, time consuming, prone to interobserver disagreement, and often fails to predict prognosis. Optical spectroscopic methods are being contemplated as adjuncts or alternatives to conventional cancer diagnostics. The most important aspect of these approaches is their objectivity, and multivariate statistical tools play a major role in realizing it. However, rigorous evaluation of the robustness of spectral models is a prerequisite. The utility of Raman spectroscopy in the diagnosis of cancers has been well established. Until now, the specificity and applicability of spectral models have been evaluated for specific cancer types. In this study, we have evaluated the utility of spectroscopic models representing normal and malignant tissues of the breast, cervix, colon, larynx, and oral cavity in a broader perspective, using different multivariate tests. The limit test, which was used in our earlier study, gave high sensitivity but suffered from poor specificity. The performance of other methods, such as factorial discriminant analysis and partial least squares discriminant analysis, is on par with that of more complex nonlinear methods such as decision trees, but they provide very little information about the classification model. This comparative study thus demonstrates not just the efficacy of Raman spectroscopic models but also the applicability and limitations of different multivariate tools for discrimination under complex conditions such as the multicancer scenario.

  3. Multivariate Analysis and Modeling of Sediment Pollution Using Neural Network Models and Geostatistics

    NASA Astrophysics Data System (ADS)

    Golay, Jean; Kanevski, Mikhaïl

    2013-04-01

    The present research deals with the exploration and modeling of a complex dataset of 200 measurement points of sediment pollution by heavy metals in Lake Geneva. The fundamental idea was to use multivariate Artificial Neural Networks (ANN) along with geostatistical models and tools in order to improve the accuracy and the interpretability of data modeling. The results obtained with ANN were compared to those of traditional geostatistical algorithms like ordinary (co)kriging and (co)kriging with an external drift. Exploratory data analysis highlighted a great variety of relationships (i.e. linear, non-linear, independence) between the 11 variables of the dataset (i.e. Cadmium, Mercury, Zinc, Copper, Titanium, Chromium, Vanadium and Nickel as well as the spatial coordinates of the measurement points and their depth). Then, exploratory spatial data analysis (i.e. anisotropic variography, local spatial correlations and moving window statistics) was carried out. It was shown that the different phenomena to be modeled were characterized by high spatial anisotropies, complex spatial correlation structures and heteroscedasticity. A feature selection procedure based on General Regression Neural Networks (GRNN) was also applied to create subsets of variables enabling improved predictions during the modeling phase. The basic modeling was conducted using a Multilayer Perceptron (MLP), which is a workhorse of ANN. MLP models are robust and highly flexible tools which can incorporate in a nonlinear manner different kinds of high-dimensional information. In the present research, the input layer was made of either two (spatial coordinates) or three neurons (when depth as auxiliary information could possibly capture an underlying trend) and the output layer was composed of one (univariate MLP) to eight neurons corresponding to the heavy metals of the dataset (multivariate MLP). MLP models with three input neurons can be referred to as Artificial Neural Networks with EXternal

  4. External validation of multivariable prediction models: a systematic review of methodological conduct and reporting

    PubMed Central

    2014-01-01

    Background Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models. Methods We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models and predictive performance measures. Results 11,826 articles were identified and 78 were included for full review, which described the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models. Conclusions The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported with key details frequently not presented. The validation studies were characterised by poor design, inappropriate handling
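
    The two performance aspects the review highlights, discrimination and calibration, can be computed as follows on simulated development and external datasets (all variable names and effect sizes here are illustrative assumptions, not taken from the review):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
beta_true = np.array([1.0, -0.5, 0.8])

# "Development" data: fit the prediction model once
X_dev = rng.standard_normal((500, 3))
y_dev = rng.binomial(1, 1 / (1 + np.exp(-(X_dev @ beta_true))))
model = LogisticRegression().fit(X_dev, y_dev)

# "External" data: evaluate the frozen model without refitting
X_ext = rng.standard_normal((500, 3))
y_ext = rng.binomial(1, 1 / (1 + np.exp(-(X_ext @ beta_true))))
lp = model.decision_function(X_ext)            # linear predictor on new data
auc = roc_auc_score(y_ext, lp)                 # discrimination (c-statistic)
# Calibration slope: regress outcomes on the linear predictor (ideal value ~1)
slope = LogisticRegression().fit(lp[:, None], y_ext).coef_[0, 0]
```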

  5. Assessment of Genetic Heterogeneity in Structured Plant Populations Using Multivariate Whole-Genome Regression Models

    PubMed Central

    Lehermeier, Christina; Schön, Chris-Carolin; de los Campos, Gustavo

    2015-01-01

    Plant breeding populations exhibit varying levels of structure and admixture; these features are likely to induce heterogeneity of marker effects across subpopulations. Traditionally, structure has been dealt with as a potential confounder, and various methods exist to “correct” for population stratification. However, these methods induce a mean correction that does not account for heterogeneity of marker effects. The animal breeding literature offers a few recent studies that consider modeling genetic heterogeneity in multibreed data, using multivariate models. However, these methods have received little attention in plant breeding where population structure can have different forms. In this article we address the problem of analyzing data from heterogeneous plant breeding populations, using three approaches: (a) a model that ignores population structure [A-genome-based best linear unbiased prediction (A-GBLUP)], (b) a stratified (i.e., within-group) analysis (W-GBLUP), and (c) a multivariate approach that uses multigroup data and accounts for heterogeneity (MG-GBLUP). The performance of the three models was assessed on three different data sets: a diversity panel of rice (Oryza sativa), a maize (Zea mays L.) half-sib panel, and a wheat (Triticum aestivum L.) data set that originated from plant breeding programs. The estimated genomic correlations between subpopulations varied from null to moderate, depending on the genetic distance between subpopulations and traits. Our assessment of prediction accuracy features cases where ignoring population structure leads to a parsimonious, more powerful model, as well as others where the multivariate and stratified approaches have higher predictive power. In general, the multivariate approach appeared slightly more robust than either the A- or the W-GBLUP. PMID:26122758

  6. Application of bioreactor design principles and multivariate analysis for development of cell culture scale down models.

    PubMed

    Tescione, Lia; Lambropoulos, James; Paranandi, Madhava Ram; Makagiansar, Helena; Ryll, Thomas

    2015-01-01

    A bench scale cell culture model representative of manufacturing scale (2,000 L) was developed based on oxygen mass transfer principles, for a CHO-based process producing a recombinant human protein. Cell culture performance differences across scales are characterized most often by sub-optimal performance in manufacturing scale bioreactors. By contrast in this study, reduced growth rates were observed at bench scale during the initial model development. Bioreactor models based on power per unit volume (P/V), volumetric mass transfer coefficient (kLa), and oxygen transfer rate (OTR) were evaluated to address this scale performance difference. Lower viable cell densities observed for the P/V model were attributed to higher sparge rates and reduced oxygen mass transfer efficiency (kLa) of the small scale hole spargers. Increasing the sparger kLa by decreasing the pore size resulted in a further decrease in growth at bench scale. Due to the sensitivity of the cell line to gas sparge rate and bubble size that was revealed by the P/V and kLa models, an OTR model based on oxygen enrichment and increased P/V was selected that generated endpoint sparge rates representative of the 2,000 L scale. This final bench scale model generated growth rates similar to those at manufacturing scale. In order to take into account other routinely monitored process parameters besides growth, a multivariate statistical approach was applied to demonstrate validity of the small scale model. After the model was selected based on univariate and multivariate analysis, product quality was generated and verified to fall within the 95% confidence limit of the multivariate model.
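    The OTR-based scale-down criterion above amounts to matching oxygen transfer across scales via OTR = kLa · (C* − C_L). A minimal sketch of that driving-force calculation (function name, units, and numbers are illustrative, not from the study):

```python
def required_kla(otr_target, c_sat, c_liquid):
    """kLa (1/h) needed to deliver a target oxygen transfer rate
    (mmol O2/L/h), using the driving-force form OTR = kLa * (C* - C_L)."""
    driving_force = c_sat - c_liquid
    if driving_force <= 0:
        raise ValueError("saturation concentration must exceed dissolved O2")
    return otr_target / driving_force

# Illustrative: kLa needed at bench scale to match a manufacturing-scale
# OTR of 5 mmol/L/h at C* = 0.2 and C_L = 0.06 mmol/L
bench_kla = required_kla(5.0, 0.2, 0.06)
```

    Holding OTR constant rather than P/V or kLa alone is what lets the bench model reproduce manufacturing-scale sparge rates.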

  7. Selection of important variables and determination of functional form for continuous predictors in multivariable model building.

    PubMed

    Sauerbrei, Willi; Royston, Patrick; Binder, Harald

    2007-12-30

    In developing regression models, data analysts are often faced with many predictor variables that may influence an outcome variable. After more than half a century of research, the 'best' way of selecting a multivariable model is still unresolved. It is generally agreed that subject matter knowledge, when available, should guide model building. However, such knowledge is often limited, and data-dependent model building is required. We limit the scope of the modelling exercise to selecting important predictors and choosing interpretable and transportable functions for continuous predictors. Assuming linear functions, stepwise selection and all-subset strategies are discussed; the key tuning parameters are the nominal P-value for testing a variable for inclusion and the penalty for model complexity, respectively. We argue that stepwise procedures perform better than a literature-based assessment would suggest. Concerning selection of functional form for continuous predictors, the principal competitors are fractional polynomial functions and various types of spline techniques. We note that a rigorous selection strategy known as multivariable fractional polynomials (MFP) has been developed. No spline-based procedure for simultaneously selecting variables and functional forms has found wide acceptance. Results of FP and spline modelling are compared in two data sets. It is shown that spline modelling, while extremely flexible, can generate fitted curves with uninterpretable 'wiggles', particularly when automatic methods for choosing the smoothness are employed. We give general recommendations to practitioners for carrying out variable and function selection. While acknowledging that further research is needed, we argue why MFP is our preferred approach for multivariable model building with continuous covariates.
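    A first-degree fractional polynomial chooses one power for a continuous predictor from a small fixed set. A minimal sketch of that selection step (the MFP procedure proper adds significance testing, degree-2 functions, and backward elimination across variables):

```python
import numpy as np

FP_POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]  # conventional FP1 power set

def fp_transform(x, p):
    # By FP convention, power 0 denotes log(x)
    return np.log(x) if p == 0 else x ** p

def best_fp1(x, y):
    """Pick the FP1 power minimizing least-squares RSS for y ~ a + b*x^p."""
    best_p, best_rss = None, np.inf
    for p in FP_POWERS:
        X = np.column_stack([np.ones_like(x), fp_transform(x, p)])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        rss = float(resid @ resid)
        if rss < best_rss:
            best_p, best_rss = p, rss
    return best_p, best_rss

# Data generated from y = log(x) should select power 0 (the log transform)
x = np.linspace(1, 10, 50)
p_best, rss = best_fp1(x, np.log(x))
```

    Restricting the candidate powers to a fixed set is what keeps the fitted functions interpretable and transportable, in contrast to free-form splines.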

  8. A multivariate-based conflict prediction model for a Brazilian freeway.

    PubMed

    Caleffi, Felipe; Anzanello, Michel José; Cybis, Helena Beatriz Bettella

    2017-01-01

    Real-time collision risk prediction models relying on traffic data can be useful in dynamic management systems that seek to improve traffic safety. Models have been proposed to predict crash occurrence and collision risk in order to proactively improve safety. This paper presents a multivariate-based framework for selecting variables for a conflict prediction model on the Brazilian BR-290/RS freeway. The Bhattacharyya Distance (BD) and Principal Component Analysis (PCA) are applied to a dataset comprised of variables that potentially help to explain the occurrence of traffic conflicts; the parameters yielded by such multivariate techniques give rise to a variable importance index that guides variable removal for later selection. Next, the selected variables are inserted into a Linear Discriminant Analysis (LDA) model to estimate conflict occurrence. A matched case-control technique is applied using traffic data processed from surveillance cameras at a segment of a Brazilian freeway. Results indicate that the variables that significantly impacted the model are associated with total flow, the difference between the standard deviations of lane occupancy, and the coefficient of variation of speed. The model allowed us to assess a characteristic behavior of major Brazilian freeways by identifying the typical Brazilian heterogeneity of traffic patterns among lanes, which leads to aggressive maneuvers. Results also indicate that the developed LDA-PCA model outperforms the LDA-BD model. The LDA-PCA model yields an average 76% classification accuracy and an average 87% sensitivity (which measures the rate of conflicts correctly predicted).
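    The LDA step, once variables are selected, reduces to a two-class Fisher discriminant. A self-contained NumPy sketch on synthetic data (the class labels and features here are illustrative stand-ins, not the paper's traffic variables):

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: w = Sw^-1 (m1 - m0),
    with the decision threshold at the projected midpoint of the means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S0 = np.cov(X0, rowvar=False) * (len(X0) - 1)   # within-class scatter
    S1 = np.cov(X1, rowvar=False) * (len(X1) - 1)
    w = np.linalg.solve(S0 + S1, m1 - m0)
    threshold = w @ (m0 + m1) / 2.0
    return w, threshold

def predict(X, w, threshold):
    return (X @ w > threshold).astype(int)   # 1 = "conflict" class

# Synthetic well-separated classes to exercise the discriminant
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, (100, 3))   # "no conflict"
X1 = rng.normal(4.0, 1.0, (100, 3))   # "conflict"
w, thr = fisher_lda(X0, X1)
```

    In the paper's pipeline, the inputs to this step would be PCA scores (LDA-PCA) or BD-ranked variables (LDA-BD) rather than raw features.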

  9. A multivariate conditional model for streamflow prediction and spatial precipitation refinement

    NASA Astrophysics Data System (ADS)

    Liu, Zhiyong; Zhou, Ping; Chen, Xiuzhi; Guan, Yinghui

    2015-10-01

    The effective prediction and estimation of hydrometeorological variables are important for water resources planning and management. In this study, we propose a multivariate conditional model for streamflow prediction and the refinement of spatial precipitation estimates. This model consists of high dimensional vine copulas, conditional bivariate copula simulations, and a quantile-copula function. The vine copula is employed because of its flexibility in modeling the high dimensional joint distribution of multivariate data by building a hierarchy of conditional bivariate copulas. We investigate two cases to evaluate the performance and applicability of the proposed approach. In the first case, we generate one-month-ahead streamflow forecasts that incorporate multiple predictors, including antecedent precipitation and streamflow records, in a basin located in South China. The prediction accuracy of the vine-based model is compared with that of traditional data-driven models such as the support vector regression (SVR) and the adaptive neuro-fuzzy inference system (ANFIS). The results indicate that the proposed model produces more skillful forecasts than SVR and ANFIS. Moreover, this probabilistic model yields additional information concerning the predictive uncertainty. The second case involves refining spatial precipitation estimates derived from the Tropical Rainfall Measuring Mission (TRMM) precipitation product for the Yangtze River basin by incorporating remotely sensed soil moisture data and the observed precipitation from meteorological gauges over the basin. The validation results indicate that the proposed model successfully refines the spatial precipitation estimates. Although this model is tested for specific cases, it can be extended to other hydrometeorological variables for predictions and spatial estimations.
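    Vine constructions are assembled from conditional bivariate copulas, the so-called h-functions; for the Gaussian pair-copula the h-function has a closed form. A stdlib-only sketch of that single building block (the paper's vine and quantile-copula machinery is far richer):

```python
from statistics import NormalDist

_N = NormalDist()

def gaussian_h(u2, u1, rho):
    """h(u2 | u1; rho) = P(U2 <= u2 | U1 = u1) for a bivariate Gaussian
    copula with correlation rho; both u1 and u2 lie in (0, 1)."""
    z1, z2 = _N.inv_cdf(u1), _N.inv_cdf(u2)
    return _N.cdf((z2 - rho * z1) / (1.0 - rho * rho) ** 0.5)
```

    With rho = 0 the conditioning is inert and h(u2 | u1) = u2, while strong positive dependence pulls the conditional CDF toward the conditioning value; stacking such h-functions is how a vine builds its hierarchy of conditional distributions.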

  10. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    SciTech Connect

    Rupšys, P.

    2015-10-28

    A system of stochastic differential equations (SDE) with mixed-effects parameters and multivariate normal copula density function were used to develop tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside bark diameter at breast height, and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to the regression tree height equations. The results are implemented in the symbolic computational language MAPLE.

  12. Modeling climate effects on hip fracture rate by the multivariate GARCH model in Montreal region, Canada

    NASA Astrophysics Data System (ADS)

    Modarres, Reza; Ouarda, Taha B. M. J.; Vanasse, Alain; Orzanco, Maria Gabriela; Gosselin, Pierre

    2014-07-01

    Changes in extreme meteorological variables and the demographic shift towards an older population have made it important to investigate the association of climate variables and hip fracture by advanced methods in order to determine the climate variables that most affect hip fracture incidence. The nonlinear autoregressive moving average with exogenous variables-generalized autoregressive conditional heteroscedasticity (ARMAX-GARCH) and multivariate GARCH (MGARCH) time series approaches were applied to investigate the nonlinear association between hip fracture rate in female and male patients aged 40-74 and 75+ years and climate variables in the period of 1993-2004, in Montreal, Canada. The models describe 50-56 % of daily variation in hip fracture rate and identify snow depth, air temperature, day length and air pressure as the influencing variables on the time-varying mean and variance of the hip fracture rate. The conditional covariance between climate variables and hip fracture rate increases exponentially, showing that the effect of climate variables on hip fracture rate is most acute when rates are high and climate conditions are at their worst. In Montreal, climate variables, particularly snow depth and air temperature, appear to be important predictors of hip fracture incidence. The association of climate variables and hip fracture does not seem to change linearly with time, but increases exponentially under harsh climate conditions. The results of this study can be used to provide an adaptive climate-related public health program and to guide the allocation of services aimed at reducing hip fracture risk.
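    The GARCH component of these models makes the conditional variance a recursion in past shocks. A univariate GARCH(1,1) sketch of that recursion (the study's MGARCH adds a multivariate covariance structure and exogenous climate terms, which are omitted here):

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1):
        h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}
    Started at the unconditional variance omega / (1 - alpha - beta)."""
    if alpha + beta >= 1:
        raise ValueError("alpha + beta must be < 1 for stationarity")
    h = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h

# With no shocks, variance decays from the unconditional level toward omega/(1-beta)
h = garch11_variance([0.0] * 5, omega=0.1, alpha=0.1, beta=0.8)
```

    The same mechanism is what lets the fitted models capture variance that swells under harsh climate conditions rather than staying constant.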

  13. Forecasting of municipal solid waste quantity in a developing country using multivariate grey models

    SciTech Connect

    Intharathirat, Rotchana; Abdul Salam, P.; Kumar, S.; Untong, Akarapong

    2015-05-15

    Highlights: • Grey model can be used to forecast MSW quantity accurately with the limited data. • Prediction interval overcomes the uncertainty of MSW forecast effectively. • A multivariate model gives accuracy associated with factors affecting MSW quantity. • Population, urbanization, employment and household size play role for MSW quantity. - Abstract: In order to plan, manage and use municipal solid waste (MSW) in a sustainable way, accurate forecasting of MSW generation and composition plays a key role. It is difficult to carry out reliable estimates using the existing models due to the limited data available in the developing countries. This study aims to forecast MSW collected in Thailand, with prediction intervals over the long term, by using an optimized multivariate grey model, a mathematical approach. For the multivariate models, the representative factors of the residential and commercial sectors affecting waste collected are identified, classified and quantified based on the statistics and mathematics of grey system theory. Results show that GMC (1, 5), the grey model with convolution integral, is the most accurate, with the least error of 1.16% MAPE. MSW collected would increase by 1.40% per year, from 43,435–44,994 tonnes per day in 2013 to 55,177–56,735 tonnes per day in 2030. This model also illustrates that population density is the most important factor affecting MSW collected, followed by urbanization, employment proportion and household size, respectively. This suggests that the representative factors of the commercial sector may affect MSW collected more than those of the residential sector. Results can help decision makers to develop long-term waste management measures and policies.
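    The grey-model family starts from the univariate GM(1,1), whose accumulate-fit-difference cycle is the core of the multivariate variants. A minimal sketch (the paper's GMC(1,5) adds a convolution integral over four explanatory factors, which is not shown):

```python
import numpy as np

def gm11_forecast(x, horizon):
    """Minimal GM(1,1) grey forecast: accumulate the series, fit the grey
    differential equation x + a*z = b by least squares, solve, difference back."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                        # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])             # background (mean) values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    n = len(x)
    k = np.arange(n + horizon)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a   # solved AGO series
    x_hat = np.empty_like(x1_hat)
    x_hat[0] = x1_hat[0]
    x_hat[1:] = np.diff(x1_hat)              # inverse accumulation
    return x_hat[n:]

# Illustrative: a 5%-growth series should be extrapolated almost exactly
x = [100 * 1.05 ** k for k in range(10)]
next_vals = gm11_forecast(x, 1)
```

    Grey models are attractive in data-poor settings precisely because this fit needs only a short series; the multivariate GMC(1, n) extends B with convolution terms for each driving factor.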

  14. Multivariate Bias Correction Procedures for Improving Water Quality Predictions using Mechanistic Models

    NASA Astrophysics Data System (ADS)

    Libera, D.; Arumugam, S.

    2015-12-01

    Water quality observations are usually not available on a continuous basis because of the expensive cost and labor requirements so calibrating and validating a mechanistic model is often difficult. Further, any model predictions inherently have bias (i.e., under/over estimation) and require techniques that preserve the long-term mean monthly attributes. This study suggests and compares two multivariate bias-correction techniques to improve the performance of the SWAT model in predicting daily streamflow, TN Loads across the southeast based on split-sample validation. The first approach is a dimension reduction technique, canonical correlation analysis that regresses the observed multivariate attributes with the SWAT model simulated values. The second approach is from signal processing, importance weighting, that applies a weight based off the ratio of the observed and model densities to the model data to shift the mean, variance, and cross-correlation towards the observed values. These procedures were applied to 3 watersheds chosen from the Water Quality Network in the Southeast Region; specifically watersheds with sufficiently large drainage areas and number of observed data points. The performance of these two approaches are also compared with independent estimates from the USGS LOADEST model. Uncertainties in the bias-corrected estimates due to limited water quality observations are also discussed.

  15. Note: Multivariate system spectroscopic model using Lorentz oscillators and partial least squares regression analysis

    NASA Astrophysics Data System (ADS)

    Gad, R. S.; Parab, J. S.; Naik, G. M.

    2010-11-01

    A multivariate system spectroscopic model plays an important role in understanding the chemometrics of the ensemble under study. In this manuscript we discuss various approaches to modeling a spectroscopic system and demonstrate how Lorentz oscillators can be used to model any general spectroscopic system. Chemometric studies require customized template design for the corresponding variants participating in the ensemble, which generates the characteristic matrix of the ensemble under study. A typical biological system that resembles human blood tissue, consisting of five major constituents, i.e., alanine, urea, lactate, glucose and ascorbate, has been tested on the model. The model was validated using three approaches, namely, root mean square error (RMSE) analysis in the range of a ±5% confidence interval, the Clarke error grid plot, and an RMSE versus percent noise level study. The model was also tested across various template sizes (consisting of samples ranging from 10 up to 1000) to ascertain the validity of partial least squares regression. The model has potential in understanding the chemometrics of proteomics pathways.
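    The forward model in this kind of study synthesizes a spectrum as a superposition of Lorentzian lines, one or more per constituent. A minimal sketch of that lineshape sum (peak positions, strengths, and widths below are illustrative, not the blood-tissue template):

```python
import numpy as np

def lorentz_spectrum(freq, centers, strengths, widths):
    """Sum of Lorentzian lines: each line contributes
        S * g^2 / ((f - f0)^2 + g^2),
    so the peak height at f0 is S and the half-width at half-maximum is g."""
    f = np.asarray(freq, float)[:, None]
    c = np.asarray(centers, float)
    s = np.asarray(strengths, float)
    g = np.asarray(widths, float)
    return (s * g ** 2 / ((f - c) ** 2 + g ** 2)).sum(axis=1)
```

    Stacking such synthetic spectra for varying constituent concentrations yields the characteristic matrix on which a PLS regression can then be calibrated.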

  16. Multivariate Markov processes for stochastic systems with delays: application to the stochastic Gompertz model with delay.

    PubMed

    Frank, T D

    2002-07-01

    Using the method of steps, we describe stochastic processes with delays in terms of Markov diffusion processes. Thus, multivariate Langevin equations and Fokker-Planck equations are derived for stochastic delay differential equations. Natural, periodic, and reflective boundary conditions are discussed. Both Ito and Stratonovich calculus are used. In particular, our Fokker-Planck approach recovers the generalized delay Fokker-Planck equation proposed by Guillouzic et al. The results obtained are applied to a model for population growth: the Gompertz model with delay and multiplicative white noise.
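    A delayed stochastic Gompertz model of the kind discussed can be integrated by Euler-Maruyama (Itô interpretation), carrying the history buffer that the method of steps formalizes. A sketch under an assumed drift form X(a − b ln X(t−τ)) with multiplicative noise (the exact parameterization in the paper may differ):

```python
import math
import random

def gompertz_delay_em(a, b, sigma, tau, x0, dt, steps, seed=0):
    """Euler-Maruyama for dX = X(t)[a - b ln X(t - tau)] dt + sigma X(t) dW.
    The pre-history on [-tau, 0] is held constant at x0."""
    rng = random.Random(seed)
    lag = max(1, round(tau / dt))
    path = [x0] * (lag + 1)            # constant pre-history buffer
    x = x0
    for _ in range(steps):
        x_del = path[-lag - 1] if tau > 0 else x
        drift = x * (a - b * math.log(x_del))
        dw = rng.gauss(0.0, math.sqrt(dt))
        x = x + drift * dt + sigma * x * dw
        path.append(x)
    return path
```

    With sigma = 0 and a small delay the path settles at the deterministic equilibrium exp(a/b), which is a convenient sanity check before turning the noise on.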

  17. A model-based examination of multivariate physical modes in the Gulf of Alaska

    NASA Astrophysics Data System (ADS)

    Hermann, A. J.; Ladd, C.; Cheng, W.; Curchitser, E. N.; Hedstrom, K.

    2016-10-01

    We use multivariate output from a hydrodynamic model of the Gulf of Alaska (GOA) to explore the covariance among its physical state and air/sea fluxes. We attempt to summarize this coupled variability using a limited set of patterns, and examine their correlation to three large-scale climate indices relevant to the Northeast Pacific. This analysis is focused on perturbations from monthly climatology of the following attributes of the GOA: sea surface temperature, sea surface height, mixed layer depth, sea surface salinity, latent heat flux, sensible heat flux, shortwave irradiance, net longwave irradiance, currents at 40 m depth, and wind stress. We identified two multivariate modes, both substantially correlated with the Pacific Decadal Oscillation (PDO) and Multivariate El Niño (MEI) indices on interannual timescales, which together account for ~30% of the total normalized variance of the perturbation time series. These two modes indicate the following covarying events during periods of positive PDO/MEI: (1) anomalously warm, wet and windy conditions (typically in winter), with elevated coastal SSH, followed 2-5 months later by (2) reduced cloud cover, with emerging shelf-break eddies. Similar modes are found when the analysis is performed separately on the eastern and western GOA; in general, modal amplitudes appear stronger in the western GOA.

  18. Synthetic Multivariate Models to Accommodate Unmodeled Interfering Components During Quantitative Spectral Analyses

    SciTech Connect

    Haaland, David M.

    1999-07-14

    The analysis precision of any multivariate calibration method will be severely degraded if unmodeled sources of spectral variation are present in the unknown sample spectra. This paper describes a synthetic method for correcting for the errors generated by the presence of unmodeled components or other sources of unmodeled spectral variation. If the spectral shape of the unmodeled component can be obtained and mathematically added to the original calibration spectra, then a new synthetic multivariate calibration model can be generated to accommodate the presence of the unmodeled source of spectral variation. This new method is demonstrated for the presence of unmodeled temperature variations in the unknown sample spectra of dilute aqueous solutions of urea, creatinine, and NaCl. When constant-temperature PLS models are applied to spectra of samples of variable temperature, the standard errors of prediction (SEP) are approximately an order of magnitude higher than the original cross-validated SEPs of the constant-temperature partial least squares models. Synthetic models using the classical least squares estimates of temperature from pure water or variable-temperature mixture sample spectra reduce the errors significantly for the variable temperature samples. Spectrometer drift adds additional error to the analyte determinations, but a method is demonstrated that can minimize the effect of drift on prediction errors through the measurement of the spectra of a small subset of samples during both calibration and prediction. In addition, sample temperature can be predicted with high precision with this new synthetic model without the need to recalibrate using actual variable-temperature sample data. The synthetic methods eliminate the need for expensive generation of new calibration samples and collection of their spectra. The methods are quite general and can be applied using any known source of spectral variation and can be used with any multivariate

  19. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    SciTech Connect

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van't

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
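    The LASSO fit the abstract recommends can be computed by cyclic coordinate descent with soft-thresholding. A compact NumPy sketch on synthetic data (not the xerostomia cohort; the penalty weight lam below is an illustrative choice, not a tuned value):

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Cyclic coordinate descent for the lasso objective
        0.5/n * ||y - X b||^2 + lam * ||b||_1
    via the soft-threshold update on each coordinate in turn."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]     # partial residual
            rho = X[:, j] @ r / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

# Only two of five predictors drive the outcome; lasso should zero the rest
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 1]
beta = lasso_cd(X, y, lam=0.3)
```

    The exact zeros produced by the soft-threshold are what make the resulting NTCP model both sparse and interpretable, unlike the all-or-nothing inclusion of stepwise selection.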

  20. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data

    NASA Technical Reports Server (NTRS)

    McKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.

    2005-01-01

    Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield the best performance and avoid model discontinuity over day/night data boundaries.

  1. Quantitative modeling of bioconcentration factors of carbonyl herbicides using multivariate image analysis.

    PubMed

    Freitas, Mirlaine R; Barigye, Stephen J; Daré, Joyce K; Freitas, Matheus P

    2016-06-01

    The bioconcentration factor (BCF) is an important parameter used to estimate the propensity of chemicals to accumulate in aquatic organisms from the ambient environment. While simple regressions for estimating the BCF of chemical compounds from water solubility or the n-octanol/water partition coefficient have been proposed in the literature, these models do not always yield good correlations, and more descriptive variables are required for better modeling of BCF data for a given series of organic pollutants, such as some herbicides. Thus, the logBCF values for a set of carbonyl herbicides comprising amide, urea, carbamate and thiocarbamate groups were quantitatively modeled using multivariate image analysis (MIA) descriptors, derived from colored image representations of chemical structures. The logBCF model was calibrated and rigorously validated (r² = 0.79, q² = 0.70 and r²test = 0.81), providing a comprehensive three-parameter linear equation after variable selection (logBCF = 5.682 - 0.00233 × X9774 - 0.00070 × X813 - 0.00273 × X5144); the variables represent pixel coordinates in the multivariate image. Finally, chemical interpretation of the obtained models in terms of the structural characteristics responsible for the enhanced or reduced logBCF values was performed, providing key leads in the prospective development of more eco-friendly synthetic herbicides.

  2. On the Bayesian Treed Multivariate Gaussian Process with Linear Model of Coregionalization

    SciTech Connect

    Konomi, Bledar A.; Karagiannis, Georgios; Lin, Guang

    2015-02-01

    The Bayesian treed Gaussian process (BTGP) has gained popularity in recent years because it provides a straightforward mechanism for modeling non-stationary data and can alleviate computational demands by fitting models to less data. The extension of BTGP to the multivariate setting requires us to model the cross-covariance and to propose efficient algorithms that can deal with trans-dimensional MCMC moves. In this paper we extend the cross-covariance of the Bayesian treed multivariate Gaussian process (BTMGP) to the linear model of coregionalization (LMC) cross-covariances. Different strategies have been developed to improve the MCMC mixing and invert smaller matrices in the Bayesian inference. Moreover, we compare the proposed BTMGP with existing multiple BTGP and BTMGP models in test cases and in a multiphase flow computer experiment on a full-scale regenerator of a carbon capture unit. The use of the BTMGP with the LMC cross-covariance predicted the computer experiments better than the existing competitors. The proposed model has a wide variety of applications, such as computer experiments and environmental data. In the case of computer experiments we also develop an adaptive sampling strategy for the BTMGP with the LMC cross-covariance function.

  3. Forecasting of municipal solid waste quantity in a developing country using multivariate grey models.

    PubMed

    Intharathirat, Rotchana; Abdul Salam, P; Kumar, S; Untong, Akarapong

    2015-05-01

    In order to plan, manage and use municipal solid waste (MSW) in a sustainable way, accurate forecasting of MSW generation and composition plays a key role. It is difficult to carry out reliable estimates using the existing models due to the limited data available in the developing countries. This study aims to forecast MSW collected in Thailand, with prediction intervals over the long term, by using an optimized multivariate grey model, a mathematical approach. For the multivariate models, the representative factors of the residential and commercial sectors affecting waste collected are identified, classified and quantified based on the statistics and mathematics of grey system theory. Results show that GMC (1, 5), the grey model with convolution integral, is the most accurate, with the least error of 1.16% MAPE. MSW collected would increase by 1.40% per year, from 43,435-44,994 tonnes per day in 2013 to 55,177-56,735 tonnes per day in 2030. This model also illustrates that population density is the most important factor affecting MSW collected, followed by urbanization, employment proportion and household size, respectively. This suggests that the representative factors of the commercial sector may affect MSW collected more than those of the residential sector. Results can help decision makers to develop long-term waste management measures and policies.

  4. Surgical workflow analysis with Gaussian mixture multivariate autoregressive (GMMAR) models: a simulation study.

    PubMed

    Loukas, Constantinos; Georgiou, Evangelos

    2013-01-01

    There is currently great interest in analyzing the workflow of minimally invasive operations performed in a physical or simulation setting, with the aim of extracting important information that can be used for skills improvement, optimization of intraoperative processes, and comparison of different interventional strategies. The first step in achieving this goal is to segment the operation into its key interventional phases, which is currently approached by modeling a multivariate signal that describes the temporal usage of a predefined set of tools. Although this technique has shown promising results, it is challenged by the manual extraction of the tool usage sequence and the inability to simultaneously evaluate the surgeon's skills. In this paper we describe an alternative methodology for surgical phase segmentation and performance analysis based on Gaussian mixture multivariate autoregressive (GMMAR) models of the hand kinematics. Unlike previous work in this area, our technique employs signals from orientation sensors, attached to the endoscopic instruments of a virtual reality simulator, without considering which tools are employed at each time-step of the operation. First, based on pre-segmented hand motion signals, a training set of regression coefficients is created for each surgical phase using multivariate autoregressive (MAR) models. Then, a signal from a new operation is processed with GMMAR, wherein each phase is modeled by a Gaussian component of regression coefficients. These coefficients are compared to those of the training set. The operation is segmented according to the prior probabilities of the surgical phases estimated via GMMAR. The method also allows for the study of motor behavior and hand motion synchronization demonstrated in each phase, a quality that can be incorporated into modern laparoscopic simulators for skills assessment.

  5. A multivariate multilevel Gaussian model with a mixed effects structure in the mean and covariance part.

    PubMed

    Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel

    2014-05-20

A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean, we also assume that the covariance matrix depends on covariates and random effects. This allows us to explore whether the covariance structure depends on the values of the higher levels and, as such, models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher-level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study to answer the research question of whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on unrecorded factors that vary with countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach.

  6. U.S. Truck Driver Anthropometric Study and Multivariate Anthropometric Models for Cab Designs

    PubMed Central

    Guan, Jinhua; Hsiao, Hongwei; Bradtmiller, Bruce; Kau, Tsui-Ying; Reed, Matthew R.; Jahns, Steven K.; Loczi, Josef; Hardee, H. Lenora; Piamonte, Dominic Paul T.

    2015-01-01

    Objective This study presents data from a large-scale anthropometric study of U.S. truck drivers and the multivariate anthropometric models developed for the design of next-generation truck cabs. Background Up-to-date anthropometric information of the U.S. truck driver population is needed for the design of safe and ergonomically efficient truck cabs. Method We collected 35 anthropometric dimensions for 1,950 truck drivers (1,779 males and 171 females) across the continental United States using a sampling plan designed to capture the appropriate ethnic, gender, and age distributions of the truck driver population. Results Truck drivers are heavier than the U.S. general population, with a difference in mean body weight of 13.5 kg for males and 15.4 kg for females. They are also different in physique from the U.S. general population. In addition, the current truck drivers are heavier and different in physique compared to their counterparts of 25 to 30 years ago. Conclusion The data obtained in this study provide more accurate anthropometric information for cab designs than do the current U.S. general population data or truck driver data collected 25 to 30 years ago. Multivariate anthropometric models, spanning 95% of the current truck driver population on the basis of a set of 12 anthropometric measurements, have been developed to facilitate future cab designs. Application The up-to-date truck driver anthropometric data and multivariate anthropometric models will benefit the design of future truck cabs which, in turn, will help promote the safety and health of the U.S. truck drivers. PMID:23156628
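One common way to build such multivariate accommodation models (an assumption here, not necessarily the study's exact procedure) is to place "boundary manikins" on the 95% ellipsoid of a multivariate normal fitted to the measurements; the sketch below uses synthetic stand-in data for 12 correlated dimensions:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(10)
# Synthetic stand-in: n drivers by p key anthropometric measurements,
# driven by one dominant "overall size" factor plus independent variation
n, p = 1950, 12
size_factor = rng.standard_normal((n, 1))
data = size_factor + 0.5 * rng.standard_normal((n, p))
mu = data.mean(axis=0)
cov = np.cov(data, rowvar=False)

# Boundary models: points on the 95% accommodation ellipsoid along principal axes
vals, vecs = np.linalg.eigh(cov)
r = np.sqrt(chi2.ppf(0.95, df=p))
manikins = np.vstack([mu + sign * r * np.sqrt(vals[i]) * vecs[:, i]
                      for i in range(p) for sign in (-1, 1)])
```

Each of the 24 boundary cases lies exactly on the 95% ellipsoid, so a cab accommodating all of them accommodates roughly 95% of the modeled population.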

  7. Estimating a graphical intra-class correlation coefficient (GICC) using multivariate probit-linear mixed models.

    PubMed

    Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S

    2015-09-01

Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results under varied settings are presented, and our method is applied to the KIRBY21 test-retest dataset.
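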

  8. Prediction of Stock Returns Based on Cross-Sectional Multivariable Model

    NASA Astrophysics Data System (ADS)

    Yamada, Shinya; Takahashi, Shinsuke; Funabashi, Motohisa

A new prediction method of stock returns was constructed from a cross-sectional multivariable model where explanatory variables are current financial indexes and the explained variable is a future stock return. To achieve precise prediction, explanatory variables were appropriately selected over time based on various test statistics and optimization of a performance index of expected portfolio return. A long-short portfolio, in which stocks with high predicted return were bought and stocks with low predicted return were sold short, was constructed to evaluate the proposed method. The simulation test showed that the proposed prediction method was effective in achieving high portfolio performance.

  9. Multivariate Normal Tissue Complication Probability Modeling of Heart Valve Dysfunction in Hodgkin Lymphoma Survivors

    SciTech Connect

    Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto

    2013-10-01

Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole heart, cardiac chambers, and lung dose distribution parameters was collected, and the correlations to RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as heart maximum dose or cardiac chambers V30 increase. They also increase with larger volumes of the heart or cardiac chambers and decrease when lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only heart irradiation dose but also the combined effects of lung and heart volumes. Our study establishes statistical evidence of the indirect effect of lung size on radiation-induced heart toxicity.
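The core of such a 3-variable logistic NTCP model can be sketched as follows; the cohort size matches the study, but the predictor values, effect sizes, and outcomes below are synthetic assumptions, not the paper's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 56  # cohort size as in the study; all values below are synthetic
heart_dmax = rng.normal(30, 8, n)      # maximum heart dose (Gy)
heart_vol = rng.normal(600, 90, n)     # heart volume (cc)
lung_vol = rng.normal(3500, 400, n)    # lung volume (cc)

# Synthetic outcome with the signs reported in the abstract:
# risk rises with dose and heart volume, falls with lung volume
logit = (0.15 * (heart_dmax - 30) + 0.01 * (heart_vol - 600)
         - 0.002 * (lung_vol - 3500) - 0.3)
rvd = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([heart_dmax, heart_vol, lung_vol])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize for stable fitting
ntcp = LogisticRegression().fit(X_std, rvd)
auc = roc_auc_score(rvd, ntcp.predict_proba(X_std)[:, 1])
```

The paper additionally uses bootstrap resampling to choose the model order and variables, and reports AUC on that basis; the sketch above shows only the final fit-and-score step.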

  10. TOXICITY PROFILING OF ENGINEERED NANOMATERIALS VIA MULTIVARIATE DOSE-RESPONSE SURFACE MODELING

    PubMed Central

    Patel, Trina; Telesca, Donatello; George, Saji; Nel, André E.

    2014-01-01

    New generation in vitro high-throughput screening (HTS) assays for the assessment of engineered nanomaterials provide an opportunity to learn how these particles interact at the cellular level, particularly in relation to injury pathways. These types of assays are often characterized by small sample sizes, high measurement error and high dimensionality, as multiple cytotoxicity outcomes are measured across an array of doses and durations of exposure. In this paper we propose a probability model for the toxicity profiling of engineered nanomaterials. A hierarchical structure is used to account for the multivariate nature of the data by modeling dependence between outcomes and thereby combining information across cytotoxicity pathways. In this framework we are able to provide a flexible surface-response model that provides inference and generalizations of various classical risk assessment parameters. We discuss applications of this model to data on eight nanoparticles evaluated in relation to four cytotoxicity parameters. PMID:25191531

  11. Probabilistic, multi-variate flood damage modelling using random forests and Bayesian networks

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Schröter, Kai

    2015-04-01

    Decisions on flood risk management and adaptation are increasingly based on risk analyses. Such analyses are associated with considerable uncertainty, even more if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention recently, they are hardly applied in flood damage assessments. Most of the damage models usually applied in standard practice have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. This presentation will show approaches for probabilistic, multi-variate flood damage modelling on the micro- and meso-scale and discuss their potential and limitations. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Schröter, K., Kreibich, H., Vogel, K., Riggelsen, C., Scherbaum, F., Merz, B. (2014): How useful are complex flood damage models? - Water Resources Research, 50, 4, p. 3378-3395.
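The tree-based, probabilistic side of such damage models can be sketched with a random forest whose per-tree predictions give an empirical predictive distribution rather than a single stage-damage value; the predictors and damage ratios below are synthetic assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 400
depth = rng.uniform(0, 3, n)          # inundation depth (m)
duration = rng.uniform(1, 72, n)      # flood duration (h)
value = rng.uniform(50, 500, n)       # building value (k EUR)
# Synthetic damage ratio: saturating in depth, mildly increasing with duration
ratio = np.clip(0.4 * np.sqrt(depth) + 0.001 * duration
                + rng.normal(0, 0.05, n), 0, 1)

X = np.column_stack([depth, duration, value])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ratio)

# Per-tree predictions approximate a predictive distribution, not just a point
x_new = np.array([[1.5, 24.0, 200.0]])
per_tree = np.array([t.predict(x_new)[0] for t in rf.estimators_])
mean = per_tree.mean()
p10, p90 = np.percentile(per_tree, [10, 90])
```

Reporting the (p10, p90) spread alongside the mean is one simple way to carry damage-model uncertainty into a risk analysis.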

  12. Multivariate model of female black bear habitat use for a Geographic Information System

    USGS Publications Warehouse

    Clark, Joseph D.; Dunn, James E.; Smith, Kimberly G.

    1993-01-01

    Simple univariate statistical techniques may not adequately assess the multidimensional nature of habitats used by wildlife. Thus, we developed a multivariate method to model habitat-use potential using a set of female black bear (Ursus americanus) radio locations and habitat data consisting of forest cover type, elevation, slope, aspect, distance to roads, distance to streams, and forest cover type diversity score in the Ozark Mountains of Arkansas. The model is based on the Mahalanobis distance statistic coupled with Geographic Information System (GIS) technology. That statistic is a measure of dissimilarity and represents a standardized squared distance between a set of sample variates and an ideal based on the mean of variates associated with animal observations. Calculations were made with the GIS to produce a map containing Mahalanobis distance values within each cell on a 60- × 60-m grid. The model identified areas of high habitat use potential that could not otherwise be identified by independent perusal of any single map layer. This technique avoids many pitfalls that commonly affect typical multivariate analyses of habitat use and is a useful tool for habitat manipulation or mitigation to favor terrestrial vertebrates that use habitats on a landscape scale.
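The Mahalanobis-distance computation at the heart of this model can be sketched directly; the three habitat variables below are a hypothetical subset of the seven used in the study, with synthetic values:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical habitat variates at radio locations (rows):
# elevation (m), slope (deg), distance to road (m)
use = np.column_stack([
    rng.normal(450, 60, 120),
    rng.normal(15, 5, 120),
    rng.normal(800, 200, 120),
])
mu = use.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(use, rowvar=False))

def mahalanobis_sq(cells):
    """Squared Mahalanobis distance of each grid cell's variates from the
    mean of the use locations; small values = high habitat-use potential."""
    d = cells - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

# Two grid cells: one near the use-location mean, one far from it
cells = np.array([[455.0, 14.0, 820.0],
                  [900.0, 40.0, 50.0]])
d2 = mahalanobis_sq(cells)
```

In the GIS application, this distance is evaluated for every 60- x 60-m cell and the resulting surface is mapped.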

  13. Adding local components to global functions for continuous covariates in multivariable regression modeling.

    PubMed

    Binder, H; Sauerbrei, W

    2010-03-30

    When global techniques, based on fractional polynomials (FPs), are employed for modeling potentially nonlinear effects of several continuous covariates on a response, accessible model equations are obtained. However, local features might be missed. Therefore, a procedure is introduced, which systematically checks model fits, obtained by the multivariable fractional polynomial (MFP) approach, for overlooked local features. Statistically significant local polynomials are then parsimoniously added. This approach, called MFP + L, is seen to result in an effective control of the Type I error with respect to the addition of local components in a small simulation study with univariate and multivariable settings. Prediction performance is compared with that of a penalized regression spline technique. In a setting unfavorable for FPs, the latter outperforms the MFP approach, if there is much information in the data. However, the addition of local features reduces this performance difference. There is only a small detrimental effect in settings where the MFP approach performs better. In an application example with children's respiratory health data, fits from the spline-based approach indicate many local features, but MFP + L adds only few significant features, which seem to have good support in the data. The proposed approach may be expected to be superior in settings with local features, but retains the good properties of the MFP approach in a large number of settings where global functions are sufficient.

  14. Ecological prediction with nonlinear multivariate time-frequency functional data models

    USGS Publications Warehouse

    Yang, Wen-Hsi; Wikle, Christopher K.; Holan, Scott H.; Wildhaber, Mark L.

    2013-01-01

    Time-frequency analysis has become a fundamental component of many scientific inquiries. Due to improvements in technology, the amount of high-frequency signals that are collected for ecological and other scientific processes is increasing at a dramatic rate. In order to facilitate the use of these data in ecological prediction, we introduce a class of nonlinear multivariate time-frequency functional models that can identify important features of each signal as well as the interaction of signals corresponding to the response variable of interest. Our methodology is of independent interest and utilizes stochastic search variable selection to improve model selection and performs model averaging to enhance prediction. We illustrate the effectiveness of our approach through simulation and by application to predicting spawning success of shovelnose sturgeon in the Lower Missouri River.

  15. A multivariate model for the meta-analysis of study level survival data at multiple times.

    PubMed

    Jackson, Dan; Rollins, Katie; Coughlin, Patrick

    2014-09-01

Motivated by our meta-analytic dataset involving survival rates after treatment for critical leg ischemia, we develop and apply a new multivariate model for the meta-analysis of study level survival data at multiple times. Our data set involves 50 studies that provide mortality rates at up to seven time points, which we model simultaneously, and we compare the results to those obtained from standard methodologies. Our method uses exact binomial within-study distributions and enforces the constraints that both the study specific and the overall mortality rates must not decrease over time. We directly model the probabilities of mortality at each time point, which are the quantities of primary clinical interest. We also present I² statistics that quantify the impact of the between-study heterogeneity, which is very considerable in our data set.
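The monotonicity constraint (cumulative mortality cannot decrease over follow-up time) can be illustrated with isotonic regression; the paper's actual model uses exact binomial within-study likelihoods, which this sketch omits, and the rates below are illustrative:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Pooled mortality estimates at follow-up times (years); noisy and not monotone
times = np.array([1, 2, 3, 4, 5, 6, 7])
raw = np.array([0.12, 0.18, 0.17, 0.25, 0.24, 0.30, 0.33])

# Project onto the non-decreasing sequences (pool-adjacent-violators)
iso = IsotonicRegression(increasing=True)
monotone = iso.fit_transform(times, raw)
```

Adjacent violating estimates are pooled to their average, yielding the closest non-decreasing mortality curve in least squares.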

  16. Adjusting the Adjusted X[superscript 2]/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    ERIC Educational Resources Information Center

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X[superscript 2]/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  17. Disaster Hits Home: A Model of Displaced Family Adjustment after Hurricane Katrina

    ERIC Educational Resources Information Center

    Peek, Lori; Morrissey, Bridget; Marlatt, Holly

    2011-01-01

    The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…

  18. PM10 modeling in the Oviedo urban area (Northern Spain) by using multivariate adaptive regression splines

    NASA Astrophysics Data System (ADS)

    Nieto, Paulino José García; Antón, Juan Carlos Álvarez; Vilán, José Antonio Vilán; García-Gonzalo, Esperanza

    2014-10-01

The aim of this research work is to build a regression model of the particulate matter up to 10 micrometers in size (PM10) by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (Northern Spain) at local scale. This research work explores the use of a nonparametric regression algorithm known as multivariate adaptive regression splines (MARS), which has the ability to approximate the relationship between the inputs and outputs and to express that relationship mathematically. In this sense, hazardous air pollutants or toxic air contaminants refer to any substance that may cause or contribute to an increase in mortality or serious illness, or that may pose a present or potential hazard to human health. To accomplish the objective of this study, the experimental dataset of nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3) and dust (PM10) was collected over 3 years (2006-2008) and used to create a highly nonlinear model of the PM10 in the Oviedo urban nucleus (Northern Spain) based on the MARS technique. One main objective of this model is to obtain a preliminary estimate of the dependence of the PM10 pollutant on the other pollutants in the Oviedo urban area at local scale. A second aim is to determine the factors with the greatest bearing on air quality with a view to proposing health and lifestyle improvements. The United States National Ambient Air Quality Standards (NAAQS) establish the limit values of the main pollutants in the atmosphere in order to protect human health. Firstly, this MARS regression model draws on statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of
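The MARS building block is the paired hinge function, max(0, x-t) and max(0, t-x); a stripped-down least-squares sketch with fixed candidate knots follows (real MARS selects knots and terms adaptively by forward/backward passes, and the data here are synthetic, not the Oviedo measurements):

```python
import numpy as np

def hinge_basis(x, knots):
    """MARS-style hinge pairs max(0, x-t) and max(0, t-x) for each knot t."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0, x - t))
        cols.append(np.maximum(0, t - x))
    return np.column_stack(cols)

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 300)  # hypothetical standardized precursor-pollutant level
# Piecewise-linear response with a regime change at x = 4, plus noise
y = np.where(x < 4, 2 + 0.5 * x, 4 + 2.0 * (x - 4)) + rng.normal(0, 0.3, 300)

B = hinge_basis(x, knots=[2.0, 4.0, 6.0])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
yhat = B @ coef
rmse = np.sqrt(np.mean((y - yhat) ** 2))
```

Because the knot at 4 is in the basis, the fit captures the regime change that a single global polynomial would smear out.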

  19. A MULTIVARIATE FINITE MIXTURE LATENT TRAJECTORY MODEL WITH APPLICATION TO DEMENTIA STUDIES

    PubMed Central

    Lai, Dongbing; Xu, Huiping; Koller, Daniel; Foroud, Tatiana; Gao, Sujuan

    2016-01-01

Dementia patients exhibit considerable heterogeneity in individual trajectories of cognitive decline, with some patients showing rapid decline following diagnoses while others exhibit slower decline or remain stable for several years. Dementia studies often collect longitudinal measures of multiple neuropsychological tests aimed at measuring patients’ decline across a number of cognitive domains. We propose a multivariate finite mixture latent trajectory model to identify distinct longitudinal patterns of cognitive decline simultaneously in multiple cognitive domains, each of which is measured by multiple neuropsychological tests. An EM algorithm is used for parameter estimation, and posterior probabilities are used to predict latent class membership. We present results of a simulation study demonstrating adequate performance of our proposed approach and apply our model to the Uniform Data Set (UDS) from the National Alzheimer’s Coordinating Center (NACC) to identify cognitive decline patterns among dementia patients. PMID:27642206

  20. Bayesian latent variable modelling of multivariate spatio-temporal variation in cancer mortality.

    PubMed

    Tzala, Evangelia; Best, Nicky

    2008-02-01

In this article, three alternative Bayesian hierarchical latent factor models are described for spatially and temporally correlated multivariate health data. The fundamentals of factor analysis are combined with ideas of space-time disease mapping to provide a flexible framework for the joint analysis of multiple related diseases in space and time, with a view to estimating common and disease-specific trends in cancer risk. The models are applied to area-level mortality data on six diet-related cancers for Greece over the 20-year period from 1980 to 1999. The aim of this study is to uncover the spatial and temporal patterns of any latent factor(s) underlying the cancer data that could be interpreted as reflecting some aspects of the habitual diet of the Greek population.

  1. Bayesian meta-analysis for longitudinal data models using multivariate mixture priors.

    PubMed

    Lopes, Hedibert Freitas; Müller, Peter; Rosner, Gary L

    2003-03-01

    We propose a class of longitudinal data models with random effects that generalizes currently used models in two important ways. First, the random-effects model is a flexible mixture of multivariate normals, accommodating population heterogeneity, outliers, and nonlinearity in the regression on subject-specific covariates. Second, the model includes a hierarchical extension to allow for meta-analysis over related studies. The random-effects distributions are decomposed into one part that is common across all related studies (common measure), and one part that is specific to each study and that captures the variability intrinsic between patients within the same study. Both the common measure and the study-specific measures are parameterized as mixture-of-normals models. We carry out inference using reversible jump posterior simulation to allow a random number of terms in the mixtures. The sampler takes advantage of the small number of entertained models. The motivating application is the analysis of two studies carried out by the Cancer and Leukemia Group B (CALGB). In both studies, we record for each patient white blood cell counts (WBC) over time to characterize the toxic effects of treatment. The WBCs are modeled through a nonlinear hierarchical model that gathers the information from both studies.

  2. Investigation of time and weather effects on crash types using full Bayesian multivariate Poisson lognormal models.

    PubMed

    El-Basyouny, Karim; Barua, Sudip; Islam, Md Tazul

    2014-12-01

    Previous research shows that various weather elements have significant effects on crash occurrence and risk; however, little is known about how these elements affect different crash types. Consequently, this study investigates the impact of weather elements and sudden extreme snow or rain weather changes on crash type. Multivariate models were used for seven crash types using five years of daily weather and crash data collected for the entire City of Edmonton. In addition, the yearly trend and random variation of parameters across the years were analyzed by using four different modeling formulations. The proposed models were estimated in a full Bayesian context via Markov Chain Monte Carlo simulation. The multivariate Poisson lognormal model with yearly varying coefficients provided the best fit for the data according to Deviance Information Criteria. Overall, results showed that temperature and snowfall were statistically significant with intuitive signs (crashes decrease with increasing temperature; crashes increase as snowfall intensity increases) for all crash types, while rainfall was mostly insignificant. Previous snow showed mixed results, being statistically significant and positively related to certain crash types, while negatively related or insignificant in other cases. Maximum wind gust speed was found mostly insignificant with a few exceptions that were positively related to crash type. Major snow or rain events following a dry weather condition were highly significant and positively related to three crash types: Follow-Too-Close, Stop-Sign-Violation, and Ran-Off-Road crashes. The day-of-the-week dummy variables were statistically significant, indicating a possible weekly variation in exposure. Transportation authorities might use the above results to improve road safety by providing drivers with information regarding the risk of certain crash types for a particular weather condition.
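A much-simplified, frequentist stand-in for one margin of such a model is a single-outcome Poisson regression of daily crash counts on weather covariates (the paper's model is Bayesian, multivariate, and lognormal-mixed; the data and effect sizes below are synthetic):

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(7)
n = 1000  # days of synthetic data
temp = rng.normal(5, 10, n)        # daily mean temperature (deg C)
snow = rng.gamma(0.3, 2.0, n)      # daily snowfall (cm), mostly near zero
# Synthetic daily crash counts: fewer with warmth, more with snowfall
lam = np.exp(1.0 - 0.02 * temp + 0.08 * snow)
crashes = rng.poisson(lam)

X = np.column_stack([temp, snow])
glm = PoissonRegressor(alpha=0.0, max_iter=300).fit(X, crashes)
coefs = glm.coef_  # expected signs: negative for temperature, positive for snowfall
```

Recovering the intuitive signs reported in the abstract (crashes fall with temperature, rise with snowfall) is the basic sanity check for such a fit.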

  3. A Multivariate model for Monte-Carlo Simulation of Spatially and Temporally Correlated Daily Rainfall Intensities

    NASA Astrophysics Data System (ADS)

    Mok, C. M.; Suribhatla, R. M.; Wanakule, N.; Zhang, M.

    2009-12-01

A reliability-based water resources management framework has been developed by AMEC Geomatrix over the last few years to optimally manage a water supply system that serves over two million people in the northern Tampa Bay region in Florida, USA, while protecting wetland health and preventing seawater intrusion. The framework utilizes stochastic optimization techniques to account for uncertainties associated with the prediction of water demand, surface water availability, baseline groundwater levels, a non-anthropogenic reservoir water budget, and hydrological/hydrogeological properties. Except for the hydrogeological properties, these uncertainties are partially caused by uncertainties in future rainfall patterns in the region. We present here a novel multivariate statistical model of rainfall and a methodology for generating Monte-Carlo realizations based on the statistical model. The model is intended to capture spatial-temporal characteristics of daily rainfall intensity in 172 basins in the northern Tampa Bay region and is characterized by its high dimensionality. Daily rainfall intensity in each basin is expressed as product of a binary random variable (RV) corresponding to the event of rain and a continuous RV representing the amount of rain. For the binary RVs we use a bivariate transformation technique to generate the Monte-Carlo realizations that form the basis for sequential simulation of the continuous RVs. A non-parametric Gaussian copula is used to develop the multivariate model for continuous RVs. This methodology captures key spatial and temporal characteristics of daily rainfall intensities and overcomes numerical issues posed by high-dimensionality of the Gaussian copula.
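The Gaussian-copula step for the continuous rainfall amounts can be sketched in two basins (the correlation, gamma marginal parameters, and basin count below are assumptions for illustration, not the Tampa Bay model's fitted values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical rain-amount correlation between two neighbouring basins
R = np.array([[1.0, 0.7],
              [0.7, 1.0]])
L = np.linalg.cholesky(R)

n = 5000
z = rng.standard_normal((n, 2)) @ L.T   # correlated standard normals
u = stats.norm.cdf(z)                   # Gaussian copula: correlated uniforms
# Map to gamma marginals for daily rainfall amounts (shape/scale are assumptions)
rain = stats.gamma.ppf(u, a=0.8, scale=12.0)

rho = np.corrcoef(rain.T)[0, 1]
```

The copula preserves the cross-basin dependence while letting each basin keep its own skewed, non-negative marginal distribution.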

  4. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.
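A greatly simplified stand-in for such a term-reduction search is greedy forward selection of regression terms by BIC (this is not NASA's algorithm; the candidate terms and data below are synthetic):

```python
import numpy as np

def forward_select(candidates, y, max_terms=5):
    """Greedy forward selection of regression terms by BIC.
    candidates: dict name -> 1-D array of that term's values."""
    n = len(y)
    chosen, pool = [], dict(candidates)
    best_bic = np.inf
    while pool and len(chosen) < max_terms:
        scores = {}
        for name, col in pool.items():
            Xc = np.column_stack([candidates[c] for c in chosen]
                                 + [col, np.ones(n)])
            beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
            rss = np.sum((y - Xc @ beta) ** 2)
            scores[name] = n * np.log(rss / n) + Xc.shape[1] * np.log(n)
        name = min(scores, key=scores.get)
        if scores[name] >= best_bic:
            break  # no candidate improves BIC: stop to prevent overfitting
        best_bic = scores[name]
        chosen.append(name)
        pool.pop(name)
    return chosen

rng = np.random.default_rng(8)
x1, x2 = rng.standard_normal((2, 300))
y = 2.0 * x1 + 1.5 * x1 * x2 + rng.normal(0, 0.2, 300)
cands = {'x1': x1, 'x2': x2, 'x1*x2': x1 * x2, 'x1^2': x1 ** 2, 'x2^2': x2 ** 2}
terms = forward_select(cands, y)
```

The BIC penalty plays the role of the statistical quality constraints: terms that do not reduce residual error enough are never admitted into the model.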

  5. Variable selection in multivariate modeling of drug product formula and manufacturing process.

    PubMed

    Cui, Yong; Song, Xiling; Chuang, King; Venkatramani, Cadapakam; Lee, Sueanne; Gallegos, Gregory; Venkateshwaran, Thirunellai; Xie, Minli

    2012-12-01

    Multivariate data analysis methods such as partial least square (PLS) modeling have been increasingly applied to pharmaceutical product development. This study applied the PLS modeling to analyze a product development dataset generated from a design of experiment and historical batch data. Attention was paid in particular to the assessment of the importance of predictor variables, and subsequently the variable selection in the PLS modeling. The assessment indicated that irrelevant and collinear predictors could be extensively present in the initial PLS model. Therefore, variable selection is an important step in the optimization of the pharmaceutical product process model. The variable importance for projections (VIP) and coefficient values can be employed to rank the importance of predictors. On the basis of this ranking, the irrelevant predictors can be removed. To further reduce collinear predictors, multiple rounds of PLS modeling on different combinations of predictors may be necessary. To this end, stepwise reduction of predictors based on their VIP/coefficient ranking was introduced and was proven to be an effective approach to identify and remove redundant collinear predictors. Overall, the study demonstrated that the variable selection procedure implemented herein can effectively evaluate the importance of variables and optimize models of drug product processes.

  6. On the Numerical Formulation of Parametric Linear Fractional Transformation (LFT) Uncertainty Models for Multivariate Matrix Polynomial Problems

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    1998-01-01

Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.

  7. Groundwater source contamination mechanisms: Physicochemical profile clustering, risk factor analysis and multivariate modelling

    NASA Astrophysics Data System (ADS)

    Hynds, Paul; Misstear, Bruce D.; Gill, Laurence W.; Murphy, Heather M.

    2014-04-01

    An integrated domestic well sampling and "susceptibility assessment" programme was undertaken in the Republic of Ireland from April 2008 to November 2010. Overall, 211 domestic wells were sampled, assessed and collated with local climate data. Based upon groundwater physicochemical profile, three clusters have been identified and characterised by source type (borehole or hand-dug well) and local geological setting. Statistical analysis indicates that cluster membership is significantly associated with the prevalence of bacteria (p = 0.001), with mean Escherichia coli presence within clusters ranging from 15.4% (Cluster-1) to 47.6% (Cluster-3). Bivariate risk factor analysis shows that on-site septic tank presence was the only risk factor significantly associated (p < 0.05) with bacterial presence within all clusters. Point agriculture adjacency was significantly associated with both borehole-related clusters. Well design criteria were associated with hand-dug wells and boreholes in areas characterised by high permeability subsoils, while local geological setting was significant for hand-dug wells and boreholes in areas dominated by low/moderate permeability subsoils. Multivariate susceptibility models were developed for all clusters, with predictive accuracies of 84% (Cluster-1) to 91% (Cluster-2) achieved. Septic tank setback was a common variable within all multivariate models, while agricultural sources were also significant, albeit to a lesser degree. Furthermore, well liner clearance was a significant factor in all models, indicating that direct surface ingress is a significant well contamination mechanism. Identification and elucidation of cluster-specific contamination mechanisms may be used to develop improved overall risk management and wellhead protection strategies, while also informing future remediation and maintenance efforts.
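The physicochemical-profile clustering step can be sketched with k-means on standardized profiles; the three variables, cluster centers, and group sizes below are synthetic assumptions, with only the sample size taken from the study:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
# Hypothetical physicochemical profiles per well:
# conductivity (uS/cm), hardness (mg/l CaCO3), pH; 211 wells as in the study
profiles = np.vstack([
    rng.normal([300, 120, 6.8], [40, 15, 0.2], (90, 3)),
    rng.normal([550, 260, 7.4], [50, 20, 0.2], (70, 3)),
    rng.normal([800, 150, 7.0], [60, 15, 0.2], (51, 3)),
])
# Standardize so no single variable dominates the Euclidean distances
Z = StandardScaler().fit_transform(profiles)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
sizes = np.bincount(labels)
```

Cluster membership found this way can then be cross-tabulated against E. coli presence and candidate risk factors, as in the study's bivariate analysis.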

  8. Groundwater source contamination mechanisms: physicochemical profile clustering, risk factor analysis and multivariate modelling.

    PubMed

    Hynds, Paul; Misstear, Bruce D; Gill, Laurence W; Murphy, Heather M

    2014-04-01

    An integrated domestic well sampling and "susceptibility assessment" programme was undertaken in the Republic of Ireland from April 2008 to November 2010. Overall, 211 domestic wells were sampled, assessed and collated with local climate data. Based upon groundwater physicochemical profile, three clusters have been identified and characterised by source type (borehole or hand-dug well) and local geological setting. Statistical analysis indicates that cluster membership is significantly associated with the prevalence of bacteria (p=0.001), with mean Escherichia coli presence within clusters ranging from 15.4% (Cluster-1) to 47.6% (Cluster-3). Bivariate risk factor analysis shows that on-site septic tank presence was the only risk factor significantly associated (p<0.05) with bacterial presence within all clusters. Point agriculture adjacency was significantly associated with both borehole-related clusters. Well design criteria were associated with hand-dug wells and boreholes in areas characterised by high permeability subsoils, while local geological setting was significant for hand-dug wells and boreholes in areas dominated by low/moderate permeability subsoils. Multivariate susceptibility models were developed for all clusters, with predictive accuracies of 84% (Cluster-1) to 91% (Cluster-2) achieved. Septic tank setback was a common variable within all multivariate models, while agricultural sources were also significant, albeit to a lesser degree. Furthermore, well liner clearance was a significant factor in all models, indicating that direct surface ingress is a significant well contamination mechanism. Identification and elucidation of cluster-specific contamination mechanisms may be used to develop improved overall risk management and wellhead protection strategies, while also informing future remediation and maintenance efforts.

  9. Multi-variate spatial explicit constraining of a large scale hydrological model

    NASA Astrophysics Data System (ADS)

    Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis

    2016-04-01

    Increased availability and quality of near real-time data should lead to a better understanding of the predictive skills of distributed hydrological models. Nevertheless, prediction of regional-scale water fluxes and states remains a great challenge to the scientific community. Large-scale hydrological models are used for prediction of soil moisture, evapotranspiration and other related water states and fluxes. They are usually properly constrained against river discharge, which is an integral variable. Rakovec et al (2016) recently demonstrated that constraining model parameters against river discharge is a necessary, but not sufficient, condition. Therefore, we further aim at scrutinizing the appropriate incorporation of readily available information into a hydrological model that may help to improve the realism of hydrological processes. It is important to analyze how complementary datasets, besides observed streamflow and related signature measures, can improve model skill for internal model variables during parameter estimation. Products suitable for further scrutiny include, for example, the GRACE satellite observations. Recent developments in using this dataset in a multivariate fashion to complement traditionally used streamflow data within the distributed model mHM (www.ufz.de/mhm) are presented. The study domain consists of 80 European basins, which cover a wide range of distinct physiographic and hydrologic regimes. A first-order data quality check ensures that heavily human-influenced basins are eliminated. For river discharge simulations we show that model performance remains unchanged when complemented by information from the GRACE product (at both daily and monthly time steps). Moreover, the GRACE complementary data lead to consistent and statistically significant improvements in evapotranspiration estimates, which are evaluated using an independent gridded FLUXNET product. We also show that the choice of the objective function used to estimate

  10. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  11. An empirical comparison of information-theoretic selection criteria for multivariate behavior genetic models.

    PubMed

    Markon, Kristian E; Krueger, Robert F

    2004-11-01

    Information theory provides an attractive basis for statistical inference and model selection. However, little is known about the relative performance of different information-theoretic criteria in covariance structure modeling, especially in behavioral genetic contexts. To explore these issues, information-theoretic fit criteria were compared with regard to their ability to discriminate between multivariate behavioral genetic models under various model, distribution, and sample size conditions. Results indicate that performance depends on sample size, model complexity, and distributional specification. The Bayesian Information Criterion (BIC) is more robust to distributional misspecification than Akaike's Information Criterion (AIC) under certain conditions, and outperforms AIC in larger samples and when comparing more complex models. An approximation to the Minimum Description Length (MDL; Rissanen, J. (1996). IEEE Transactions on Information Theory 42:40-47, Rissanen, J. (2001). IEEE Transactions on Information Theory 47:1712-1717) criterion, involving the empirical Fisher information matrix, exhibits variable patterns of performance due to the complexity of estimating Fisher information matrices. Results indicate that a relatively new information-theoretic criterion, Draper's Information Criterion (DIC; Draper, 1995), which shares features of the Bayesian and MDL criteria, performs similarly to or better than BIC. Results emphasize the importance of further research into theory and computation of information-theoretic criteria.
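Akaike's and the Bayesian information criteria compared above differ chiefly in how they penalize model complexity. As a minimal sketch (the log-likelihoods, parameter counts, and sample size below are hypothetical, not values from the study), note that BIC's penalty grows with sample size while AIC's stays fixed at 2 per parameter, so the two criteria can disagree:

```python
import numpy as np

def aic(loglik, k):
    """Akaike's Information Criterion: fixed penalty of 2 per parameter."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian Information Criterion: penalty grows as log(sample size)."""
    return -2.0 * loglik + k * np.log(n)

# Hypothetical fits: a simpler model and a richer one with a better likelihood.
ll_simple, k_simple = -520.0, 4
ll_complex, k_complex = -512.0, 9
n = 200

print(aic(ll_simple, k_simple), aic(ll_complex, k_complex))        # AIC picks the richer model
print(bic(ll_simple, k_simple, n), bic(ll_complex, k_complex, n))  # BIC picks the simpler one
```

With these made-up numbers AIC favors the richer model (1042.0 vs. 1048.0) while BIC favors the simpler one, illustrating BIC's stronger preference for parsimony in larger samples.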

  12. Quantifying effects of abiotic and biotic drivers on community dynamics with multivariate autoregressive (MAR) models.

    PubMed

    Hampton, Stephanie E; Holmes, Elizabeth E; Scheef, Lindsay P; Scheuerell, Mark D; Katz, Stephen L; Pendleton, Daniel E; Ward, Eric J

    2013-12-01

    Long-term ecological data sets present opportunities for identifying drivers of community dynamics and quantifying their effects through time series analysis. Multivariate autoregressive (MAR) models are well known in many other disciplines, such as econometrics, but widespread adoption of MAR methods in ecology and natural resource management has been much slower despite some widely cited ecological examples. Here we review previous ecological applications of MAR models and highlight their ability to identify abiotic and biotic drivers of population dynamics, as well as community-level stability metrics, from long-term empirical observations. Thus far, MAR models have been used mainly with data from freshwater plankton communities; we examine the obstacles that may be hindering adoption in other systems and suggest practical modifications that will improve MAR models for broader application. Many of these modifications are already well known in other fields in which MAR models are common, although they are frequently described under different names. In an effort to make MAR models more accessible to ecologists, we include a worked example using recently developed R packages (MAR1 and MARSS), freely available and open-access software.
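The worked example in the paper uses the R packages MAR1 and MARSS; purely as an illustrative sketch (simulated data, not from any ecological study), the core MAR(1) estimation step can be reproduced with ordinary conditional least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# True interaction matrix for a hypothetical 2-taxon community (stationary).
B_true = np.array([[0.6, -0.2],
                   [0.1,  0.5]])
T = 2000
x = np.zeros((T, 2))
for t in range(1, T):
    # MAR(1) dynamics: x_t = B x_{t-1} + process noise
    x[t] = B_true @ x[t - 1] + rng.normal(scale=0.1, size=2)

# Conditional least squares: regress x_t on x_{t-1}.
X_lag, X_now = x[:-1], x[1:]
coef, *_ = np.linalg.lstsq(X_lag, X_now, rcond=None)
B_hat = coef.T  # lstsq solves X_lag @ coef = X_now; transpose to recover B

print(np.round(B_hat, 2))
```

Real applications (as in MAR1/MARSS) add observation error, covariates, and constraints on the interaction matrix, but this regression is the basic building block behind the community-stability metrics the review discusses.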

  13. Bayesian analysis of a multivariate null intercept errors-in-variables regression model.

    PubMed

    Aoki, Reiko; Bolfarine, Heleno; Achcar, Jorge A; Dorival, Leão P Júnior

    2003-11-01

    Longitudinal data are of great interest in the analysis of clinical trials. In many practical situations the covariate cannot be measured precisely, and a natural alternative is the errors-in-variables regression model. In this paper we study a null intercept errors-in-variables regression model with a structure of dependency between the response variables within the same group. We apply the model to real data presented in Hadgu and Koch (Hadgu, A., Koch, G. (1999). Application of generalized estimating equations to a dental randomized clinical trial. J. Biopharmaceutical Statistics 9(1):161-178). In that study, volunteers with preexisting dental plaque were randomized, with double blinding, to two experimental mouth rinses (A and B) or a control mouth rinse. The dental plaque index was measured for each subject at the beginning of the study and at two follow-up times, which leads to the presence of an interclass correlation. We propose a Bayesian approach to fitting a multivariate null intercept errors-in-variables regression model to the longitudinal data. The proposed Bayesian approach accommodates the correlated measurements and incorporates the restriction that the slopes must lie in the (0, 1) interval. A Gibbs sampler is used to perform the computations.

  14. Modeling Multi-Variate Gaussian Distributions and Analysis of Higgs Boson Couplings with the ATLAS Detector

    NASA Astrophysics Data System (ADS)

    Krohn, Olivia; Armbruster, Aaron; Gao, Yongsheng; Atlas Collaboration

    2017-01-01

    Software tools developed for the purpose of modeling CERN LHC pp collision data to aid in its interpretation are presented. Some measurements are not adequately described by a Gaussian distribution; thus an interpretation assuming Gaussian uncertainties will inevitably introduce bias, necessitating analytical tools to recreate and evaluate non-Gaussian features. One example is the measurements of Higgs boson production rates in different decay channels, and the interpretation of these measurements. The ratios of data to Standard Model expectations (μ) for five arbitrary signals were modeled by building five Poisson distributions with mixed signal contributions such that the measured values of μ are correlated. Algorithms were designed to recreate probability distribution functions of μ as multi-variate Gaussians, where the standard deviation (σ) and correlation coefficients (ρ) are parametrized. There was good success with modeling 1-D likelihood contours of μ, and the multi-dimensional distributions were well modeled within 1-σ, but the model began to diverge after 2-σ due to unmerited assumptions in developing ρ. Future plans to improve the algorithms and develop a user-friendly analysis package will also be discussed. NSF International Research Experiences for Students
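As a hedged sketch of the kind of parametrization described above (all numbers hypothetical, not ATLAS results), a multivariate Gaussian approximation of a likelihood can be assembled from per-signal standard deviations σ and a correlation matrix ρ:

```python
import numpy as np

# Hypothetical best-fit signal strengths, uncertainties, and correlations.
mu_hat = np.array([1.1, 0.9, 1.0])
sigma  = np.array([0.2, 0.3, 0.25])
rho = np.array([[1.0, 0.4, 0.1],
                [0.4, 1.0, 0.3],
                [0.1, 0.3, 1.0]])

# Covariance matrix: cov_ij = sigma_i * sigma_j * rho_ij
cov = np.outer(sigma, sigma) * rho

def gauss_nll(mu, mu_hat, cov):
    """Quadratic -2 ln L of a multivariate Gaussian likelihood approximation."""
    d = mu - mu_hat
    return float(d @ np.linalg.solve(cov, d))

print(gauss_nll(mu_hat, mu_hat, cov))  # zero at the best-fit point
```

The quadratic form is exact only for a truly Gaussian likelihood; as the abstract notes, beyond about 2σ such an approximation can diverge from the true contours.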

  15. Influence of temperature on vibrational spectra and consequences for the predictive ability of multivariate models.

    PubMed

    Wülfert, F; Kok, W T; Smilde, A K

    1998-05-01

    Temperature, pressure, viscosity, and other process variables fluctuate during an industrial process. When vibrational spectra are measured on- or in-line for process analytical and control purposes, these fluctuations influence the shape of the spectra in a nonlinear manner. The influence of these temperature-induced spectral variations on the predictive ability of multivariate calibration models is assessed. Short-wave NIR spectra of ethanol/water/2-propanol mixtures are taken at different temperatures, and different local and global partial least-squares calibration strategies are applied. The resulting prediction errors and sensitivity vectors of a test set are compared. For data with no temperature variation, the local models perform best, with high sensitivity; however, knowledge of the temperature of prediction measurements does not improve local model predictions once temperature variation is introduced. The prediction errors of global models are considerably lower when temperature variation is present in the data set, but at the expense of sensitivity. To build temperature-stable calibration models with high sensitivity, a way of explicitly modeling the temperature should be found.

  16. Web-based tools for modelling and analysis of multivariate data: California ozone pollution activity

    NASA Astrophysics Data System (ADS)

    Dinov, Ivo D.; Christou, Nicolas

    2011-09-01

    This article presents a hands-on web-based activity motivated by the relation between human health and ozone pollution in California. This case study is based on multivariate data collected monthly at 20 locations in California between 1980 and 2006. Several strategies and tools for data interrogation and exploratory data analysis, model fitting and statistical inference on these data are presented. All components of this case study (data, tools, activity) are freely available online at: http://wiki.stat.ucla.edu/socr/index.php/SOCR_MotionCharts_CAOzoneData. Several types of exploratory (motion charts, box-and-whisker plots, spider charts) and quantitative (inference, regression, analysis of variance (ANOVA)) data analyses tools are demonstrated. Two specific human health related questions (temporal and geographic effects of ozone pollution) are discussed as motivational challenges.

  17. Application of multivariate storage model to quantify trends in seasonally frozen soil

    NASA Astrophysics Data System (ADS)

    Woody, Jonathan; Wang, Yan; Dyer, Jamie

    2016-06-01

    This article presents a study of the ground thermal regime recorded at 11 stations in the North Dakota Agricultural Network. Particular focus is placed on detecting trends in the annual ground-freeze portion of the ground thermal regime's daily temperature signature. A multivariate storage model from queueing theory is fitted to estimated daily depths of frozen soil. Statistical inference on a trend parameter is obtained by minimizing a weighted sum of squares of a sequence of daily one-step-ahead predictions. Standard errors for the trend estimates are presented. It is shown that the daily quantity of frozen ground experienced at these 11 sites exhibited a negative trend over the observation period.

  18. Flight investigation of a multivariable model-following control system for rotorcraft

    NASA Technical Reports Server (NTRS)

    Hilbert, K. B.; Lebacqz, J. V.; Hindson, W. S.

    1986-01-01

    A high-bandwidth, multivariable, explicit model-following control system for advanced rotorcraft has been developed and evaluated on the NASA Ames CH-47B fly-by-wire helicopter. This control system has expanded the in-flight simulation capabilities of the CH-47B to support research efforts directed at the next generation of superaugmented helicopters. A detailed analytical model of the augmented CH-47B has also been developed to support the flight tests. Analysis with this theoretical model exposed fundamental limitations, caused by the basic vehicle characteristics and the original control system implementation, that had affected the performance of the model-following control system. Improvements were made to the nominal control system design to compensate for large time delays created by the higher-order dynamics of the aircraft and its control system. With these improvements, high-bandwidth control and excellent model-following performance were achieved. Both analytical and flight-test results for the lateral axis are presented and compared. In addition, frequency-domain techniques are employed for documenting the system performance.

  19. Analysis of pelagic species decline in the upper San Francisco Estuary using multivariate autoregressive modeling (MAR)

    USGS Publications Warehouse

    Mac Nally, Ralph; Thomson, James R.; Kimmerer, Wim J.; Feyrer, Frederick; Newman, Ken B.; Sih, Andy; Bennett, William A.; Brown, Larry; Fleishman, Erica; Culberson, Steven D.; Castillo, Gonzalo

    2010-01-01

    Four species of pelagic fish of particular management concern in the upper San Francisco Estuary, California, USA, have declined precipitously since ca. 2002: delta smelt (Hypomesus transpacificus), longfin smelt (Spirinchus thaleichthys), striped bass (Morone saxatilis), and threadfin shad (Dorosoma petenense). The estuary has been monitored since the late 1960s with extensive collection of data on the fishes, their pelagic prey, phytoplankton biomass, invasive species, and physical factors. We used multivariate autoregressive (MAR) modeling to discern the main factors responsible for the declines. An expert-elicited model was built to describe the system. Fifty-four relationships were built into the model, only one of which was of uncertain direction a priori. Twenty-eight of the proposed relationships were strongly supported by or consistent with the data, while 26 were close to zero (not supported by the data but not contrary to expectations). The position of the 2‰ isohaline (a measure of the physical response of the estuary to freshwater flow) and increased water clarity over the period of analyses were two factors affecting multiple declining taxa (including fishes and the fishes' main zooplankton prey). Our results were relatively robust with respect to the form of stock–recruitment model used and to inclusion of subsidiary covariates but may be enhanced by using detailed state–space models that describe more fully the life-history dynamics of the declining species.

  20. EEG source localization based on multivariate autoregressive models using Kalman filtering.

    PubMed

    Padilla-Buriticá, J I; Giraldo, E; Castellanos-Domínguez, G

    2011-01-01

    The estimation of current distributions from electroencephalographic recordings poses an inverse problem, which can be solved approximately by including dynamical models as spatio-temporal constraints on the solution. In this paper, we consider the electroencephalographic source localization task, where a specific structure for the dynamical model of the current distribution is obtained directly from the data by fitting multivariate autoregressive (MVAR) models to electroencephalographic time series. Whereas previous approaches consider an approximation of the internal connectivity of the sources, the proposed methodology takes into account a realistic structure of the model estimated from the data, so that improved inverse solutions become possible. The performance of the new method is demonstrated by application to simulated electroencephalographic data over several signal-to-noise ratios, where the source localization task is evaluated using the localization error and the data-fit error. Finally, it is shown that estimating MVAR models makes it possible to obtain inverse solutions of considerably improved quality, as compared to the usual instantaneous inverse solutions, even if the Tikhonov regularized inverse is used.
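The Kalman filter underlying this kind of approach alternates a model-based prediction step with a measurement update. The following is a generic linear Kalman filter sketch applied to a toy scalar AR(1) state observed in noise (hypothetical parameters, not the paper's EEG model):

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Minimal linear Kalman filter for x_t = A x_{t-1} + w,  y_t = C x_t + v."""
    x, P = x0, P0
    xs = []
    for yt in y:
        # Predict: propagate state and covariance through the dynamics.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update: correct with the measurement via the Kalman gain.
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (yt - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        xs.append(x.copy())
    return np.array(xs)

# Toy demo: scalar AR(1) state observed with additive noise.
rng = np.random.default_rng(1)
T, a = 300, 0.9
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + rng.normal(scale=0.5)
y = x_true + rng.normal(scale=1.0, size=T)

est = kalman_filter(y.reshape(-1, 1),
                    A=np.array([[a]]), C=np.array([[1.0]]),
                    Q=np.array([[0.25]]), R=np.array([[1.0]]),
                    x0=np.zeros(1), P0=np.eye(1))
```

In the paper's setting, the transition matrix A is not assumed but fitted from the EEG data as an MVAR model; here a known scalar dynamic keeps the sketch self-contained.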

  1. Improved modeling of multivariate measurement errors based on the Wishart distribution.

    PubMed

    Wentzell, Peter D; Cleary, Cody S; Kompany-Zareh, M

    2017-03-22

    The error covariance matrix (ECM) is an important tool for characterizing the errors from multivariate measurements, representing both the variance and covariance in the errors across multiple channels. Such information is useful in understanding and minimizing sources of experimental error and in the selection of optimal data analysis procedures. Experimental ECMs, normally obtained through replication, are inherently noisy, inconvenient to obtain, and offer limited interpretability. Significant advantages can be realized by building a model for the ECM based on established error types. Such models are less noisy, reduce the need for replication, mitigate mathematical complications such as matrix singularity, and provide greater insights. While the fitting of ECM models using least squares has been previously proposed, the present work establishes that fitting based on the Wishart distribution offers a much better approach. Simulation studies show that the Wishart method results in parameter estimates with a smaller variance and also facilitates the statistical testing of alternative models using a parameterized bootstrap method. The new approach is applied to fluorescence emission data to establish the acceptability of various models containing error terms related to offset, multiplicative offset, shot noise and uniform independent noise. The implications of the number of replicates, as well as single vs. multiple replicate sets, are also described.
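To make the ECM concrete, the following sketch (simulated replicates, hypothetical noise magnitudes; fitted by construction rather than by the paper's Wishart method) computes an experimental ECM from replication and compares it with a simple two-term model containing a correlated offset term plus independent noise:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate replicate spectra: true signal + offset noise + independent noise.
n_rep, n_chan = 50, 8
true = np.sin(np.linspace(0, np.pi, n_chan))
offset = rng.normal(scale=0.05, size=(n_rep, 1))    # shared across channels
iid = rng.normal(scale=0.02, size=(n_rep, n_chan))  # independent per channel
X = true + offset + iid

# Experimental ECM from replication (channels as variables).
ecm = np.cov(X, rowvar=False)

# A two-parameter model ECM: offset variance + independent variance.
ecm_model = 0.05**2 * np.ones((n_chan, n_chan)) + 0.02**2 * np.eye(n_chan)

print(np.round(np.diag(ecm), 4))  # noisy estimates of the model diagonal 0.0029
```

The offset term fills the off-diagonals with a common positive covariance, which is exactly the kind of structure that replication-based ECMs reveal and parametric ECM models summarize.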

  2. Analysis of heterogeneous dengue transmission in Guangdong in 2014 with multivariate time series model

    PubMed Central

    Cheng, Qing; Lu, Xin; Wu, Joseph T.; Liu, Zhong; Huang, Jincai

    2016-01-01

    Guangdong experienced the largest dengue epidemic in recent history. In 2014, the number of dengue cases was the highest in the previous 10 years and comprised more than 90% of all cases. In order to analyze heterogeneous transmission of dengue, a multivariate time series model decomposing dengue risk additively into endemic, autoregressive and spatiotemporal components was used to model dengue transmission. Moreover, random effects were introduced in the model to deal with heterogeneous dengue transmission and incidence levels, and a power-law approach was embedded into the model to account for spatial interaction. There was little spatial variation in the autoregressive component. In contrast, for the endemic component, there was a pronounced heterogeneity between the Pearl River Delta area and the remaining districts. For the spatiotemporal component, there was considerable heterogeneity across districts, with the highest values in some western and eastern districts. Clustering analysis revealed the patterns driving dengue transmission: the endemic component appears to be important in the Pearl River Delta area, where incidence is high (95 per 100,000), while areas with relatively low incidence (4 per 100,000) are highly dependent on spatiotemporal spread and local autoregression. PMID:27666657

  3. Feature extraction and classification in surface grading application using multivariate statistical projection models

    NASA Astrophysics Data System (ADS)

    Prats-Montalbán, José M.; López, Fernando; Valiente, José M.; Ferrer, Alberto

    2007-01-01

    In this paper we present an innovative way to simultaneously perform feature extraction and classification for the quality control issue of surface grading by applying two well known multivariate statistical projection tools (SIMCA and PLS-DA). These tools have been applied to compress the color texture data describing the visual appearance of surfaces (soft color texture descriptors) and to directly perform classification using statistics and predictions computed from the extracted projection models. Experiments have been carried out using an extensive image database of ceramic tiles (VxC TSG). This image database is comprised of 14 different models, 42 surface classes and 960 pieces. A factorial experimental design has been carried out to evaluate all the combinations of several factors affecting the accuracy rate. Factors include tile model, color representation scheme (CIE Lab, CIE Luv and RGB) and compression/classification approach (SIMCA and PLS-DA). In addition, a logistic regression model is fitted from the experiments to compute accuracy estimates and study the factors effect. The results show that PLS-DA performs better than SIMCA, achieving a mean accuracy rate of 98.95%. These results outperform those obtained in a previous work where the soft color texture descriptors in combination with the CIE Lab color space and the k-NN classifier achieved a 97.36% of accuracy.

  4. A New Approach to Identifying the Drivers of Regulation Compliance Using Multivariate Behavioural Models

    PubMed Central

    Thomas, Alyssa S.; Milfont, Taciano L.; Gavin, Michael C.

    2016-01-01

    Non-compliance with fishing regulations can undermine management effectiveness. Previous bivariate approaches were unable to untangle the complex mix of factors that may influence fishers’ compliance decisions, including enforcement, moral norms, perceived legitimacy of regulations and the behaviour of others. We compared seven multivariate behavioural models of fisher compliance decisions using structural equation modeling. An online survey of over 300 recreational fishers tested the ability of each model to best predict their compliance with two fishing regulations (daily and size limits). The best fitting model for both regulations was composed solely of psycho-social factors, with social norms having the greatest influence on fishers’ compliance behaviour. Fishers’ attitude also directly affected compliance with size limit, but to a lesser extent. On the basis of these findings, we suggest behavioural interventions to target social norms instead of increasing enforcement for the focal regulations in the recreational blue cod fishery in the Marlborough Sounds, New Zealand. These interventions could include articles in local newspapers and fishing magazines highlighting the extent of regulation compliance as well as using respected local fishers to emphasize the benefits of compliance through public meetings or letters to the editor. Our methodological approach can be broadly applied by natural resource managers as an effective tool to identify drivers of compliance that can then guide the design of interventions to decrease illegal resource use. PMID:27727292

  5. Multivariate mixed linear model analysis of longitudinal data: an information-rich statistical technique for analyzing disease resistance data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...

  6. Multivariate normality

    NASA Technical Reports Server (NTRS)

    Crutcher, H. L.; Falls, L. W.

    1976-01-01

    Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
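One standard route to the multivariate extension of such chi-square checks relies on the fact that, for p-variate normal data, squared Mahalanobis distances are approximately chi-square distributed with p degrees of freedom. A quick moment-based sketch (simulated data, not the recorded data used in the report):

```python
import numpy as np

rng = np.random.default_rng(3)

# Sample from a 3-D normal with correlated components.
p, n = 3, 500
L = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, 0.3, 1.0]])
X = rng.normal(size=(n, p)) @ L.T  # covariance L @ L.T

# Squared Mahalanobis distances about the sample mean.
mu = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', X - mu, S_inv, X - mu)

# Under multivariate normality, d2 ~ chi-square with p d.o.f.
# (mean p, variance 2p) -- a quick moment check:
print(d2.mean(), d2.var())
```

Gross departures of these moments (or of the empirical quantiles) from the chi-square reference are the multivariate analogue of a failed univariate chi-square goodness-of-fit test.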

  7. Multivariate mixed linear model analysis of longitudinal data: an information-rich statistical technique for analyzing plant disease resistance.

    PubMed

    Veturi, Yogasudha; Kump, Kristen; Walsh, Ellie; Ott, Oliver; Poland, Jesse; Kolkman, Judith M; Balint-Kurti, Peter J; Holland, James B; Wisser, Randall J

    2012-11-01

    The mixed linear model (MLM) is an advanced statistical technique applicable to many fields of science. The multivariate MLM can be used to model longitudinal data, such as repeated ratings of disease resistance taken across time. In this study, using an example data set from a multi-environment trial of northern leaf blight disease on 290 maize lines with diverse levels of resistance, multivariate MLM analysis was performed and its utility was examined. In the population and environments tested, genotypic effects were highly correlated across disease ratings and followed an autoregressive pattern of correlation decay. Because longitudinal data are often converted to the univariate measure of area under the disease progress curve (AUDPC), comparisons between univariate MLM analysis of AUDPC and multivariate MLM analysis of longitudinal data were made. Univariate analysis had the advantage of simplicity and reduced computational demand, whereas multivariate analysis enabled a comprehensive perspective on disease development, providing the opportunity for unique insights into disease resistance. To aid in the application of multivariate MLM analysis of longitudinal data on disease resistance, annotated program syntax for model fitting is provided for the software ASReml.
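As a point of reference for the univariate summary discussed above, AUDPC is simply the trapezoid-rule area under repeated disease ratings plotted against time. A minimal sketch with hypothetical ratings:

```python
import numpy as np

def audpc(times, ratings):
    """Area under the disease progress curve via the trapezoid rule."""
    t = np.asarray(times, dtype=float)
    y = np.asarray(ratings, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(t)))

# Hypothetical severity ratings (%) at four assessment dates (days).
days = [0, 7, 14, 21]
severity = [0.0, 5.0, 20.0, 50.0]
print(audpc(days, severity))  # 17.5 + 87.5 + 245.0 = 350.0
```

Collapsing the four ratings into this single number is what makes the univariate analysis simple, and also what discards the temporal correlation structure that the multivariate MLM retains.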

  8. Improving the realism of hydrologic model functioning through multivariate parameter estimation

    NASA Astrophysics Data System (ADS)

    Rakovec, O.; Kumar, R.; Attinger, S.; Samaniego, L.

    2016-10-01

    Increased availability and quality of near real-time observations provide the opportunity to improve understanding of predictive skills of hydrologic models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with the aim to improve the fidelity of mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted on 83 European basins covering a wide range of hydroclimatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow period, and (2) evapotranspiration estimates which are evaluated against independent data (FLUXNET). Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed by in-/exclusion of TWS during the parameter estimation. Introducing monthly averaged TWS data only improves the dynamics of streamflow on monthly or longer time scales, which mostly addresses the dynamical behavior of the base flow reservoir. A cross-evaluation test carried out to assess the transferability of the calibrated parameters to other locations further confirms the benefit of complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value for incorporating multiple data sources during parameter estimation to improve the overall realism of hydrologic models and their applications over large domains.

  9. Investigating Causality Between Interacting Brain Areas with Multivariate Autoregressive Models of MEG Sensor Data

    PubMed Central

    Michalareas, George; Schoffelen, Jan-Mathijs; Paterson, Gavin; Gross, Joachim

    2013-01-01

    Abstract In this work, we investigate the feasibility of estimating causal interactions between brain regions based on multivariate autoregressive (MAR) models fitted to magnetoencephalographic (MEG) sensor measurements. We first demonstrate the theoretical feasibility of estimating source-level causal interactions after projection of the sensor-level model coefficients onto the locations of the neural sources. Next, we show with simulated MEG data that causality, as measured by partial directed coherence (PDC), can be correctly reconstructed if the locations of the interacting brain areas are known. We further demonstrate that, if a very large number of brain voxels is considered as potential activation sources, PDC becomes a less accurate measure of causal interactions; in that case, the MAR model coefficients alone still contain meaningful causality information. The proposed method overcomes the problems of model non-robustness and long computation times encountered during causality analysis by existing methods, which first project MEG sensor time series onto a large number of brain locations and then build the MAR model on this large number of source-level time series. Instead, we demonstrate that by building the MAR model at the sensor level and then projecting only the MAR coefficients into source space, the true causal pathways are recovered even when a very large number of locations are considered as sources. The main contribution of this work is that with this methodology entire-brain causality maps can be efficiently derived without any a priori selection of regions of interest. Hum Brain Mapp, 2013. © 2012 Wiley Periodicals, Inc. PMID:22328419
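
    The pipeline described above — fit a MAR model to multichannel data, then read causal direction off the coefficients via PDC — can be sketched compactly. This is a minimal illustration on simulated two-channel data, not the authors' implementation; the VAR(1) system, noise level, and evaluation frequency are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 2-channel VAR(1) process in which channel 0 drives channel 1.
A_true = np.array([[0.5, 0.0],
                   [0.4, 0.3]])   # A[i, j]: influence of x_j(t-1) on x_i(t)
T = 2000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.normal(scale=0.1, size=2)

# Least-squares fit of the MAR(1) coefficients: x(t) ≈ A x(t-1).
B, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
A_hat = B.T

# Partial directed coherence at normalized frequency f:
#   Abar(f) = I - A_hat e^{-2πif};  PDC_ij(f) = |Abar_ij| / ||column j of Abar||
def pdc(A, f):
    Abar = np.eye(A.shape[0]) - A * np.exp(-2j * np.pi * f)
    return np.abs(Abar) / np.sqrt((np.abs(Abar) ** 2).sum(axis=0))

P = pdc(A_hat, 0.1)   # P[1, 0]: strength of the 0 -> 1 causal link
```

    With this setup the unidirectional coupling shows up as P[1, 0] clearly exceeding the reverse direction P[0, 1].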

  10. Multivariate modelling to study the effect of the manufacturing process on the complete tablet dissolution profile.

    PubMed

    Dumarey, Melanie; Goodwin, Daniel J; Davison, Chris

    2015-01-01

    Dissolution is invariably identified as a critical quality attribute for oral solid dosage forms, since it determines when a drug becomes available for absorption and can ultimately exert its effect. In this paper, the influence of granule and compression variability, introduced by a design of experiments, on the entire dissolution profile was studied with an innovative multivariate tool: bidirectional orthogonal projections to latent structures (O2PLS). This method enabled a more holistic process understanding compared to conventional approaches in which only a single response is used to quantify dissolution. The O2PLS analysis of tablet manufacturing data showed that the disintegration phase of dissolution (10-15 min) was controlled by granule attributes and tablet hardness, while the later phase (15-30 min) was solely controlled by granule attributes. The bidirectional nature of the O2PLS model made it well suited for exploratory purposes, but decreased its predictive ability. This approach does not require prior knowledge of the dissolution mechanism and is therefore particularly suited for exploratory studies gaining process understanding during early-phase development. The outcome can then guide the selection of attributes, parameters and their ranges for the development of predictive models, e.g., models to define a suitable design space for the process.

  11. Air quality modeling in the Oviedo urban area (NW Spain) by using multivariate adaptive regression splines.

    PubMed

    Nieto, P J García; Antón, J C Álvarez; Vilán, J A Vilán; García-Gonzalo, E

    2015-05-01

    The aim of this research work is to build a regression model of air quality by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (northern Spain) at a local scale. To accomplish the objective of this study, an experimental data set of nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3), and dust (PM10) was collected over 3 years (2006-2008). The US National Ambient Air Quality Standards (NAAQS) establish limit values for the main pollutants in the atmosphere in order to protect human health. Firstly, this MARS regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of these numerical calculations, using the MARS technique, the conclusions of this research work are presented.
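
    MARS builds its fits from mirrored hinge functions max(0, x − t) and max(0, t − x). The sketch below, on synthetic data with an invented bend at x = 2, shows the core idea of the forward pass — scan candidate knots and keep the basis with the lowest residual sum of squares. A full MARS implementation adds multiple knots, interaction terms, and a backward pruning pass.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic predictor-response relation with a bend at x = 2 (piecewise linear).
x = rng.uniform(0, 5, 300)
y = (1.0 + 0.5 * np.maximum(0, x - 2) - 0.8 * np.maximum(0, 2 - x)
     + rng.normal(scale=0.05, size=x.size))

# MARS basis: an intercept plus a mirrored pair of hinge functions at a knot.
def hinge_basis(x, knot):
    return np.column_stack([np.ones_like(x),
                            np.maximum(0, x - knot),
                            np.maximum(0, knot - x)])

# Simplified forward pass: scan candidate knots, keep the best fit by RSS.
best = None
for cand in np.linspace(0.5, 4.5, 41):
    B = hinge_basis(x, cand)
    coef_c, *_ = np.linalg.lstsq(B, y, rcond=None)
    rss = ((y - B @ coef_c) ** 2).sum()
    if best is None or rss < best[0]:
        best = (rss, cand, coef_c)

rss, knot, coef = best   # the recovered knot should sit near the true bend
```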

  12. Modeling Seroadaptation and Sexual Behavior among HIV+ Study Participants with a Simultaneously Multilevel and Multivariate Longitudinal Count Model

    PubMed Central

    Zhu, Yuda; Weiss, Robert E.

    2013-01-01

    Summary Longitudinal behavioral intervention trials to reduce HIV transmission risk collect complex multilevel and multivariate data longitudinally for each subject with important correlation structures across time, level, and variables. Accurately assessing the effects of these trials is critical for determining which interventions are effective. Both numbers of partners and numbers of sex acts with each partner are reported at each time point. Sex acts with each partner are further differentiated into protected and unprotected acts with correspondingly differing risks of HIV/STD transmission. These trials generally also have eligibility criteria limiting enrollment to participants with some minimal level of risky sexual behavior tied directly to the outcome of interest. The combination of these factors makes it difficult to quantify sexual behaviors and the effects of intervention. We propose a multivariate multilevel count model that simultaneously models the number of partners, acts within partners, and accounts for recruitment eligibility. Our methods are useful in the evaluation of intervention trials and provide a more accurate and complete model for sexual behavior. We illustrate the contributions of our model by examining seroadaptive behavior defined as risk reducing behavior that depends on the serostatus of the partner. Several forms of seroadaptive risk reducing behavior are quantified and distinguished from non-seroadaptive risk reducing behavior. PMID:23002948

  13. Using multivariate regression modeling for sampling and predicting chemical characteristics of mixed waste in old landfills.

    PubMed

    Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann

    2014-12-01

    Municipal solid waste landfills pose a threat to the environment and human health, especially old landfills, which lack facilities for the collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevents site-specific environmental risk assessments. To overcome this gap, combining waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the trade-off between investigation costs and reliable results requires knowledge of both the number of samples to be taken and the variables to be analyzed. This article aims to identify the optimal number of waste samples and variables needed to predict a larger set of variables. We therefore introduce a multivariate linear regression model and test its applicability in two case studies. Landfill A was used to set up and calibrate the model, based on 50 waste samples and twelve variables. The calibrated model was applied to Landfill B, comprising 36 waste samples and twelve variables, with four predictor variables. The case study results are twofold: first, reliable and accurate prediction of the twelve variables can be achieved with knowledge of four predictor variables (LOI, EC, pH and Cl). Second, for Landfill B, only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables exhibit comparably low analytical costs relative to the full set of measurements. This cost reduction could be used to increase the number of samples, yielding an improved understanding of the spatial waste heterogeneity in landfills. In conclusion, future application of the developed model can potentially improve the reliability of predicted emission potentials, and the model could become a standard screening tool for old landfills if its applicability and reliability are confirmed in additional case studies.
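
    The modeling idea — one multivariate least-squares fit mapping a small set of cheap predictor variables to many response variables — can be sketched as follows. All dimensions and numbers are illustrative stand-ins, not the landfill data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 50 samples, 4 cheap predictors (in the spirit of
# LOI, EC, pH, Cl) and 8 further response variables to be predicted.
n, p, q = 50, 4, 8
X = rng.normal(size=(n, p))
W_true = rng.normal(size=(p, q))              # unknown true linear relation
Y = X @ W_true + rng.normal(scale=0.1, size=(n, q))

# Multivariate linear regression: one least-squares call fits all q responses.
Xd = np.column_stack([np.ones(n), X])         # add an intercept column
W_hat, *_ = np.linalg.lstsq(Xd, Y, rcond=None)

# Predict all responses for new samples where only the predictors were measured.
X_new = rng.normal(size=(5, p))
Y_pred = np.column_stack([np.ones(5), X_new]) @ W_hat
```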

  14. Impact of Fractionation and Dose in a Multivariate Model for Radiation-Induced Chest Wall Pain

    SciTech Connect

    Din, Shaun U.; Williams, Eric L.; Jackson, Andrew; Rosenzweig, Kenneth E.; Wu, Abraham J.; Foster, Amanda; Yorke, Ellen D.; Rimner, Andreas

    2015-10-01

    Purpose: To determine the role of patient/tumor characteristics, radiation dose, and fractionation using the linear-quadratic (LQ) model to predict stereotactic body radiation therapy-induced grade ≥2 chest wall pain (CWP2) in a larger series and develop clinically useful constraints for patients treated with different fraction numbers. Methods and Materials: A total of 316 lung tumors in 295 patients were treated with stereotactic body radiation therapy in 3 to 5 fractions to 39 to 60 Gy. Absolute dose-absolute volume chest wall (CW) histograms were acquired. The raw dose-volume histograms (α/β = ∞ Gy) were converted via the LQ model to equivalent doses in 2-Gy fractions (normalized total dose, NTD) with α/β from 0 to 25 Gy in 0.1-Gy steps. The Cox proportional hazards (CPH) model was used in univariate and multivariate models to identify and assess predictors of CWP2 for a given physical dose and NTD. Results: The median follow-up was 15.4 months, and the median time to development of CWP2 was 7.4 months. On a univariate CPH model, prescription dose, prescription dose per fraction, number of fractions, D83cc, distance of tumor to CW, and body mass index were all statistically significant for the development of CWP2. Linear-quadratic correction improved the CPH model significance over the physical dose. The best-fit α/β was 2.1 Gy, and the physical dose (α/β = ∞ Gy) was outside the upper 95% confidence limit. With α/β = 2.1 Gy, V_NTD99Gy was the most significant metric, with median V_NTD99Gy = 31.5 cm³ (hazard ratio 3.87, P<.001). Conclusion: There were several predictive factors for the development of CWP2. The LQ-adjusted dose using the best-fit α/β = 2.1 Gy is a better predictor of CWP2 than the physical dose. To aid dosimetrists, we have calculated the physical dose equivalent corresponding to V_NTD99Gy = 31.5 cm³ for the 3- to 5-fraction groups.
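
    The LQ conversion used above has a simple closed form: the normalized total dose (equivalent dose in 2-Gy fractions) is NTD = D(d + α/β)/(2 + α/β) for a total dose D delivered in fractions of d Gy. A minimal sketch using the paper's best-fit α/β = 2.1 Gy, with an illustrative SBRT schedule:

```python
# Equivalent dose in 2-Gy fractions (NTD, also called EQD2) from the
# linear-quadratic model:
#   NTD = D * (d + alpha_beta) / (2 + alpha_beta)
# where D is the total physical dose and d the dose per fraction.
def ntd(total_dose, dose_per_fraction, alpha_beta):
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# A 60 Gy / 3-fraction SBRT course (20 Gy per fraction) corresponds to a far
# larger 2-Gy-fraction equivalent than the physical dose alone suggests:
eq = ntd(60.0, 20.0, 2.1)
```

    As a sanity check, a schedule already delivered in 2-Gy fractions maps to itself: ntd(60, 2, 2.1) returns 60.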

  15. Improving Metabolic Pathway Efficiency by Statistical Model-Based Multivariate Regulatory Metabolic Engineering.

    PubMed

    Xu, Peng; Rizzoni, Elizabeth Anne; Sul, Se-Yeong; Stephanopoulos, Gregory

    2017-01-20

    Metabolic engineering entails targeted modification of cell metabolism to maximize the production of a specific compound. For empowering combinatorial optimization in strain engineering, tools and algorithms are needed to efficiently sample the multidimensional gene expression space and locate the desirable overproduction phenotype. We addressed this challenge by employing design of experiment (DoE) models to quantitatively correlate gene expression with strain performance. By fractionally sampling the gene expression landscape, we statistically screened the dominant enzyme targets that determine metabolic pathway efficiency. An empirical quadratic regression model was subsequently used to identify the optimal gene expression patterns of the investigated pathway. As a proof of concept, our approach yielded the natural product violacein at 525.4 mg/L in shake flasks, a 3.2-fold increase from the baseline strain. Violacein production was further increased to 1.31 g/L in a controlled benchtop bioreactor. We found that formulating discretized gene expression levels into logarithmic variables (Linlog transformation) was essential for implementing this DoE-based optimization procedure. The reported methodology can aid multivariate combinatorial pathway engineering and may be generalized as a standard procedure for accelerating strain engineering and improving metabolic pathway efficiency.
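
    The DoE logic — sample the expression space fractionally, fit an empirical quadratic model, and read off the predicted optimum — can be sketched with two log-scaled expression variables. The response surface, factor ranges, and titer numbers below are invented for illustration; they are not the violacein data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two gene-expression levels in log units (Linlog-style), sampled on a grid,
# with titer following an (unknown to the experimenter) quadratic surface.
g1, g2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
g1, g2 = g1.ravel(), g2.ravel()
titer = (500 - 80 * (g1 - 0.3) ** 2 - 60 * (g2 + 0.2) ** 2
         + rng.normal(scale=5, size=g1.size))

# Empirical quadratic regression: titer ~ 1 + g1 + g2 + g1^2 + g2^2 + g1*g2
D = np.column_stack([np.ones_like(g1), g1, g2, g1 ** 2, g2 ** 2, g1 * g2])
c, *_ = np.linalg.lstsq(D, titer, rcond=None)

# Stationary point of the fitted surface = predicted optimal expression levels.
H = np.array([[2 * c[3], c[5]],
              [c[5], 2 * c[4]]])
opt = np.linalg.solve(H, -np.array([c[1], c[2]]))
```

    The recovered optimum should land near the true maximum at (0.3, −0.2), and the negative quadratic coefficients confirm the surface is concave there.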

  16. MD Simulations and Multivariate Studies for Modeling the Anti-Leishmanial Activity of Peptides.

    PubMed

    Guerra, Mirian Elisa Rodrigues; Fadel, Valmir; Maltarollo, Vinícius Gonçalves; Baldissera, Gisele; Honorio, Kathia Maria; Ruggiero, José Roberto; Dos Santos Cabrera, Marcia Perez

    2017-03-07

    Leishmaniasis, a protozoan-caused disease, requires alternative treatments with fewer side effects that are less prone to resistance development. Antimicrobial peptides represent a possible choice to be developed. We report on the prospection of structural parameters of 23 helical antimicrobial and leishmanicidal peptides as a tool for modeling and predicting the activity of new peptides. This investigation is based on molecular dynamics simulations (MD) in a mimetic membrane environment, since most of these peptides share the feature of interacting with phospholipid bilayers. To overcome the lack of experimental data on the peptides' structures, we started simulations from designed 100% α-helices. This procedure was validated through comparisons with NMR data and the determination of the structure of Decoralin-amide. From physicochemical features and MD results, descriptors were raised and statistically related to the minimum inhibitory concentration against Leishmania by the multivariate data analysis technique. This statistical procedure confirmed five descriptors combined by different loadings in five principal components. The leishmanicidal activity depends on the peptides' charge, backbone solvation, volume and solvent-accessible surface area. The generated model possesses good predictability (q² = 0.715, r² = 0.898) and identifies the most and the least active peptides. This is a novel theoretical path for structure-activity studies combining computational methods that identify and prioritize the promising peptide candidates.

  17. Application of Multivariate Modeling for Radiation Injury Assessment: A Proof of Concept

    PubMed Central

    Bolduc, David L.; Villa, Vilmar; Sandgren, David J.; Ledney, G. David; Blakely, William F.; Bünger, Rolf

    2014-01-01

    Multivariate radiation injury estimation algorithms were formulated for estimating severe hematopoietic acute radiation syndrome (H-ARS) injury (i.e., response category three or RC3) in a rhesus monkey total-body irradiation (TBI) model. Classical CBC and serum chemistry blood parameters were examined prior to irradiation (d 0) and on d 7, 10, 14, 21, and 25 after irradiation involving 24 nonhuman primates (NHP) (Macaca mulatta) given 6.5-Gy 60Co γ-rays (0.4 Gy min⁻¹) TBI. A correlation matrix was formulated with the RC3 severity level designated as the “dependent variable” and independent variables down-selected based on their radioresponsiveness and relatively low multicollinearity using stepwise linear regression analyses. Final candidate independent variables included CBC counts (absolute number of neutrophils, lymphocytes, and platelets) in formulating the “CBC” RC3 estimation algorithm. Additionally, the formulation of a diagnostic CBC and serum chemistry “CBC-SCHEM” RC3 algorithm expanded upon the CBC algorithm model with the addition of hematocrit and the serum enzyme levels of aspartate aminotransferase, creatine kinase, and lactate dehydrogenase. Both algorithms estimated RC3 with over 90% predictive power. Only the CBC-SCHEM RC3 algorithm, however, met the critical three assumptions of linear least squares, demonstrating slightly greater precision for radiation injury estimation and significantly decreased prediction error, indicating increased statistical robustness. PMID:25165485

  18. Multivariate near infrared spectroscopy models for predicting the methyl esters content in biodiesel.

    PubMed

    Baptista, Patrícia; Felizardo, Pedro; Menezes, José C; Correia, M Joana Neiva

    2008-01-28

    Biodiesel is the main alternative to fossil diesel. The key advantages of its use are the fact that it is a non-toxic renewable resource, which leads to lower emissions of polluting gases. European governments are targeting the incorporation of 20% biofuels into general fuels by 2020. Chemically, biodiesel is a mixture of fatty acid methyl esters, derived from vegetable oils or animal fats, which is usually produced by a transesterification reaction, where the oils/fats react with an alcohol in the presence of a catalyst. The European Standard (EN 14214) establishes 25 parameters that have to be analysed to certify biodiesel quality and the analytical methods that should be used to determine those properties. This work reports the use of near infrared (NIR) spectroscopy to determine the esters content in biodiesel as well as the content in linolenic acid methyl esters (C18:3) in industrial and laboratory-scale biodiesel samples. Furthermore, calibration models for myristic (C14:0), palmitic (C16:0), stearic (C18:0), oleic (C18:1), and linoleic (C18:2) acid methyl esters were also obtained. Principal component analysis was used for the qualitative analysis of the spectra, while partial least squares regression was used to develop the calibration models between analytical and spectral data. The results confirm that NIR spectroscopy, in combination with multivariate calibration, is a promising technique for biodiesel quality control in both laboratory-scale and industrial-scale samples.
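
    Partial least squares, the calibration workhorse here, can be written in a few lines of NIPALS. The sketch below uses synthetic low-rank "spectra" generated from three latent factors as a stand-in for NIR data; it is a minimal PLS1, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(3)

def pls1(X, y, n_comp):
    # Minimal NIPALS PLS1 on centered data; returns regression coefficients.
    X, y = X.copy(), y.copy()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)
        t = X @ w
        tt = t @ t
        p = X.T @ t / tt
        q = (y @ t) / tt
        X -= np.outer(t, p)          # deflate X
        y = y - q * t                # deflate y
        W.append(w)
        P.append(p)
        Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

# Synthetic stand-in for NIR spectra: 3 latent "chemical" factors generate
# 60 correlated wavelengths; ester content is a function of the factors.
T_lat = rng.normal(size=(40, 3))
loadings = rng.normal(size=(3, 60))
X = T_lat @ loadings + rng.normal(scale=0.01, size=(40, 60))
y = T_lat @ np.array([1.0, -0.5, 0.8]) + rng.normal(scale=0.01, size=40)

Xc, yc = X - X.mean(axis=0), y - y.mean()
b = pls1(Xc, yc, n_comp=3)
y_fit = Xc @ b + y.mean()
```

    Because the response depends on three latent factors, three PLS components recover essentially all of its variance even though there are more wavelengths than samples.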

  19. Prediction of MeV electron fluxes throughout the outer radiation belt using multivariate autoregressive models

    SciTech Connect

    Sakaguchi, Kaori; Nagatsuma, Tsutomu; Reeves, Geoffrey D.; Spence, Harlan E.

    2015-12-22

    The Van Allen radiation belts surrounding the Earth are filled with MeV-energy electrons. This region poses ionizing radiation risks for spacecraft that operate within it, including those in geostationary orbit (GEO) and medium Earth orbit. In order to provide alerts of electron flux enhancements, 16 prediction models of the electron log-flux variation throughout the equatorial outer radiation belt as a function of the McIlwain L parameter were developed using the multivariate autoregressive model and Kalman filter. Measurements of omnidirectional 2.3 MeV electron flux from the Van Allen Probes mission as well as >2 MeV electrons from the GOES 15 spacecraft were used as the predictors. Furthermore, we selected model explanatory parameters from solar wind parameters, the electron log-flux at GEO, and geomagnetic indices. For the innermost region of the outer radiation belt, the electron flux is best predicted by using the Dst index as the sole input parameter. For the central to outermost regions, at L≥4.8 and L ≥5.6, the electron flux is predicted most accurately by including also the solar wind velocity and then the dynamic pressure, respectively. The Dst index is the best overall single parameter for predicting at 3 ≤ L ≤ 6, while for the GEO flux prediction, the KP index is better than Dst. Finally, a test calculation demonstrates that the model successfully predicts the timing and location of the flux maximum as much as 2 days in advance and that the electron flux decreases faster with time at higher L values, both model features consistent with the actually observed behavior.

  20. Prediction of MeV electron fluxes throughout the outer radiation belt using multivariate autoregressive models

    DOE PAGES

    Sakaguchi, Kaori; Nagatsuma, Tsutomu; Reeves, Geoffrey D.; ...

    2015-12-22

    The Van Allen radiation belts surrounding the Earth are filled with MeV-energy electrons. This region poses ionizing radiation risks for spacecraft that operate within it, including those in geostationary orbit (GEO) and medium Earth orbit. In order to provide alerts of electron flux enhancements, 16 prediction models of the electron log-flux variation throughout the equatorial outer radiation belt as a function of the McIlwain L parameter were developed using the multivariate autoregressive model and Kalman filter. Measurements of omnidirectional 2.3 MeV electron flux from the Van Allen Probes mission as well as >2 MeV electrons from the GOES 15 spacecraft were used as the predictors. Furthermore, we selected model explanatory parameters from solar wind parameters, the electron log-flux at GEO, and geomagnetic indices. For the innermost region of the outer radiation belt, the electron flux is best predicted by using the Dst index as the sole input parameter. For the central to outermost regions, at L≥4.8 and L ≥5.6, the electron flux is predicted most accurately by including also the solar wind velocity and then the dynamic pressure, respectively. The Dst index is the best overall single parameter for predicting at 3 ≤ L ≤ 6, while for the GEO flux prediction, the KP index is better than Dst. Finally, a test calculation demonstrates that the model successfully predicts the timing and location of the flux maximum as much as 2 days in advance and that the electron flux decreases faster with time at higher L values, both model features consistent with the actually observed behavior.

  1. Prediction of MeV electron fluxes throughout the outer radiation belt using multivariate autoregressive models

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Kaori; Nagatsuma, Tsutomu; Reeves, Geoffrey D.; Spence, Harlan E.

    2015-12-01

    The Van Allen radiation belts surrounding the Earth are filled with MeV-energy electrons. This region poses ionizing radiation risks for spacecraft that operate within it, including those in geostationary orbit (GEO) and medium Earth orbit. To provide alerts of electron flux enhancements, 16 prediction models of the electron log-flux variation throughout the equatorial outer radiation belt as a function of the McIlwain L parameter were developed using the multivariate autoregressive model and Kalman filter. Measurements of omnidirectional 2.3 MeV electron flux from the Van Allen Probes mission as well as >2 MeV electrons from the GOES 15 spacecraft were used as the predictors. Model explanatory parameters were selected from solar wind parameters, the electron log-flux at GEO, and geomagnetic indices. For the innermost region of the outer radiation belt, the electron flux is best predicted by using the Dst index as the sole input parameter. For the central to outermost regions, at L ≥ 4.8 and L ≥ 5.6, the electron flux is predicted most accurately by including also the solar wind velocity and then the dynamic pressure, respectively. The Dst index is the best overall single parameter for predicting at 3 ≤ L ≤ 6, while for the GEO flux prediction, the KP index is better than Dst. A test calculation demonstrates that the model successfully predicts the timing and location of the flux maximum as much as 2 days in advance and that the electron flux decreases faster with time at higher L values, both model features consistent with the actually observed behavior.
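
    The autoregressive-plus-Kalman-filter machinery behind such predictions reduces, in the simplest scalar case, to the filter below: a latent AR(1) "log-flux" observed with noise, with the predict step supplying the one-step-ahead forecast. All parameters are illustrative, not those of the 16 published models.

```python
import numpy as np

rng = np.random.default_rng(4)

# Latent log-flux following an AR(1) process, observed with measurement noise.
phi, q_var, r_var = 0.9, 0.2, 0.5
T = 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(scale=np.sqrt(q_var))
z = x + rng.normal(scale=np.sqrt(r_var), size=T)   # noisy observations

# Kalman filter for the scalar AR(1) state-space model.
m, P = 0.0, 1.0           # prior mean and variance of the state
preds = np.empty(T)
for t in range(T):
    # predict step: one-step-ahead forecast before seeing z[t]
    m_pred, P_pred = phi * m, phi * phi * P + q_var
    preds[t] = m_pred
    # update step: fold in the observation z[t]
    K = P_pred / (P_pred + r_var)
    m = m_pred + K * (z[t] - m_pred)
    P = (1 - K) * P_pred
```

    The one-step predictions track the latent state with lower mean-squared error than the raw observations do, which is the filter's whole point.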

  2. Advanced Multivariate Inversion Techniques for High Resolution 3D Geophysical Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Maceira, M.; Zhang, H.; Rowe, C. A.

    2009-12-01

    We focus on the development and application of advanced multivariate inversion techniques to generate a realistic, comprehensive, and high-resolution 3D model of the seismic structure of the crust and upper mantle that satisfies several independent geophysical datasets. Building on previous efforts of joint inversion using surface wave dispersion measurements, gravity data, and receiver functions, we have added a fourth dataset, seismic body wave P and S travel times, to the simultaneous joint inversion method. We present a 3D seismic velocity model of the crust and upper mantle of northwest China resulting from the simultaneous, joint inversion of these four data types. Surface wave dispersion measurements are primarily sensitive to seismic shear-wave velocities, but at shallow depths it is difficult to obtain high-resolution velocities and to constrain the structure due to the depth-averaging of the more easily modeled, longer-period surface waves. Gravity inversions have the greatest resolving power at shallow depths, and they provide constraints on rock density variations. Moreover, while surface wave dispersion measurements are primarily sensitive to vertical shear-wave velocity averages, body wave receiver functions are sensitive to shear-wave velocity contrasts and vertical travel times. Addition of the fourth dataset, consisting of seismic travel-time data, helps to constrain the shear wave velocities both vertically and horizontally in the model cells crossed by the ray paths. Incorporation of both P and S body wave travel times allows us to invert for both P and S velocity structure, capitalizing on empirical relationships between both wave types’ seismic velocities and rock densities, thus eliminating the need for ad hoc assumptions regarding the Poisson ratios. Our new tomography algorithm is a modification of the Maceira and Ammon joint inversion code, in combination with the Zhang and Thurber TomoDD (double-difference tomography) program.

  3. Women's Work Conditions and Marital Adjustment in Two-Earner Couples: A Structural Model.

    ERIC Educational Resources Information Center

    Sears, Heather A.; Galambos, Nancy L.

    1992-01-01

    Evaluated structural model of women's work conditions, women's stress, and marital adjustment using path analysis. Findings from 86 2-earner couples with adolescents indicated support for spillover model in which women's work stress and global stress mediated link between their work conditions and their perceptions of marital adjustment.…

  4. Multivariate poisson lognormal modeling of crashes by type and severity on rural two lane highways.

    PubMed

    Wang, Kai; Ivan, John N; Ravishanker, Nalini; Jackson, Eric

    2017-02-01

    In an effort to improve traffic safety, there has been considerable interest in estimating crash prediction models and identifying factors contributing to crashes. To account for crash frequency variations among crash types and severities, crash prediction models have been estimated by type and severity. Univariate crash count models have been used by researchers to estimate crashes by crash type or severity, in which the crash counts by type or severity are assumed to be independent of one another and modelled separately. When considering crash types and severities simultaneously, this may neglect the potential correlations between crash counts due to the presence of shared unobserved factors across crash types or severities for a specific roadway intersection or segment, and might lead to biased parameter estimation and reduced model accuracy. The focus of this study is to estimate crashes by both crash type and crash severity using the Integrated Nested Laplace Approximation (INLA) Multivariate Poisson Lognormal (MVPLN) model, and identify the different effects of contributing factors on different crash type and severity counts on rural two-lane highways. The INLA MVPLN model can simultaneously model crash counts by crash type and crash severity by accounting for the potential correlations among them, and it significantly decreases the computational time compared with a fully Bayesian fitting of the MVPLN model using the Markov Chain Monte Carlo (MCMC) method. This paper describes estimation of MVPLN models for three-way stop controlled (3ST) intersections, four-way stop controlled (4ST) intersections, four-way signalized (4SG) intersections, and roadway segments on rural two-lane highways. Annual Average Daily Traffic (AADT) and variables describing roadway conditions (including presence of lighting, presence of left-turn/right-turn lane, lane width and shoulder width) were used as predictors. A Univariate Poisson Lognormal (UPLN) was estimated by crash type and

  5. Surface tension and energy in multivariant martensitic transformations: phase-field theory, simulations, and model of coherent interface.

    PubMed

    Levitas, Valery I; Javanbakht, Mahdi

    2010-10-15

    The Ginzburg-Landau theory for multivariant martensitic phase transformations is advanced in three directions: the potential is developed that introduces the surface tension at interfaces; a mixed term in gradient energy is introduced to control the martensite-martensite interface energy independent of that for austenite-martensite; and a noncontradictory expression for variable surface energy is suggested. The problems of surface-induced pretransformation, barrierless multivariant nucleation, and the growth of an embryo in a nanosize sample are solved to elucidate the effect of the above contributions. The obtained results represent an advanced model for coherent interface.

  6. Validation of a Multivariate Career and Educational Counseling Intervention Model Using Long-Term Follow-Up

    ERIC Educational Resources Information Center

    Greenwood, Janet I.

    2008-01-01

    In this study, the author sought to validate the effectiveness of a multivariate career and educational counseling intervention model through long-term follow-up of clients seen in private practice. Effectiveness was measured by clients' commitment to and enjoyment of their chosen career paths and the relationship of these factors to adherence to…

  7. The effect of handedness on academic ability: a multivariate linear mixed model approach.

    PubMed

    Cheyne, Christopher P; Roberts, Neil; Crow, Tim J; Leask, Stuart J; Garcia-Finana, Marta

    2010-07-01

    In recent years questions have arisen about whether there are any links between handedness and academic abilities as well as other factors. In this study we investigate the effects of gender, writing hand, relative hand skill, and UK region on mathematics and reading test scores by applying a multivariate linear mixed-effects model. A data sample based on 11,847 11-year-old pupils across the UK from the National Child Development Study was considered for the analysis. Our results show that pupils who write with one hand while having better skill with their other hand (i.e., inconsistent writing hand and superior hand) obtained lower test scores in both reading and mathematics than pupils with consistent writing hand and superior hand. Furthermore, we confirm previous findings that degree of relative hand skill has a significant effect on both reading and maths scores and that this association is not linear. We also found higher scores of reading in children from the south of England, and of mathematics in children from the south of England and Scotland, when compared to other UK regions.

  8. A Bayesian design space for analytical methods based on multivariate models and predictions.

    PubMed

    Lebrun, Pierre; Boulanger, Bruno; Debrus, Benjamin; Lambert, Philippe; Hubert, Philippe

    2013-01-01

    The International Conference for Harmonization (ICH) has released regulatory guidelines for pharmaceutical development. In the document ICH Q8, the design space of a process is presented as the set of factor settings providing satisfactory results. However, ICH Q8 does not propose any practical methodology to define, derive, and compute the design space. In parallel, in the last decades, it has been observed that the diversity and the quality of analytical methods have evolved exponentially, allowing substantial gains in selectivity and sensitivity. However, there is still a lack of a rationale toward the development of robust separation methods in a systematic way. Applying ICH Q8 to analytical methods provides a methodology for predicting a region of the space of factors in which results will be reliable. Combining design of experiments and Bayesian standard multivariate regression, a closed form of the predictive distribution of a new response vector was identified and used, under noninformative as well as informative prior distributions of the parameters. From the responses and their predictive distribution, various critical quality attributes can be easily derived. This Bayesian framework was then extended to the multicriteria setting to estimate the predictive probability that several critical quality attributes will be jointly achieved in the future use of an analytical method. An example based on a high-performance liquid chromatography (HPLC) method is given. For this example, a constrained sampling scheme was applied to ensure the modeled responses have desirable properties.
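
    The core computation — a Bayesian regression whose posterior-predictive distribution yields the probability that a quality attribute is met — can be sketched in the univariate case. The factor, response, and target below are invented; under the standard noninformative prior, predictive draws follow the usual scaled-inverse-chi-squared / normal sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy method-development data: one factor (e.g. gradient time, scaled to
# [0, 1]) against one critical quality attribute (e.g. resolution).
n = 30
xf = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), xf])
y = 1.0 + 2.0 * xf + rng.normal(scale=0.3, size=n)

# Posterior under the noninformative prior p(beta, sigma^2) ∝ 1/sigma^2:
#   sigma^2 | y  ~  scaled-inverse-chi-squared(n - p, s^2)
#   beta | sigma^2, y  ~  N(beta_hat, sigma^2 (X'X)^{-1})
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = n - 2
s2 = ((y - X @ beta_hat) ** 2).sum() / dof
XtX_inv = np.linalg.inv(X.T @ X)

# Posterior-predictive draws of the CQA at a candidate setting x* = 0.8,
# then the predictive probability that the CQA exceeds a target of 2.0.
x_star = np.array([1.0, 0.8])
draws = np.empty(5000)
for i in range(draws.size):
    sigma2 = dof * s2 / rng.chisquare(dof)
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
    draws[i] = x_star @ beta + rng.normal(scale=np.sqrt(sigma2))
p_success = (draws > 2.0).mean()
```

    Points of the factor space where p_success clears a chosen threshold would form the (one-dimensional analogue of a) Bayesian design space.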

  9. Granger Causality in Multivariate Time Series Using a Time-Ordered Restricted Vector Autoregressive Model

    NASA Astrophysics Data System (ADS)

    Siggiridou, Elsa; Kugiumtzis, Dimitris

    2016-04-01

    Granger causality has been used to investigate the interdependence structure of the systems underlying multivariate time series. In particular, direct causal effects are commonly estimated by the conditional Granger causality index (CGCI). In the presence of many observed variables and relatively short time series, CGCI may fail because it is based on vector autoregressive (VAR) models involving a large number of coefficients to be estimated. In this work, the VAR is restricted by a scheme that modifies the recently developed method of backward-in-time selection (BTS) of the lagged variables, and CGCI is combined with BTS. The proposed approach compares favorably with other restricted VAR representations, such as the top-down strategy, the bottom-up strategy, and the least absolute shrinkage and selection operator (LASSO), in terms of the sensitivity and specificity of CGCI. This is shown using simulations of linear and nonlinear, low- and high-dimensional systems and different time series lengths. For nonlinear systems, CGCI from the restricted VAR representations is compared with analogous nonlinear causality indices. Further, CGCI in conjunction with BTS and the other restricted VAR representations is applied to multi-channel scalp electroencephalogram (EEG) recordings of epileptic patients containing epileptiform discharges. CGCI on the restricted VAR, and on BTS in particular, could track the changes in brain connectivity before, during, and after epileptiform discharges, which was not possible using the full VAR representation.
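    The core of the CGCI is the log ratio of residual variances between a restricted VAR (omitting the candidate driver's lags) and the full VAR. A minimal sketch on a simulated three-variable system follows; it is illustrative only and does not include the paper's BTS restriction scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 3-variable system where X1 drives X2 at lag 1; X3 is independent.
n = 2000
x = np.zeros((n, 3))
for t in range(1, n):
    x[t, 0] = 0.5 * x[t - 1, 0] + rng.normal()
    x[t, 1] = 0.6 * x[t - 1, 0] + 0.3 * x[t - 1, 1] + rng.normal()
    x[t, 2] = 0.4 * x[t - 1, 2] + rng.normal()

def var_residual_var(y, lagged):
    """OLS fit of y on an intercept plus lagged predictors; residual variance."""
    A = np.column_stack([np.ones(len(y)), lagged])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.var(y - A @ coef)

# CGCI(X1 -> X2 | X3): full model vs. one omitting the lags of X1.
target = x[1:, 1]
full = x[:-1, :]                 # lags of X1, X2, X3
restricted = x[:-1, [1, 2]]      # lags of X2, X3 only
cgci = np.log(var_residual_var(target, restricted)
              / var_residual_var(target, full))

# Omitting X3 (a non-driver of X2) should give an index near zero.
cgci_null = np.log(var_residual_var(target, x[:-1, [0, 1]])
                   / var_residual_var(target, full))
```

A clearly positive `cgci` flags the true driver, while `cgci_null` stays near zero for the irrelevant variable.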

  10. The Analysis of Repeated Measurements with Mixed-Model Adjusted "F" Tests

    ERIC Educational Resources Information Center

    Kowalchuk, Rhonda K.; Keselman, H. J.; Algina, James; Wolfinger, Russell D.

    2004-01-01

    One approach to the analysis of repeated measures data allows researchers to model the covariance structure of their data rather than presume a certain structure, as is the case with conventional univariate and multivariate test statistics. This mixed-model approach, available through SAS PROC MIXED, was compared to a Welch-James type statistic.…

  11. Multivariate time series modeling of short-term system scale irrigation demand

    NASA Astrophysics Data System (ADS)

    Perera, Kushan C.; Western, Andrew W.; George, Biju; Nawarathna, Bandara

    2015-12-01

    Travel time limits the ability of irrigation system operators to react to short-term irrigation demand fluctuations that result from variations in weather, including very hot periods and rainfall events, as well as the various other pressures and opportunities that farmers face. Short-term system-wide irrigation demand forecasts can assist in system operation. Here we developed multivariate time series (ARMAX) models to forecast irrigation demands with respect to aggregated service point flows (IDCGi, ASP) and off-take regulator flows (IDCGi, OTR) across 5 command areas, covering the areas served by four irrigation channels and the study area. These command-area-specific ARMAX models forecast daily IDCGi, ASP and IDCGi, OTR 1-5 days ahead using real-time flow data recorded at the service points and the uppermost regulators, together with observed meteorological data collected from automatic weather stations. Model efficiency and predictive performance were quantified using the root mean squared error (RMSE), the Nash-Sutcliffe model efficiency coefficient (NSE), the anomaly correlation coefficient (ACC), and the mean square skill score (MSSS). During the evaluation period, NSE for IDCGi, ASP and IDCGi, OTR across the 5 command areas ranged from 0.78 to 0.98. The models were capable of generating skillful forecasts (MSSS ⩾ 0.5 and ACC ⩾ 0.6) of IDCGi, ASP and IDCGi, OTR for all 5 lead days, and these forecasts were better than using the long-term monthly mean irrigation demand. Overall, the predictive performance of the ARMAX time series models was higher than in almost all previous studies we are aware of. Further, the IDCGi, ASP and IDCGi, OTR forecasts improved the operators' ability to react to near-future irrigation demand fluctuations, as the developed ARMAX time series models were self-adaptive, reflecting short-term changes in irrigation demand with respect to the various pressures and opportunities that farmers face, such as

  12. Multivariate calibration modeling of liver oxygen saturation using near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Cingo, Ndumiso A.; Soller, Babs R.; Puyana, Juan C.

    2000-05-01

    The liver has been identified as an ideal site to spectroscopically monitor changes in oxygen saturation during liver transplantation and shock because it is susceptible to reduced blood flow and oxygen transport. Near-IR spectroscopy, combined with multivariate calibration techniques, has been shown to be a viable technique for monitoring oxygen saturation changes in various organs in a minimally invasive manner. The liver has a dual circulation: blood enters through the portal vein and hepatic artery and leaves through the hepatic vein. Therefore, it is of utmost importance to determine how the liver NIR spectroscopic information correlates with the different regions of the hepatic lobule as the dual circulation flows from the presinusoidal space into the postsinusoidal region of the central vein. For NIR spectroscopic information to reliably represent the status of liver oxygenation, the NIR oxygen saturation should correlate best with the postsinusoidal region. In a series of six pigs undergoing induced hemorrhagic shock, NIR spectra collected from the liver were used together with oxygen saturation reference data from the hepatic and portal veins, and an average of the two, to build partial least-squares regression models. Results obtained from these models show that the hepatic vein, and the average of the hepatic and portal veins, provide the information that correlates best with the NIR spectral data, while the portal vein reference measurement provides poorer correlation and accuracy. These results indicate that NIR determination of oxygen saturation in the liver can provide an assessment of liver oxygen utilization.

  13. Multivariate meta-analysis of individual participant data helped externally validate the performance and implementation of a prediction model

    PubMed Central

    Snell, Kym I.E.; Hua, Harry; Debray, Thomas P.A.; Ensor, Joie; Look, Maxime P.; Moons, Karel G.M.; Riley, Richard D.

    2016-01-01

    Objectives: Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. Study design and setting: We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of “good” performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. Results: In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of “good” performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of “good” performance. Conclusion: Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. PMID:26142114

  14. Mapping disability-adjusted life years: a Bayesian hierarchical model framework for burden of disease and injury assessment.

    PubMed

    MacNab, Ying C

    2007-11-20

    This paper presents a Bayesian disability-adjusted life year (DALY) methodology for spatial and spatiotemporal analyses of disease and/or injury burden. A Bayesian disease mapping model framework, which blends together spatial modelling, shared-component modelling (SCM), temporal modelling, ecological modelling, and non-linear modelling, is developed for small-area DALY estimation and inference. In particular, we develop a model framework that enables SCM as well as multivariate CAR modelling of non-fatal and fatal disease or injury rates and facilitates spline smoothing for non-linear modelling of temporal rate and risk trends. Using British Columbia (Canada) hospital admission-separation data and vital statistics mortality data on non-fatal and fatal road traffic injuries in the male population aged 20-39 for the years 1991-2000, across 84 local health areas and 16 health service delivery areas, spatial and spatiotemporal estimation and inference on years of life lost due to premature death, years lived with disability, and DALYs are presented. Fully Bayesian estimation and inference, with Markov chain Monte Carlo implementation, are illustrated. We present a methodological framework within which the DALY and Bayesian disease mapping methodologies interface and intersect. Its development brings the relative importance of premature mortality and disability into the assessment of community health and health needs, in order to provide reliable information and evidence for community-based public health surveillance and evaluation, disease and injury prevention, and resource provision.

  15. Multivariate analysis of groundwater quality and modeling impact of ground heat pump system

    NASA Astrophysics Data System (ADS)

    Thuyet, D. Q.; Saito, H.; Muto, H.; Saito, T.; Hamamoto, S.; Komatsu, T.

    2013-12-01

    The ground source heat pump (GSHP) system has recently become a popular building heating and cooling method, especially in North America, Western Europe, and Asia, due to its advantages in reducing energy consumption and greenhouse gas emissions. Because of the stability of the ground temperature, a GSHP can effectively exchange a building's excess or demanded heat with the ground when air conditioning the building in different seasons. The extensive use of GSHPs can potentially disturb subsurface soil temperatures and thus groundwater quality. Therefore, assessment of the subsurface thermal and environmental impacts of GSHP operation is necessary to ensure sustainable use of GSHP systems as well as safe use of groundwater resources. This study aims to monitor groundwater quality during GSHP operation and to develop a numerical model to assess changes in subsurface soil temperature and groundwater quality as affected by GSHP operation. A GSHP system was installed in Fuchu city, Tokyo, and consists of two closed double U-tubes (50 m length) buried vertically in the ground, 7.3 m apart, outside a building. An anti-freezing solution was circulated inside the U-tubes to exchange heat between the building and the ground. The temperature at every 5 m depth and the groundwater quality, including concentrations of 16 trace elements, pH, EC, Eh, and DO in the shallow aquifer (32 m depth) and the deep aquifer (44 m depth), were monitored monthly from 2012 in an observation well installed 3 m from the center of the two U-tubes. Temporal variations of each element were evaluated using multivariate analysis and geostatistics. A three-dimensional heat exchange model was developed in COMSOL Multiphysics 4.3b to simulate the heat exchange processes in subsurface soils. Results showed the difference in groundwater quality between the shallow and deep aquifers to be significant for some element concentrations and DO, but

  16. Stability of binary and ternary model oil-field particle suspensions: a multivariate analysis approach.

    PubMed

    Dudásová, Dorota; Rune Flåten, Geir; Sjöblom, Johan; Øye, Gisle

    2009-09-15

    The transmission profiles of one- to three-component particle suspension mixtures were analyzed by multivariate methods, namely principal component analysis (PCA) and partial least-squares regression (PLS). The particles mimic the solids present in oil-field produced water: kaolin and silica represent solids of reservoir origin, whereas FeS is a product of bacterial metabolic activity and Fe(3)O(4) a corrosion product (e.g., from pipelines). All particles were coated with crude oil surface-active components to imitate particles in real systems. The effects of different variables (concentration, temperature, and coating) on suspension stability were studied with a Turbiscan LAB Expert instrument. The transmission profiles over 75 min represent overall water quality, while the transmission during the first 15.5 min gives information on suspension behavior during a representative hold time in the separator. The behavior of the mixed particle suspensions was compared to that of the single-particle suspensions, and models describing the systems were built. The findings are summarized as follows: silica seems to dominate the mixture properties in the binary suspensions toward enhanced separation. Over 75 min, temperature and concentration are the most significant variables, while over 15.5 min, concentration is the only significant variable. Models for prediction of transmission spectra from run parameters, as well as of particle type from transmission profiles (inverse calibration), give a reasonable description of the relationships. In ternary particle mixtures, silica is not dominant, and over 75 min the significant variables for the mixture (temperature and coating) are more similar to those for single kaolin and FeS/Fe(3)O(4) suspensions. On the other hand, over 15.5 min the coating is the most significant variable, similar to silica alone (at 15.5 min). The model for prediction of transmission spectra from run parameters gives good estimates of the transmission profiles. Although the

  17. Multivariate random-parameters zero-inflated negative binomial regression model: an application to estimate crash frequencies at intersections.

    PubMed

    Dong, Chunjiao; Clarke, David B; Yan, Xuedong; Khattak, Asad; Huang, Baoshan

    2014-09-01

    Crash data are collected through police reports and integrated with road inventory data for further analysis. Integrated police reports and inventory data yield correlated multivariate data for roadway entities (e.g., segments or intersections). Analysis of such data reveals important relationships that can help focus on high-risk situations and identify safety countermeasures. To understand the relationships between crash frequencies and associated variables while taking full advantage of the available data, multivariate random-parameters models are appropriate, since they can simultaneously consider the correlation among specific crash types and account for unobserved heterogeneity. However, a key issue that arises with correlated multivariate crash data is the large number of zero observations, since crash counts are split across many categories. In this paper, we describe a multivariate random-parameters zero-inflated negative binomial (MRZINB) regression model for jointly modeling crash counts. The full Bayesian method is employed to estimate the model parameters. Crash frequencies at urban signalized intersections in Tennessee are analyzed. The paper investigates the performance of the multivariate zero-inflated negative binomial (MZINB) and MRZINB regression models in establishing the relationship between crash frequencies, pavement conditions, traffic factors, and geometric design features of roadway intersections. Compared to the MZINB model, the MRZINB model identifies additional statistically significant factors and provides better goodness of fit. The empirical results show that the MRZINB model possesses most of the desirable statistical properties in terms of its ability to accommodate unobserved heterogeneity and excess zero counts in correlated data. Notably, in the MRZINB model, the estimated parameters vary significantly across intersections for different crash types.
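    The zero-inflated negative binomial outcome model at the heart of MRZINB can be illustrated by simulation: a structural-zero state is mixed with a gamma-Poisson (negative binomial) count process. The parameters below are arbitrary, chosen only to show the excess-zero behavior:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
p_zero, r, mean = 0.3, 2.0, 4.0   # zero-inflation prob., NB dispersion, NB mean

# Negative binomial as a gamma-Poisson mixture ...
lam = rng.gamma(shape=r, scale=mean / r, size=n)
counts = rng.poisson(lam)
# ... with structural zeros added with probability p_zero.
counts[rng.random(n) < p_zero] = 0

# Theoretical moments of the zero-inflated mixture:
expected_mean = (1 - p_zero) * mean                        # = 2.8
expected_zero_frac = p_zero + (1 - p_zero) * (r / (r + mean)) ** r  # ~ 0.378
```

The sample mean and zero fraction should match the closed-form values, showing why an un-inflated count model underfits data like this.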

  18. Multivariate Discrete Hidden Markov Models for Domain-Based Measurements and Assessment of Risk Factors in Child Development

    PubMed Central

    Zhang, Qiang; Snow Jones, Alison; Rijmen, Frank; Ip, Edward H.

    2016-01-01

    Many studies in the social and behavioral sciences involve multivariate discrete measurements, which are often characterized by the presence of an underlying individual trait, the existence of clusters such as domains of measurements, and the availability of multiple waves of cohort data. Motivated by an application in child development, we propose a class of extended multivariate discrete hidden Markov models for analyzing domain-based measurements of cognition and behavior. A random effects model is used to capture the long-term trait. Additionally, we develop a model selection criterion based on the Bayes factor for the extended hidden Markov model. The National Longitudinal Survey of Youth (NLSY) is used to illustrate the methods. Supplementary technical details and computer codes are available online. PMID:28066134
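    The hidden Markov machinery underlying such models rests on the forward algorithm for the observed-data likelihood. Below is a minimal scaled-forward sketch for a plain two-state discrete HMM; the paper's extension adds measurement domains and random effects on top of this machinery, and all parameter values here are illustrative:

```python
import numpy as np

# Two hidden states, three observed categories (illustrative parameters).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # state transition matrix
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])   # emission probabilities per state
pi = np.array([0.5, 0.5])         # initial state distribution

def log_likelihood(obs):
    """Scaled forward recursion: log P(obs) under the HMM above."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return ll
```

For a short sequence the result can be checked against brute-force summation over all hidden-state paths.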

  19. Examining Competing Models of the Associations among Peer Victimization, Adjustment Problems, and School Connectedness

    ERIC Educational Resources Information Center

    Loukas, Alexandra; Ripperger-Suhler, Ken G.; Herrera, Denise E.

    2012-01-01

    The present study tested two competing models to assess whether psychosocial adjustment problems mediate the associations between peer victimization and school connectedness one year later, or if peer victimization mediates the associations between psychosocial adjustment problems and school connectedness. Participants were 500 10- to 14-year-old…

  20. Parental Support, Coping Strategies, and Psychological Adjustment: An Integrative Model with Late Adolescents.

    ERIC Educational Resources Information Center

    Holahan, Charles J.; And Others

    1995-01-01

    An integrative predictive model was applied to responses of 241 college freshmen to examine interrelationships among parental support, adaptive coping strategies, and psychological adjustment. Social support from both parents and a nonconflictual parental relationship were positively associated with adolescents' psychological adjustment. (SLD)

  1. Multivariate detection limits of on-line NIR model for extraction process of chlorogenic acid from Lonicera japonica.

    PubMed

    Wu, Zhisheng; Sui, Chenglin; Xu, Bing; Ai, Lu; Ma, Qun; Shi, Xinyuan; Qiao, Yanjiang

    2013-04-15

    A methodology is proposed to estimate the multivariate detection limits (MDL) of an on-line near-infrared (NIR) model in a Chinese herbal medicine (CHM) system. In this paper, Lonicera japonica was used as an example, and its extraction process was monitored by on-line NIR spectroscopy. On-line NIR spectra were collected by two fiber optic probes designed to transmit NIR radiation through a 2 mm flange. High-performance liquid chromatography (HPLC) was used as a reference method to determine the content of chlorogenic acid in the extract solution. Multivariate calibration models were built, including partial least squares (PLS) regression and interval partial least squares (iPLS) regression. The results showed improved model performance: compared with the PLS model, the root mean square error of prediction (RMSEP) of the iPLS model decreased from 0.111 mg to 0.068 mg, and R² increased from 0.9434 to 0.9801. Furthermore, MDL values were determined by a multivariate method using the type of errors and concentration ranges. The MDL of the iPLS model was about 14 ppm, which confirmed that on-line NIR spectroscopy has the ability to detect trace amounts of chlorogenic acid in L. japonica. As a result, the application of on-line NIR spectroscopy for monitoring extraction processes in CHM is very encouraging and reliable.

  2. Tweaking Model Parameters: Manual Adjustment and Self Calibration

    NASA Astrophysics Data System (ADS)

    Schulz, B.; Tuffs, R. J.; Laureijs, R. J.; Lu, N.; Peschke, S. B.; Gabriel, C.; Khan, I.

    2002-12-01

    The reduction of P32 data is not always straight forward and the application of the transient model needs tight control by the user. This paper describes how to access the model parameters within the P32Tools software and how to work with the "Inspect signals per pixel" panel, in order to explore the parameter space and improve the model fit.

  3. Multivariate threshold model analysis of clinical mastitis in multiparous norwegian dairy cattle.

    PubMed

    Heringstad, B; Chang, Y M; Gianola, D; Klemetsdal, G

    2004-09-01

    A Bayesian multivariate threshold model was fitted to clinical mastitis (CM) records from 372,227 daughters of 2411 Norwegian Dairy Cattle (NRF) sires. All cases of veterinary-treated CM occurring from 30 d before first calving to culling or 300 d after third calving were included. Lactations were divided into 4 intervals: -30 to 0 d, 1 to 30 d, 31 to 120 d, and 121 to 300 d after calving. Within each interval, absence or presence of CM was scored as "0" or "1" based on the CM episodes. A 12-variate (3 lactations x 4 intervals) threshold model was used, assuming that CM was a different trait in each interval. Residuals were assumed correlated within lactation but independent between lactations. The model for liability to CM had interval-specific effects of month-year of calving, age at calving (first lactation), or calving interval (second and third lactations), herd-5-yr-period, sire of the cow, plus a residual. Posterior mean of heritability of liability to CM was 0.09 and 0.05 in the first and last intervals, respectively, and between 0.06 and 0.07 for other intervals. Posterior means of genetic correlations of liability to CM between intervals ranged from 0.24 (between intervals 1 and 12) to 0.73 (between intervals 1 and 2), suggesting interval-specific genetic control of resistance to mastitis. Residual correlations ranged from 0.08 to 0.17 for adjacent intervals, and between -0.01 and 0.03 for nonadjacent intervals. Trends of mean sire posterior means by birth year of daughters were used to assess genetic change. The 12 traits showed similar trends, with little or no genetic change from 1976 to 1986, and genetic improvement in resistance to mastitis thereafter. Annual genetic change was larger for intervals in first lactation when compared with second or third lactation. Within lactation, genetic change was larger for intervals early in lactation, and more so in the first lactation. 
This reflects that selection against mastitis in NRF has emphasized mainly CM

  4. A multivariate cure model for left-censored and right-censored data with application to colorectal cancer screening patterns.

    PubMed

    Hagar, Yolanda C; Harvey, Danielle J; Beckett, Laurel A

    2016-08-30

    We develop a multivariate cure survival model to estimate lifetime patterns of colorectal cancer screening. Screening data cover long periods of time, with sparse observations for each person. Some events may occur before the study begins or after the study ends, so the data are both left-censored and right-censored, and some individuals are never screened (the 'cured' population). We propose a multivariate parametric cure model that can be used with left-censored and right-censored data. Our model allows for the estimation of the time to screening as well as the average number of times individuals will be screened. We calculate likelihood functions based on the observations for each subject using a distribution that accounts for within-subject correlation and estimate parameters using Markov chain Monte Carlo methods. We apply our methods to the estimation of lifetime colorectal cancer screening behavior in the SEER-Medicare data set. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Transfer of multivariate regression models between high-resolution NMR instruments: application to authenticity control of sunflower lecithin.

    PubMed

    Monakhova, Yulia B; Diehl, Bernd W K

    2016-03-22

    In recent years, the number of spectroscopic studies utilizing multivariate techniques and involving different laboratories has increased dramatically. In this paper, a protocol is established for transferring a partial least squares regression model between high-resolution nuclear magnetic resonance (NMR) spectrometers of different frequencies equipped with different probes. As the test system, a previously published quantitative model predicting the concentration of blended soy species in sunflower lecithin was used. For multivariate modelling, piecewise direct standardization (PDS), direct standardization, and hybrid calibration were employed. PDS showed the best performance for estimating lecithin falsification with regard to its vegetable origin, with the root mean square error of prediction decreasing significantly from 5.0-7.3% without standardization to 2.9-3.2% with PDS. An acceptable calibration transfer model was obtained by direct standardization, but this standardization approach introduces unfavourable noise into the spectral data. Hybrid calibration is least recommended for high-resolution NMR data. The sensitivity of the instrument transfer methods with respect to the type of spectrometer, the number of samples, and the subset selection is also discussed. The study shows the necessity of applying a proper standardization procedure whenever a multivariate model is applied to spectra recorded on a secondary NMR spectrometer, even one with the same magnetic field strength. Copyright © 2016 John Wiley & Sons, Ltd.
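    Piecewise direct standardization itself is straightforward to sketch: each primary-instrument channel is regressed on a small window of secondary-instrument channels measured on a common transfer set, yielding a banded transformation matrix. All data below are simulated, and the window width is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical transfer set: 20 samples measured on a "master" and a "slave"
# spectrometer (120 points); the slave has a channel shift, gain, and noise.
n, p, win = 20, 120, 5
master = rng.normal(size=(n, p)).cumsum(axis=1) * 0.1   # smooth-ish spectra
slave = 1.1 * np.roll(master, 2, axis=1) + 0.05 * rng.normal(size=(n, p))

# PDS: regress each master channel on a window of slave channels,
# assembling a banded transformation matrix F (slave @ F ~ master).
F = np.zeros((p, p))
for j in range(p):
    lo, hi = max(0, j - win), min(p, j + win + 1)
    b, *_ = np.linalg.lstsq(slave[:, lo:hi], master[:, j], rcond=None)
    F[lo:hi, j] = b

corrected = slave @ F
err_before = np.sqrt(np.mean((slave - master) ** 2))
err_after = np.sqrt(np.mean((corrected - master) ** 2))
```

After standardization the slave spectra should sit much closer to the master spectra than before; a real application would, of course, validate on samples outside the transfer set.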

  6. Covariate-Adjusted Linear Mixed Effects Model with an Application to Longitudinal Data

    PubMed Central

    Nguyen, Danh V.; Şentürk, Damla; Carroll, Raymond J.

    2009-01-01

    Linear mixed effects (LME) models are useful for longitudinal data/repeated measurements. We propose a new class of covariate-adjusted LME models for longitudinal data that nonparametrically adjusts for a normalizing covariate. The proposed approach involves fitting a parametric LME model to the data after adjusting for the nonparametric effects of a baseline confounding covariate. In particular, the effect of the observable covariate on the response and predictors of the LME model is modeled nonparametrically via smooth unknown functions. In addition to covariate-adjusted estimation of fixed/population parameters and random effects, an estimation procedure for the variance components is also developed. Numerical properties of the proposed estimators are investigated with simulation studies. The consistency and convergence rates of the proposed estimators are also established. An application to a longitudinal data set on calcium absorption, accounting for baseline distortion from body mass index, illustrates the proposed methodology. PMID:19266053

  7. The relationship of values to adjustment in illness: a model for nursing practice.

    PubMed

    Harvey, R M

    1992-04-01

    This paper proposes a model of the relationship between values, in particular health value, and adjustment to illness. The importance of values as well as the need for value change are described in the literature related to adjustment to physical disability and chronic illness. An empirical model, however, that explains the relationship of values to adjustment or adaptation has not been found by this researcher. Balance theory and its application to the abstract and perceived cognitions of health value and health perception are described here to explain the relationship of values like health value to outcomes associated with adjustment or adaptation to illness. The proposed model is based on the balance theories of Heider, Festinger and Feather. Hypotheses based on the model were tested and supported in a study of 100 adults with visible and invisible chronic illness. Nursing interventions based on the model are described and suggestions for further research discussed.

  8. Prediction of wheat tortilla quality using multivariate modeling of kernel, flour and dough properties

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Wheat grain attributes that influence tortilla quality are not fully understood. This impedes genetic improvement efforts to develop wheat varieties for the growing market. This study used a multivariate discriminant analysis to predict tortilla quality using a set of 16 variables derived from kerne...

  9. A MULTIVARIATE FIT LUMINOSITY FUNCTION AND WORLD MODEL FOR LONG GAMMA-RAY BURSTS

    SciTech Connect

    Shahmoradi, Amir

    2013-04-01

    It is proposed that the luminosity function, the rest-frame spectral correlations, and the distributions of cosmological long-duration (Type-II) gamma-ray bursts (LGRBs) may be very well described by a multivariate log-normal distribution. This result is based on careful selection, analysis, and modeling of LGRBs' temporal and spectral variables in the largest catalog of GRBs available to date, 2130 BATSE GRBs, while taking into account the detection threshold and possible selection effects. Constraints on the joint rest-frame distribution of the isotropic peak luminosity (L_iso), total isotropic emission (E_iso), time-integrated spectral peak energy (E_p,z), and duration (T_90,z) of LGRBs are derived. The presented analysis provides evidence for a relatively large fraction of LGRBs that have been missed by the BATSE detector, with E_iso extending down to ~10^49 erg and observed spectral peak energies (E_p) as low as ~5 keV. LGRBs with rest-frame duration T_90,z ≲ 1 s or observer-frame duration T_90 ≲ 2 s appear to be rare events (≲ 0.1% chance of occurrence). The model predicts a fairly strong and highly significant correlation (ρ = 0.58 ± 0.04) between E_iso and E_p,z of LGRBs. Also predicted are strong correlations of L_iso and E_iso with T_90,z and a moderate correlation between L_iso and E_p,z. The strength and significance of the correlations found encourage the search for underlying mechanisms, though they undermine the correlations' capabilities as probes of dark energy's equation of state at high redshifts. The presented analysis favors, but does not necessitate, a cosmic rate for BATSE LGRBs tracing metallicity evolution consistent with a cutoff Z/Z_Sun ~ 0.2-0.5, assuming no luminosity-redshift evolution.
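    The central modeling assumption, a multivariate log-normal for (L_iso, E_iso, E_p,z, T_90,z), is easy to illustrate by sampling. The means and covariance factor below are placeholders for illustration only, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative log10-space parameters for (L_iso, E_iso, E_p,z, T_90,z).
# The covariance is built from a lower-triangular factor, so it is positive
# definite by construction. None of these numbers come from the paper.
mu = np.array([51.0, 52.5, 2.5, 1.2])
Lf = np.array([[0.8, 0.0, 0.0, 0.0],
               [0.5, 0.7, 0.0, 0.0],
               [0.3, 0.3, 0.4, 0.0],
               [0.1, 0.3, 0.1, 0.3]])
cov = Lf @ Lf.T

logs = rng.multivariate_normal(mu, cov, size=50_000)
samples = 10.0 ** logs             # multivariate log-normal draws

# Under log-normality, correlations are naturally measured in log space,
# e.g. the E_iso vs E_p,z correlation (columns 1 and 2):
corr_eiso_epz = np.corrcoef(logs[:, 1], logs[:, 2])[0, 1]
```

The sample correlation recovers the value implied by the chosen covariance, which is how a fitted model of this form yields predicted correlations such as the ρ = 0.58 quoted above.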

  10. A New Climate Adjustment Tool: An update to EPA’s Storm Water Management Model

    EPA Science Inventory

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations.

  11. Modeling angles in proteins and circular genomes using multivariate angular distributions based on multiple nonnegative trigonometric sums.

    PubMed

    Fernández-Durán, Juan José; Gregorio-Domínguez, María Mercedes

    2014-02-01

    Fernández-Durán, J. J. (2004): "Circular distributions based on nonnegative trigonometric sums," Biometrics, 60, 499-503, developed a family of univariate circular distributions based on nonnegative trigonometric sums. In this work, we extend this family of distributions to the multivariate case by using multiple nonnegative trigonometric sums to model the joint distribution of a vector of angular random variables. Practical examples of vectors of angular random variables include the wind direction at different monitoring stations, the directions taken by an animal on different occasions, the times at which a person performs different daily activities, and the dihedral angles of a protein molecule. We apply the proposed new family of multivariate distributions to three real data-sets: two for the study of protein structure and one for genomics. The first is related to the study of a bivariate vector of dihedral angles in proteins. In the second real data-set, we compare the fit of the proposed multivariate model with the bivariate generalized von Mises model of [Shieh, G. S., S. Zheng, R. A. Johnson, Y.-F. Chang, K. Shimizu, C.-C. Wang, and S.-L. Tang (2011): "Modeling and comparing the organization of circular genomes," Bioinformatics, 27(7), 912-918.] in a problem related to orthologous genes in pairs of circular genomes. The third real data-set consists of observed values of three dihedral angles in γ-turns in a protein and serves as an example of trivariate angular data. In addition, a simulation algorithm is presented to generate realizations from the proposed multivariate angular distribution.

  12. Multivariate Visual Explanation for High Dimensional Datasets

    PubMed Central

    Barlowe, Scott; Zhang, Tianyi; Liu, Yujie; Yang, Jing; Jacobs, Donald

    2010-01-01

    Understanding multivariate relationships is an important task in multivariate data analysis. Unfortunately, existing multivariate visualization systems lose effectiveness when analyzing relationships among variables that span more than a few dimensions. We present a novel multivariate visual explanation approach that helps users interactively discover multivariate relationships among a large number of dimensions by integrating automatic numerical differentiation techniques and multidimensional visualization techniques. The result is an efficient workflow for multivariate analysis model construction, interactive dimension reduction, and multivariate knowledge discovery leveraging both automatic multivariate analysis and interactive multivariate data visual exploration. Case studies and a formal user study with a real dataset illustrate the effectiveness of this approach. PMID:20694164

  13. A Disequilibrium Adjustment Mechanism for CPE Macroeconometric Models: Initial Testing on SOVMOD.

    DTIC Science & Technology

    1979-02-01

    This report (SRI International, Strategic Studies Center, Arlington, VA; approved for public release, distribution unlimited) describes work on the model aimed at facilitating the integration of a disequilibrium adjustment mechanism into the macroeconometric model.

  14. Algorithms for the analysis of ensemble neural spiking activity using simultaneous-event multivariate point-process models.

    PubMed

    Ba, Demba; Temereanca, Simona; Brown, Emery N

    2014-01-01

    Understanding how ensembles of neurons represent and transmit information in the patterns of their joint spiking activity is a fundamental question in computational neuroscience. At present, analyses of spiking activity from neuronal ensembles are limited because multivariate point process (MPP) models cannot represent simultaneous occurrences of spike events at an arbitrarily small time resolution. Solo recently reported a simultaneous-event multivariate point process (SEMPP) model to correct this key limitation. In this paper, we show how Solo's discrete-time formulation of the SEMPP model can be efficiently fit to ensemble neural spiking activity using a multinomial generalized linear model (mGLM). Unlike existing approximate procedures for fitting the discrete-time SEMPP model, the mGLM is an exact algorithm. The MPP time-rescaling theorem can be used to assess model goodness-of-fit. We also derive a new marked point-process (MkPP) representation of the SEMPP model that leads to new thinning and time-rescaling algorithms for simulating an SEMPP stochastic process. These algorithms are much simpler than multivariate extensions of algorithms for simulating a univariate point process, and could not be arrived at without the MkPP representation. We illustrate the versatility of the SEMPP model by analyzing neural spiking activity from pairs of simultaneously-recorded rat thalamic neurons stimulated by periodic whisker deflections, and by simulating SEMPP data. In the data analysis example, the SEMPP model demonstrates that whisker motion significantly modulates simultaneous spiking activity at the 1 ms time scale and that the stimulus effect is more than one order of magnitude greater for simultaneous activity compared with non-simultaneous activity. Together, the mGLM, the MPP time-rescaling theorem and the MkPP representation of the SEMPP model offer a theoretically sound, practical tool for measuring joint spiking propensity in a neuronal ensemble.
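
    The thinning idea behind the simulation algorithms can be illustrated on a plain inhomogeneous Poisson process (a minimal sketch, not the paper's SEMPP algorithm; the sinusoidal rate loosely mimicking periodic whisker deflection is invented for illustration):

```python
import math
import random

def thin_poisson(rate_fn, rate_max, t_end, rng):
    """Lewis-Shedler thinning: simulate an inhomogeneous Poisson process
    with intensity rate_fn(t) <= rate_max on [0, t_end]."""
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)            # candidate from dominating process
        if t > t_end:
            return events
        if rng.random() < rate_fn(t) / rate_max:  # accept with probability ratio
            events.append(t)

rng = random.Random(7)
# sinusoidally modulated rate (spikes/s), peak 20 Hz, over a 5 s window
rate = lambda t: 20.0 * (1.0 + math.sin(2 * math.pi * t)) / 2.0
spikes = thin_poisson(rate, 20.0, 5.0, rng)
```

    The accepted candidates are sorted by construction; the marked-point-process view in the paper extends this acceptance step to carry a mark identifying which neurons fired.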

  15. An Open Source Geovisual Analytics Toolbox for Multivariate Spatio-Temporal Data in Environmental Change Modelling

    NASA Astrophysics Data System (ADS)

    Bernasocchi, M.; Coltekin, A.; Gruber, S.

    2012-07-01

    In environmental change studies, often multiple variables are measured or modelled, and temporal information is essential for the task. These multivariate geographic time-series datasets are often big and difficult to analyse. While many established methods such as PCP (parallel coordinate plots), STC (space-time cubes), scatter-plots and multiple (linked) visualisations help provide more information, we observe that most of the common geovisual analytics suites do not include three-dimensional (3D) visualisations. However, in many environmental studies, we hypothesize that the addition of 3D terrain visualisations along with appropriate data plots and two-dimensional views can help analysts interpret the spatial relevance of the data. To test our ideas, we conceptualize, develop, implement and evaluate a geovisual analytics toolbox in a user-centred manner. The conceptualization of the tool is based on concrete user needs that were identified and collected during informal brainstorming sessions and in a structured focus group session prior to the development. The design process, therefore, is based on a combination of user-centred design with a requirement analysis and agile development. Based on the findings from this phase, the toolbox was designed to have a modular structure and was built on the open source geographic information system (GIS) Quantum GIS (QGIS), thus benefiting from existing GIS functionality. The modules include a globe view for 3D terrain visualisation (OSGEarth), a scattergram, a time vs. value plot, and a 3D helix visualisation as well as the possibility to view the raw data. The visualisation frame allows real-time linking of these representations. After the design and development stage, a case study was created featuring data from Zermatt valley and the toolbox was evaluated based on expert interviews. Analysts performed multiple spatial and temporal tasks with the case study using the toolbox.

  16. Modeling of an Adjustable Beam Solid State Light Project

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2015-01-01

    This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax Optics Studio. The variable beam light source would be designed to generate flood, spot, and directional beam patterns, while maintaining the same average power usage. The optical model would demonstrate the possibility of such a light source and its ability to address several issues: commonality of design, human task variability, and light source design process improvements. An adaptive lighting solution that utilizes the same electronics footprint and power constraints while addressing variability of lighting needed for the range of exploration tasks can save costs and allow for the development of common avionics for lighting controls.

  17. Spherical Model Integrating Academic Competence with Social Adjustment and Psychopathology.

    ERIC Educational Resources Information Center

    Schaefer, Earl S.; And Others

    This study replicates and elaborates a three-dimensional, spherical model that integrates research findings concerning social and emotional behavior, psychopathology, and academic competence. Kindergarten teachers completed an extensive set of rating scales on 100 children, including the Classroom Behavior Inventory and the Child Adaptive Behavior…

  18. New nonlinear multivariable model shows the relationship between central corneal thickness and HRTII topographic parameters in glaucoma patients

    PubMed Central

    Kourkoutas, Dimitrios; Georgopoulos, Gerasimos; Maragos, Antonios; Apostolakis, Ioannis; Tsekouras, George; Karanasiou, Irene S; Papaconstantinou, Dimitrios; Iliakis, Evaggelos; Moschos, Michael

    2009-01-01

    Purpose: In this paper a new nonlinear multivariable regression method is presented in order to investigate the relationship between the central corneal thickness (CCT) and the Heidelberg Retina Tomograph (HRTII) optic nerve head (ONH) topographic measurements, in patients with established glaucoma. Methods: Forty nine eyes of 49 patients with glaucoma were included in this study. Inclusion criteria were patients with (a) HRT II ONH imaging of good quality (SD < 30 μm), (b) reliable Humphrey visual field tests (30-2 program), and (c) bilateral CCT measurements with ultrasonic contact pachymetry. Patients were classified as glaucomatous based on visual field and/or ONH damage. The relationship between CCT and topographic parameters was analyzed by using the new nonlinear multivariable regression model. Results: In the entire group, CCT was 549.78 ± 33.08 μm (range: 484–636 μm); intraocular pressure (IOP) was 16.4 ± 2.67 mmHg (range: 11–23 mmHg); MD was −3.80 ± 4.97 dB (range: 4.04 to −20.4 dB); refraction was −0.78 ± 2.46 D (range: −6.0 D to +3.0 D). The new nonlinear multivariable regression model we used indicated that CCT was significantly related (R2 = 0.227, p < 0.01) with rim volume nasally and type of diagnosis. Conclusions: By using the new nonlinear multivariable regression model, in patients with established glaucoma, our data showed a statistically significant correlation between CCT and HRTII ONH structural measurements. PMID:19668584

  19. Use of multivariate calibration models based on UV-Vis spectra for seawater quality monitoring in Tianjin Bohai Bay, China.

    PubMed

    Liu, Xianhua; Wang, Lili

    2015-01-01

    A series of ultraviolet-visible (UV-Vis) spectra from seawater samples collected from sites along the coastline of Tianjin Bohai Bay in China were subjected to multivariate partial least squares (PLS) regression analysis. Calibration models were developed for monitoring chemical oxygen demand (COD) and concentrations of total organic carbon (TOC). Three different PLS models were developed using the spectra from raw samples (Model-1), diluted samples (Model-2), and diluted and raw samples combined (Model-3). Experimental results showed that: (i) possible nonlinearities in the signal concentration relationships were well accounted for by the multivariate PLS model; (ii) the predicted values of COD and TOC fit the analytical values well; the high correlation coefficients and small root mean squared error of cross-validation (RMSECV) showed that this method can be used for seawater quality monitoring; and (iii) compared with Model-1 and Model-2, Model-3 had the highest coefficient of determination (R2) and the lowest number of latent variables. This latter finding suggests that only large data sets that include data representing different combinations of conditions (i.e., various seawater matrices) will produce stable site-specific regressions. The results of this study illustrate the effectiveness of the proposed method and its potential for use as a seawater quality monitoring technique.

  20. Inferring Instantaneous, Multivariate and Nonlinear Sensitivities for the Analysis of Feedback Processes in a Dynamical System: Lorenz Model Case Study

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system based on a neural network modeling of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedbacks processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model where the correct sensitivities can be evaluated analytically.
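
    The kind of instantaneous sensitivity the method targets can be approximated directly by finite differences on the Lorenz model itself (a sketch of the quantity being estimated, not of the neural-network estimator; the Euler step and step sizes are illustrative choices):

```python
def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def step(state, dt=0.01):
    """One forward-Euler step."""
    dx = lorenz_rhs(state)
    return tuple(s + dt * d for s, d in zip(state, dx))

def sensitivities(state, eps=1e-6):
    """Instantaneous sensitivities d(step_i)/d(state_j) by central differences;
    jac[j][i] is the sensitivity of output component i to input component j."""
    jac = []
    base = list(state)
    for j in range(3):
        hi = base[:]; hi[j] += eps
        lo = base[:]; lo[j] -= eps
        fp, fm = step(tuple(hi)), step(tuple(lo))
        jac.append([(fp[i] - fm[i]) / (2 * eps) for i in range(3)])
    return jac

J = sensitivities((1.0, 1.0, 1.0))
```

    For this step the x-to-x sensitivity is 1 - dt*sigma = 0.9 and the y-to-x sensitivity is dt*sigma = 0.1, which the finite differences recover; the neural-network approach estimates the same Jacobian entries from observed variations rather than from the known equations.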

  1. Estimating multivariate response surface model with data outliers, case study in enhancing surface layer properties of an aircraft aluminium alloy

    NASA Astrophysics Data System (ADS)

    Widodo, Edy; Kariyam

    2017-03-01

    Response Surface Methodology (RSM) is used to determine the input variable settings that achieve the optimal compromise among the response variables. There are three primary steps in an RSM problem: data collection, modelling, and optimization. This study focuses on the establishment of response surface models, under the assumption that the collected data are correct. Response surface model parameters are usually estimated by OLS; however, this method is highly sensitive to outliers. Outliers can generate substantial residuals and often distort the estimated model. The resulting estimators can be biased and can lead to errors in locating the optimal point, so that the main purpose of RSM is not achieved. Moreover, in practice the collected data often contain several response variables for a common set of independent variables; treating each response separately and applying single-response procedures can lead to wrong interpretations, so a model for the multi-response case is needed. A multivariate response surface model that is resistant to outliers is therefore required. As an alternative, this study discusses M-estimation of the parameters of multivariate response surface models containing outliers. As an illustration, a case study is presented on experiments enhancing the surface layer of an aircraft aluminium alloy by shot peening.
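
    As a minimal illustration of why M-estimation resists outliers (univariate rather than the paper's multivariate response surface, and using Tukey bisquare weights as one common choice of M-estimator; the data are invented):

```python
def bisquare_weight(r, c=4.685):
    """Tukey bisquare weight: smoothly downweights residuals, zeroing gross outliers."""
    if abs(r) >= c:
        return 0.0
    u = (r / c) ** 2
    return (1.0 - u) ** 2

def m_estimate_line(xs, ys, iters=30):
    """Iteratively reweighted least squares fit of y = a + b*x."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        w = [bisquare_weight(y - (a + b * x)) for x, y in zip(xs, ys)]
        sw = sum(w)
        mx = sum(wi * x for wi, x in zip(w, xs)) / sw
        my = sum(wi * y for wi, y in zip(w, ys)) / sw
        sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
        sxy = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
        b = sxy / sxx
        a = my - b * mx
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 1.0, 2.1, 2.9, 4.0, 30.0]   # last response is a gross outlier
a, b = m_estimate_line(xs, ys)
```

    OLS on these data would be dragged far from slope 1 by the last point; the bisquare weight drives that point's influence to zero and the fit recovers the inlier trend.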

  2. An evaluation of bias in propensity score-adjusted non-linear regression models.

    PubMed

    Wan, Fei; Mitra, Nandita

    2016-04-19

    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
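
    The non-collapsibility at the heart of the argument can be shown by direct computation, with no estimation at all (hypothetical coefficients; the covariate Z is independent of treatment X, so there is no confounding, yet the marginal odds ratio differs from the conditional one):

```python
import math

def expit(v):
    return 1.0 / (1.0 + math.exp(-v))

def odds(p):
    return p / (1.0 - p)

b0, b1, b2 = -1.0, 1.0, 2.0   # conditional log-odds ratio for X is b1 in both Z strata
p_z = 0.5                     # Z ~ Bernoulli(0.5), independent of X

def marginal_p(x):
    """P(Y=1 | X=x), averaging the conditional model over Z."""
    return (1 - p_z) * expit(b0 + b1 * x) + p_z * expit(b0 + b1 * x + b2)

conditional_or = math.exp(b1)                               # e ~= 2.718
marginal_or = odds(marginal_p(1)) / odds(marginal_p(0))     # ~= 2.23
```

    The marginal odds ratio is attenuated toward 1 relative to the conditional one even without confounding, which is exactly why omitting the remainder term in a non-collapsible model biases the estimate.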

  3. Comparison of the Properties of Regression and Categorical Risk-Adjustment Models

    PubMed Central

    Averill, Richard F.; Muldoon, John H.; Hughes, John S.

    2016-01-01

    Clinical risk-adjustment, the ability to standardize the comparison of individuals with different health needs, is based upon 2 main alternative approaches: regression models and clinical categorical models. In this article, we examine the impact of the differences in the way these models are constructed on end user applications. PMID:26945302

  4. Using Wherry's Adjusted R Squared and Mallow's C (p) for Model Selection from All Possible Regressions.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Mills, Jamie; Keselman, Harvey

    2000-01-01

    Evaluated the use of Mallow's C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…
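
    The two selection statistics themselves are simple textbook formulas (the numbers below are invented for illustration):

```python
def adjusted_r2(r2, n, p):
    """Wherry's adjusted R^2 for n observations and p predictors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

def mallows_cp(sse_p, mse_full, n, p):
    """Mallows' C_p: SSE of the p-predictor submodel scaled by the full
    model's MSE, penalized for the number of fitted parameters."""
    return sse_p / mse_full - (n - 2 * (p + 1))

r2a = adjusted_r2(0.80, n=30, p=5)                      # ~= 0.758
cp = mallows_cp(sse_p=120.0, mse_full=4.0, n=30, p=5)   # = 12.0
```

    A submodel with C_p close to p + 1 (here 6) is taken as approximately unbiased; C_p = 12 would flag this candidate as underspecified, which is the kind of verdict the study compares against stepwise selection.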

  5. The development of a risk-adjusted capitation payment system: the Maryland Medicaid model.

    PubMed

    Weiner, J P; Tucker, A M; Collins, A M; Fakhraei, H; Lieberman, R; Abrams, C; Trapnell, G R; Folkemer, J G

    1998-10-01

    This article describes the risk-adjusted payment methodology employed by the Maryland Medicaid program to pay managed care organizations. It also presents an empirical simulation analysis using claims data from 230,000 Maryland Medicaid recipients. This simulation suggests that the new payment model will help adjust for adverse or favorable selection. The article is intended for a wide audience, including state and national policy makers concerned with the design of managed care Medicaid programs and actuaries, analysts, and researchers involved in the design and implementation of risk-adjusted capitation payment systems.

  6. Multivariate class modeling techniques applied to multielement analysis for the verification of the geographical origin of chili pepper.

    PubMed

    Naccarato, Attilio; Furia, Emilia; Sindona, Giovanni; Tagarelli, Antonio

    2016-09-01

    Four class-modeling techniques (soft independent modeling of class analogy (SIMCA), unequal dispersed classes (UNEQ), potential functions (PF), and multivariate range modeling (MRM)) were applied to multielement distribution to build chemometric models able to authenticate chili pepper samples grown in Calabria with respect to those grown outside Calabria. The multivariate techniques were applied by considering both all the variables (32 elements: Al, As, Ba, Ca, Cd, Ce, Co, Cr, Cs, Cu, Dy, Fe, Ga, La, Li, Mg, Mn, Na, Nd, Ni, Pb, Pr, Rb, Sc, Se, Sr, Tl, Tm, V, Y, Yb, Zn) and variables selected by means of stepwise linear discriminant analysis (S-LDA). In the first case, satisfactory and comparable results in terms of CV efficiency were obtained with SIMCA and MRM (82.3 and 83.2%, respectively), whereas MRM performed better than SIMCA in terms of forced model efficiency (96.5%). Selecting variables by S-LDA permitted building models with, in general, higher efficiency. MRM again provided the best results for CV efficiency (87.7%, with an effective balance of sensitivity and specificity) as well as forced model efficiency (96.5%).
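
    Of the four techniques, multivariate range modeling admits a particularly compact sketch (a toy two-element version, assuming MRM assigns a sample to a class when every variable falls within that class's training range; the concentrations are invented):

```python
def fit_mrm(samples):
    """Per-variable [min, max] ranges from training samples of one class."""
    dims = len(samples[0])
    return [(min(s[d] for s in samples), max(s[d] for s in samples))
            for d in range(dims)]

def accepts(ranges, x, tol=0.0):
    """Accept a sample iff every variable lies inside the (optionally
    widened) training range of the class."""
    return all(lo - tol <= v <= hi + tol for (lo, hi), v in zip(ranges, x))

# toy concentrations for two of the measured elements in the target class
calabria = [(2.1, 40.0), (2.4, 38.5), (1.9, 42.0)]
ranges = fit_mrm(calabria)
inside = accepts(ranges, (2.2, 39.0))    # within both ranges -> accepted
outside = accepts(ranges, (5.0, 39.0))   # first element out of range -> rejected
```

    Sensitivity/specificity trade-offs in MRM come from widening or tightening the ranges, which is what the tol parameter stands in for here.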

  7. Comparison of procedures to assess non-linear and time-varying effects in multivariable models for survival data.

    PubMed

    Buchholz, Anika; Sauerbrei, Willi

    2011-03-01

    The focus of many medical applications is to model the impact of several factors on time to an event. A standard approach for such analyses is the Cox proportional hazards model. It assumes that the factors act linearly on the log hazard function (linearity assumption) and that their effects are constant over time (proportional hazards (PH) assumption). Variable selection is often required to specify a more parsimonious model aiming to include only variables with an influence on the outcome. As follow-up increases the effect of a variable often gets weaker, which means that it varies in time. However, spurious time-varying effects may also be introduced by mismodelling other parts of the multivariable model, such as omission of an important covariate or an incorrect functional form of a continuous covariate. These issues interact. To check whether the effect of a variable varies in time several tests for non-PH have been proposed. However, they are not sufficient to derive a model, as appropriate modelling of the shape of time-varying effects is required. In three examples we will compare five recently published strategies to assess whether and how the effects of covariates from a multivariable model vary in time. For practical use we will give some recommendations.

  8. Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel

    PubMed Central

    Tian, Lili; Bao, Hong; Wang, Meng; Duan, Xuechao

    2016-01-01

    With the aim of developing multiple input and multiple output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. Firstly, the modal information from the finite element model of the structure of the antenna panel is extracted, and then the mathematical model is established with the Hamilton principle; Secondly, the discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified. PMID:27706076
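
    The discrete LQR step can be sketched in the scalar case by iterating the Riccati difference equation to a fixed point (the system and weights here are illustrative, not the antenna panel model):

```python
def dlqr_scalar(a, b, q, r, iters=200):
    """Discrete-time LQR gain for the scalar system x' = a*x + b*u,
    found by iterating the Riccati difference equation to convergence."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * b * p)             # feedback gain u = -k*x
        p = q + a * a * p - (a * p * b) ** 2 / (r + b * b * p)
    return k, p

# unstable open-loop system (|a| > 1), equal state/control weights
k, p = dlqr_scalar(a=1.1, b=0.5, q=1.0, r=1.0)
closed_loop = 1.1 - 0.5 * k   # closed-loop pole a - b*k
```

    The open-loop pole at 1.1 is unstable; the converged gain places the closed-loop pole inside the unit circle, which is the property the panel controller needs for each actuated mode.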

  9. Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel.

    PubMed

    Tian, Lili; Bao, Hong; Wang, Meng; Duan, Xuechao

    2016-10-01

    With the aim of developing multiple input and multiple output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. Firstly, the modal information from the finite element model of the structure of the antenna panel is extracted, and then the mathematical model is established with the Hamilton principle; Secondly, the discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified.

  10. On the hydrologic adjustment of climate-model projections: The potential pitfall of potential evapotranspiration

    USGS Publications Warehouse

    Milly, P.C.D.; Dunne, K.A.

    2011-01-01

    Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median −11%) caused by the hydrologic model's apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen-Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors' findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water.
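
    Why a temperature-based formula amplifies PET change can be seen directly from its form: in a Jensen-Haise-type expression the entire warming signal passes through the temperature factor (the coefficients below are hypothetical placeholders, not the paper's calibration):

```python
def jensen_haise_pet(t_mean, solar, ct=0.025, tx=-3.0):
    """Temperature-based PET of the form ct * (T - Tx) * Rs.
    ct and tx are hypothetical calibration constants for illustration."""
    return ct * (t_mean - tx) * solar

base = jensen_haise_pet(20.0, 25.0)       # present-day mean temperature
warm = jensen_haise_pet(22.0, 25.0)       # +2 C warming, radiation unchanged
relative_change = (warm - base) / base    # = 2 / (20 - (-3)) ~= 8.7%
```

    The fractional PET change is simply dT / (T - Tx), regardless of the energy actually available for evaporation, which is the mechanism behind the factor-of-3 discrepancy with energy-budget-tracking climate models that the authors report.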

  11. Cross-correlations and joint gaussianity in multivariate level crossing models.

    PubMed

    Di Bernardino, Elena; León, José; Tchumatchenko, Tatjana

    2014-04-17

    A variety of phenomena in physical and biological sciences can be mathematically understood by considering the statistical properties of level crossings of random Gaussian processes. Notably, a growing number of these phenomena demand a consideration of correlated level crossings emerging from multiple correlated processes. While many theoretical results have been obtained in the last decades for individual Gaussian level-crossing processes, few results are available for multivariate, jointly correlated threshold crossings. Here, we address bivariate upward crossing processes and derive the corresponding bivariate Central Limit Theorem as well as provide closed-form expressions for their joint level-crossing correlations.
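
    Counting upward crossings, the basic object of this theory, is straightforward on a sampled path (a deterministic sketch; the results in the paper concern random Gaussian paths and their cross-correlations):

```python
import math

def upcrossings(xs, level):
    """Count upward crossings of `level` in a sampled path: positions
    where the path is at or below the level and then strictly above it."""
    return sum(1 for a, b in zip(xs, xs[1:]) if a <= level < b)

# deterministic test path: one full sine period crosses any
# level strictly between -1 and 1 exactly once on the way up
n = 1000
path = [math.sin(2 * math.pi * i / n) for i in range(n + 1)]
count = upcrossings(path, 0.5)
```

    For correlated bivariate processes, the quantities of interest are joint counts of such events in two paths, whose correlations the paper derives in closed form.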

  12. Cross-Correlations and Joint Gaussianity in Multivariate Level Crossing Models

    PubMed Central

    2014-01-01

    A variety of phenomena in physical and biological sciences can be mathematically understood by considering the statistical properties of level crossings of random Gaussian processes. Notably, a growing number of these phenomena demand a consideration of correlated level crossings emerging from multiple correlated processes. While many theoretical results have been obtained in the last decades for individual Gaussian level-crossing processes, few results are available for multivariate, jointly correlated threshold crossings. Here, we address bivariate upward crossing processes and derive the corresponding bivariate Central Limit Theorem as well as provide closed-form expressions for their joint level-crossing correlations. PMID:24742344

  13. A multivariate linear regression model for predicting children's blood lead levels based on soil lead levels: A study at four Superfund sites

    SciTech Connect

    Lewin, M.D.; Sarasua, S.; Jones, P.A. . Div. of Health Studies)

    1999-07-01

    For the purpose of examining the association between blood lead levels and household-specific soil lead levels, the authors used a multivariate linear regression model to find a slope factor relating soil lead levels to blood lead levels. They used previously collected data from the Agency for Toxic Substances and Disease Registry's (ATSDR's) multisite lead and cadmium study. The data included blood lead measurements of 1,015 children aged 6--71 months, and corresponding household-specific environmental samples. The environmental samples included lead in soil, house dust, interior paint, and tap water. After adjusting for income, education of the parents, presence of a smoker in the household, sex, and dust lead, and using a double log transformation, they found a slope factor of 0.1388 with a 95% confidence interval of 0.09--0.19 for the dose-response relationship between the natural log of the soil lead level and the natural log of the blood lead level. The predicted blood lead level corresponding to a soil lead level of 500 mg/kg was 5.99 μg/dL with a 95% prediction interval of 2.08--17.29. Predicted values and their corresponding prediction intervals varied by covariate level. The model shows that increased soil lead level is associated with elevated blood lead levels in children, but that predictions based on this regression model are subject to high levels of uncertainty and variability.
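
    The log-log dose-response form can be exercised with the reported numbers (the intercept below is backed out from the published point prediction purely for illustration; the fitted model also includes covariates, so this is not the authors' full equation):

```python
import math

SLOPE = 0.1388   # reported slope on the log-log scale

def predict_blood_lead(soil_mg_kg, intercept):
    """ln(blood lead) = intercept + SLOPE * ln(soil lead), back-transformed."""
    return math.exp(intercept + SLOPE * math.log(soil_mg_kg))

# intercept implied by the reported prediction of 5.99 at 500 mg/kg
intercept = math.log(5.99) - SLOPE * math.log(500.0)

doubled = predict_blood_lead(1000.0, intercept)
ratio = doubled / 5.99    # = 2 ** SLOPE, about a 10% increase
```

    On the log-log scale the slope acts as an elasticity: doubling soil lead multiplies predicted blood lead by 2^0.1388, roughly 1.10, independent of the starting level.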

  14. Multivariate bubbles and antibubbles

    NASA Astrophysics Data System (ADS)

    Fry, John

    2014-08-01

    In this paper we develop models for multivariate financial bubbles and antibubbles based on statistical physics. In particular, we extend a rich set of univariate models to higher dimensions. Changes in market regime can be explicitly shown to represent a phase transition from random to deterministic behaviour in prices. Moreover, our multivariate models are able to capture some of the contagious effects that occur during such episodes. We are able to show that declining lending quality helped fuel a bubble in the US stock market prior to 2008. Further, our approach offers interesting insights into the spatial development of UK house prices.

  15. Estimation of reference evapotranspiration using multivariate fractional polynomial, Bayesian regression, and robust regression models in three arid environments

    NASA Astrophysics Data System (ADS)

    Khoshravesh, Mojtaba; Sefidkouhi, Mohammad Ali Gholami; Valipour, Mohammad

    2015-12-01

    The proper evaluation of evapotranspiration is essential in food security investigation, farm management, pollution detection, irrigation scheduling, nutrient flows, carbon balance as well as hydrologic modeling, especially in arid environments. To achieve sustainable development and to ensure water supply, especially in arid environments, irrigation experts need tools to estimate reference evapotranspiration on a large scale. In this study, the monthly reference evapotranspiration was estimated by three different regression models including the multivariate fractional polynomial (MFP), robust regression, and Bayesian regression in Ardestan, Esfahan, and Kashan. The results were compared with Food and Agriculture Organization (FAO)-Penman-Monteith (FAO-PM) to select the best model. The results show that at a monthly scale, all models provided a closer agreement with the calculated values for FAO-PM (R2 > 0.95 and RMSE < 12.07 mm month-1). However, the MFP model gives better estimates than the other two models for estimating reference evapotranspiration at all stations.

  16. On the Hydrologic Adjustment of Climate-Model Projections: The Potential Pitfall of Potential Evapotranspiration

    USGS Publications Warehouse

    Milly, Paul C.D.; Dunne, Krista A.

    2011-01-01

    Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median -11%) caused by the hydrologic model’s apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen–Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors’ findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water.

  17. Cerebral Cortical Folding Analysis with Multivariate Modeling and Testing: Studies on Gender Differences and Neonatal Development

    PubMed Central

    Awate, Suyash P.; Yushkevich, Paul A.; Song, Zhuang; Licht, Daniel J.; Gee, James C.

    2010-01-01

    This paper presents a novel statistical framework for human cortical folding pattern analysis that relies on a rich multivariate descriptor of folding patterns in a region of interest (ROI). The ROI-based approach avoids problems faced by spatial-normalization-based approaches stemming from the deficiency of homologous features between typical human cerebral cortices. Unlike typical ROI-based methods that summarize folding by a single number, the proposed descriptor unifies multiple characteristics of surface geometry in a high-dimensional space (hundreds/thousands of dimensions). In this way, the proposed framework couples the reliability of ROI-based analysis with the richness of the novel cortical folding pattern descriptor. This paper presents new mathematical insights into the relationship of cortical complexity with intra-cranial volume (ICV). It shows that conventional complexity descriptors implicitly handle ICV differences in different ways, thereby lending different meanings to “complexity”. The paper proposes a new application of a non-parametric permutation-based approach for rigorous statistical hypothesis testing with multivariate cortical descriptors. The paper presents two cross-sectional studies applying the proposed framework to study folding differences between genders and in neonates with complex congenital heart disease. Both studies lead to novel interesting results. PMID:20630489
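
    The non-parametric permutation approach can be sketched for a univariate summary (the paper applies it to the high-dimensional descriptors; the groups and values here are invented):

```python
import random

def perm_test_mean_diff(xs, ys, n_perm=2000, rng=None):
    """Two-sided permutation p-value for a difference in group means:
    relabel the pooled observations many times and count how often the
    permuted statistic is at least as extreme as the observed one."""
    rng = rng or random.Random(0)
    observed = abs(sum(xs) / len(xs) - sum(ys) / len(ys))
    pooled = list(xs) + list(ys)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(xs)], pooled[len(xs):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction

group1 = [3.1, 3.4, 2.9, 3.6, 3.2]   # e.g. a folding summary in group A
group2 = [4.8, 5.1, 4.6, 5.3, 4.9]   # the same summary in group B
p = perm_test_mean_diff(group1, group2)
```

    With a multivariate descriptor, the scalar mean difference is replaced by a distance between group summaries in the high-dimensional descriptor space, but the relabel-and-count logic is identical.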

  18. Interpreting support vector machine models for multivariate group wise analysis in neuroimaging.

    PubMed

    Gaonkar, Bilwaj; Shinohara, Russell T; Davatzikos, Christos

    2015-08-01

Machine learning based classification algorithms like support vector machines (SVMs) have shown great promise for turning high-dimensional neuroimaging data into clinically useful decision criteria. However, tracing imaging-based patterns that contribute significantly to classifier decisions remains an open problem. This is an issue of critical importance in imaging studies seeking to determine which anatomical or physiological imaging features contribute to the classifier's decision, thereby allowing users to critically evaluate the findings of such machine learning methods and to understand disease mechanisms. The majority of published work addresses the question of statistical inference for support vector classification using permutation tests based on SVM weight vectors. Such permutation testing ignores the SVM margin, which is critical in SVM theory. In this work we emphasize the use of a statistic that explicitly accounts for the SVM margin and show that the null distributions associated with this statistic are asymptotically normal. Further, our experiments show that this statistic is far less conservative than weight-based permutation tests, yet specific enough to tease out multivariate patterns in the data. Thus, we can better understand the multivariate patterns that the SVM uses for neuroimaging-based classification.
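The label-permutation scheme that this line of work builds on can be sketched generically: refit the statistic under shuffled group labels to approximate its null distribution. The statistic below is a plain mean difference rather than an SVM weight or margin quantity, so it is an illustrative stand-in, not the authors' method:

```python
import random

# Generic label-permutation test: shuffle group labels, recompute the
# statistic, and compare the observed value against the resulting null
# distribution. Here the statistic is a simple absolute mean difference
# (an illustrative stand-in for SVM weight- or margin-based statistics).

def permutation_pvalue(group_a, group_b, n_perm=10000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if stat >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction avoids p = 0

p = permutation_pvalue([2.9, 3.1, 3.0, 3.2], [1.0, 1.2, 0.9, 1.1], n_perm=2000)
print(p < 0.05)   # clearly separated groups -> small p-value
```

The conservativeness the abstract describes arises because a weight-based null distribution can be wide relative to the observed statistic; a margin-aware statistic concentrates the null differently.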

  19. Interpreting support vector machine models for multivariate group wise analysis in neuroimaging

    PubMed Central

    Gaonkar, Bilwaj; Shinohara, Russell T; Davatzikos, Christos

    2015-01-01

Machine learning based classification algorithms like support vector machines (SVMs) have shown great promise for turning high-dimensional neuroimaging data into clinically useful decision criteria. However, tracing imaging-based patterns that contribute significantly to classifier decisions remains an open problem. This is an issue of critical importance in imaging studies seeking to determine which anatomical or physiological imaging features contribute to the classifier’s decision, thereby allowing users to critically evaluate the findings of such machine learning methods and to understand disease mechanisms. The majority of published work addresses the question of statistical inference for support vector classification using permutation tests based on SVM weight vectors. Such permutation testing ignores the SVM margin, which is critical in SVM theory. In this work we emphasize the use of a statistic that explicitly accounts for the SVM margin and show that the null distributions associated with this statistic are asymptotically normal. Further, our experiments show that this statistic is far less conservative than weight-based permutation tests, yet specific enough to tease out multivariate patterns in the data. Thus, we can better understand the multivariate patterns that the SVM uses for neuroimaging-based classification. PMID:26210913

  20. Combining regional estimation and historical floods: A multivariate semiparametric peaks-over-threshold model with censored data

    NASA Astrophysics Data System (ADS)

    Sabourin, Anne; Renard, Benjamin

    2015-12-01

The estimation of extreme flood quantiles is challenging due to the relative scarcity of extreme data compared to typical target return periods. Several approaches have been developed over the years to face this challenge, including regional estimation and the use of historical flood data. This paper investigates the combination of both approaches using a multivariate peaks-over-threshold model that allows jointly estimating the intersite dependence structure and the marginal distributions at each site. The joint distribution of extremes at several sites is constructed using a semiparametric Dirichlet Mixture model. The existence of partially missing and censored observations (historical data) is accounted for within a data augmentation scheme. This model is applied to a case study involving four catchments in Southern France, for which historical data are available since 1604. The comparison of marginal estimates from four versions of the model (with or without regionalizing the shape parameter; using or ignoring historical floods) highlights significant differences in terms of return level estimates. Moreover, the availability of historical data on several nearby catchments allows investigating the asymptotic dependence properties of extreme floods. Catchments display a significant amount of asymptotic dependence, calling for adapted multivariate statistical models.

  1. A Poisson-lognormal conditional-autoregressive model for multivariate spatial analysis of pedestrian crash counts across neighborhoods.

    PubMed

    Wang, Yiyi; Kockelman, Kara M

    2013-11-01

    This work examines the relationship between 3-year pedestrian crash counts across Census tracts in Austin, Texas, and various land use, network, and demographic attributes, such as land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities (by roadway class), and population and employment densities (by type). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference. Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (such as lighting conditions and local sight obstructions), along with spatially lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates.

  2. Assessment and indirect adjustment for confounding by smoking in cohort studies using relative hazards models.

    PubMed

    Richardson, David B; Laurier, Dominique; Schubauer-Berigan, Mary K; Tchetgen Tchetgen, Eric; Cole, Stephen R

    2014-11-01

    Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950-2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950-2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer--a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented.

  3. A model of parental representations, second individuation, and psychological adjustment in late adolescence.

    PubMed

    Boles, S A

    1999-04-01

This study examined the role that mental representations and the second individuation process play in adjustment during late adolescence. Data from participants between the ages of 18 and 22 were used to test a theoretical model exploring the various relationships among the following latent variables: Parental Representations, Psychological Differentiation, Psychological Dependence, Positive Adjustment, and Maladjustment. The results indicated that the quality of parental representations facilitates the second individuation process, which in turn facilitates psychological adjustment in late adolescence. Furthermore, the results indicated that the second individuation process mediates the influence that the quality of parental representations has on psychological adjustment in late adolescence. These findings are discussed in light of previous research in this area, and clinical implications and suggestions for future research are offered.

  4. Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States

    ERIC Educational Resources Information Center

    Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.

    2007-01-01

    Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…

  5. A Model of Divorce Adjustment for Use in Family Service Agencies.

    ERIC Educational Resources Information Center

    Faust, Ruth Griffith

    1987-01-01

    Presents a combined educationally and therapeutically oriented model of treatment to (1) control and lessen disruptive experiences associated with divorce; (2) enable individuals to improve their skill in coping with adjustment reactions to divorce; and (3) modify the pressures and response of single parenthood. Describes the model's four-session…

  6. A Monte Carlo Power Analysis of Traditional Repeated Measures and Hierarchical Multivariate Linear Models in Longitudinal Data Analysis.

    PubMed

    Fang, Hua; Brooks, Gordon P; Rizzo, Maria L; Espy, Kimberly A; Barcikowski, Robert S

    2008-01-01

The power properties of traditional repeated measures and hierarchical linear models have not been clearly determined in the balanced design for longitudinal studies in the current literature. A Monte Carlo power analysis of traditional repeated measures and hierarchical multivariate linear models is presented under three variance-covariance structures. Results suggest that traditional repeated measures have higher power than hierarchical linear models for main effects, but lower power for interaction effects. Significant power differences are also exhibited when power is compared across different covariance structures. Results also supplement more comprehensive empirical indexes for estimating model precision via bootstrap estimates and the approximate power for both main effects and interaction tests under standard model assumptions.
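The Monte Carlo power procedure referenced above follows a standard recipe: simulate many datasets under a known effect, apply the test, and report the rejection rate. A minimal sketch, with a two-sample z-test standing in for the repeated-measures and hierarchical models actually compared in the study:

```python
import random

# Minimal Monte Carlo power sketch: generate data under a known effect size,
# run the test on each simulated dataset, and report the fraction of
# rejections. A two-sample z-test with known unit variance is an illustrative
# stand-in for the models compared in the paper.

def simulated_power(effect, n_per_group, n_sims=2000, z_crit=1.96, seed=1):
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect, 1.0) for _ in range(n_per_group)]
        se = (2.0 / n_per_group) ** 0.5          # known unit variance
        z = (sum(b) / n_per_group - sum(a) / n_per_group) / se
        if abs(z) > z_crit:
            rejections += 1
    return rejections / n_sims

print(simulated_power(0.8, 30))   # large effect, decent n -> high power
print(simulated_power(0.0, 30))   # null effect -> rejection rate near alpha
```

Under the null effect the rejection rate estimates the Type I error rate, which is how such simulations also verify test calibration.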

  7. Introducing nonlinear, multivariate 'Predictor Surfaces' for quantitative modeling of chemical systems with higher-order, coupled predictor variables.

    PubMed

    Horton, Rebecca B; McConico, Morgan; Landry, Currie; Tran, Tho; Vogt, Frank

    2012-10-09

Innovations in chemometrics are required for studies of chemical systems which are governed by nonlinear responses to chemical parameters and/or interdependencies (coupling) among these parameters. Conventional and linear multivariate models have limited use for quantitative and qualitative investigations of such systems because they are based on the assumption that the measured data are simple superpositions of several input parameters. 'Predictor Surfaces' were developed for studies of more chemically complex systems such as biological materials in order to ensure accurate quantitative analyses and proper chemical modeling for in-depth studies of such systems. Predictor Surfaces are based on approximating nonlinear multivariate model functions by multivariate Taylor expansions which inherently introduce the required coupled and higher-order predictor variables. As proof-of-principle for the Predictor Surfaces' capabilities, an application from environmental analytical chemistry was chosen. Microalgae cells are known to sensitively adapt to changes in environmental parameters such as pollution and/or nutrient availability and thus have potential as novel in situ sensors for environmental monitoring. These adaptations of the microalgae cells are reflected in their chemical signatures, which were then acquired by means of FT-IR spectroscopy. In this study, the concentrations of three nutrients, namely inorganic carbon and two nitrogen-containing ions, were chosen. Biological considerations predict that changes in nutrient availability produce a nonlinear response in the cells' biomass composition; it is also known that microalgae need certain nutrient mixes to thrive. The nonlinear Predictor Surfaces were demonstrated to be more accurate in predicting the values of these nutrients' concentrations than principal component regression. For qualitative chemical studies of biological systems, the Predictor Surfaces themselves are a novel tool as they visualize

  8. Objective classification of latent behavioral states in bio-logging data using multivariate-normal hidden Markov models.

    PubMed

    Phillips, Joe Scutt; Patterson, Toby A; Leroy, Bruno; Pilling, Graham M; Nicol, Simon J

    2015-07-01

Analysis of complex time-series data from ecological system studies requires quantitative tools for objective description and classification. These tools must take into account largely ignored problems of bias in manual classification, autocorrelation, and noise. Here we describe a method using existing estimation techniques for multivariate-normal hidden Markov models (HMMs) to develop such a classification. We use high-resolution behavioral data from bio-loggers attached to free-roaming pelagic tuna as an example. Observed patterns are assumed to be generated by an unseen Markov process that switches between several multivariate-normal distributions. Our approach is assessed in two parts. The first uses simulation experiments, from which the ability of the HMM to estimate known parameter values is examined using artificial time series of data consistent with hypotheses about pelagic predator foraging ecology. The second is the application to time series of continuous vertical movement data from yellowfin and bigeye tuna taken from tuna tagging experiments. These data were compressed into summary metrics capturing the variation of patterns in diving behavior and formed into a multivariate time series used to estimate an HMM. Each observation was associated with covariate information incorporating the effect of day and night on behavioral switching. Known parameter values were well recovered by the HMMs in our simulation experiments, resulting in mean correct classification rates of 90-97%, although some variance-covariance parameters were estimated less accurately. HMMs with two distinct behavioral states were selected for every time series of real tuna data, predicting a shallow warm state, which was similar across all individuals, and a deep colder state, which was more variable. Marked diurnal behavioral switching was predicted, consistent with many previous empirical studies on tuna. HMMs provide easily interpretable models for the objective classification of
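The state decoding at the heart of such an HMM classifier can be sketched with Viterbi decoding of a two-state model. The univariate Gaussian emissions and all parameter values below are illustrative assumptions (a "shallow/warm" vs "deep/cold" depth signal); the study uses multivariate-normal emissions with parameters estimated from the tuna data:

```python
import math

# Viterbi decoding for a two-state HMM with univariate Gaussian emissions.
# All parameters (state means, sigmas, sticky transition matrix) are made-up
# illustrative values, not estimates from the paper's data.

def gauss_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def viterbi(obs, means, sigmas, log_trans, log_init):
    n_states = len(means)
    # delta[s] = best log-probability of any state path ending in state s
    delta = [log_init[s] + gauss_logpdf(obs[0], means[s], sigmas[s])
             for s in range(n_states)]
    back = []
    for x in obs[1:]:
        prev, ptr, delta = delta[:], [], []
        for s in range(n_states):
            best = max(range(n_states), key=lambda r: prev[r] + log_trans[r][s])
            ptr.append(best)
            delta.append(prev[best] + log_trans[best][s]
                         + gauss_logpdf(x, means[s], sigmas[s]))
        back.append(ptr)
    # backtrack the most likely state path
    state = max(range(n_states), key=lambda s: delta[s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

lt = [[math.log(0.9), math.log(0.1)], [math.log(0.1), math.log(0.9)]]
depths = [5, 6, 4, 80, 95, 90, 7, 5]          # synthetic depth record (m)
states = viterbi(depths, means=[5.0, 90.0], sigmas=[3.0, 10.0],
                 log_trans=lt, log_init=[math.log(0.5)] * 2)
print(states)   # shallow (0) and deep (1) segments
```

In practice the emission parameters and transition matrix are themselves estimated (e.g., by EM), and covariates such as day/night can modulate the transition probabilities, as in the study.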

  9. FT-IR/ATR univariate and multivariate calibration models for in situ monitoring of sugars in complex microalgal culture media.

    PubMed

    Girard, Jean-Michel; Deschênes, Jean-Sébastien; Tremblay, Réjean; Gagnon, Jonathan

    2013-09-01

    The objective of this work is to develop a quick and simple method for the in situ monitoring of sugars in biological cultures. A new technology based on Attenuated Total Reflectance-Fourier Transform Infrared (FT-IR/ATR) spectroscopy in combination with an external light guiding fiber probe was tested, first to build predictive models from solutions of pure sugars, and secondly to use those models to monitor the sugars in the complex culture medium of mixotrophic microalgae. Quantification results from the univariate model were correlated with the total dissolved solids content (R(2)=0.74). A vector normalized multivariate model was used to proportionally quantify the different sugars present in the complex culture medium and showed a predictive accuracy of >90% for sugars representing >20% of the total. This method offers an alternative to conventional sugar monitoring assays and could be used at-line or on-line in commercial scale production systems.
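A univariate calibration of the kind described reduces to ordinary least squares on a single signal channel, then inversion of the fit to predict concentration in new samples. The absorbance and concentration values below are synthetic, not data from the study:

```python
# Sketch of univariate calibration: fit a straight line relating a single
# absorbance reading to known standard concentrations, then predict the
# concentration of an unknown sample. Values are synthetic illustrations.

def fit_line(x, y):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

conc = [0.0, 5.0, 10.0, 20.0]          # sugar standards (g/L)
absorb = [0.01, 0.26, 0.51, 1.01]      # peak absorbance (synthetic, linear)

# Fit absorbance -> concentration directly so prediction is a single evaluation
slope, intercept = fit_line(absorb, conc)
print(round(slope * 0.76 + intercept, 1))   # unknown sample at A = 0.76
```

A multivariate model such as PLS generalizes this by regressing on many wavelengths at once, which is what allows the proportional quantification of several sugars reported in the abstract.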

  10. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    DOE PAGES

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2015-11-19

Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity, accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model is capable of incorporating the general correlation structure and accounts for the overdispersion in the data, which leads to a superior fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using pedestrian-vehicle crash data collected in New York City from 2002 to 2006 and highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate model fit. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity while accounting for correlations.

  11. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    SciTech Connect

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2015-11-19

Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity, accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model is capable of incorporating the general correlation structure and accounts for the overdispersion in the data, which leads to a superior fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using pedestrian-vehicle crash data collected in New York City from 2002 to 2006 and highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate model fit. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity while accounting for correlations.
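The Poisson-lognormal building block of the MVPLN model can be sketched for a single observation: a count whose log rate carries a normal latent effect, sampled by random-walk Metropolis. This is a simplified single-site illustration with assumed values for `mu` and `tau`, not the paper's parallel multivariate sampler:

```python
import math
import random

# Poisson-lognormal building block: y ~ Poisson(exp(mu + eps)) with a latent
# effect eps ~ N(0, tau^2). A random-walk Metropolis step samples eps given y.
# mu, tau, and the proposal step are assumed illustrative values; the MVPLN
# model couples such latent effects across severity levels.

def log_post(eps, y, mu, tau):
    # Poisson log-likelihood plus normal prior on eps, up to a constant
    return y * (mu + eps) - math.exp(mu + eps) - eps**2 / (2 * tau**2)

def sample_eps(y, mu=1.0, tau=0.5, n_iter=5000, step=0.5, seed=7):
    rng = random.Random(seed)
    eps, draws = 0.0, []
    for _ in range(n_iter):
        prop = eps + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_post(prop, y, mu, tau) - log_post(eps, y, mu, tau):
            eps = prop                    # accept the proposal
        draws.append(eps)
    return draws[n_iter // 2:]            # discard burn-in

draws = sample_eps(y=12)                  # count well above exp(mu) ~ 2.7
print(sum(draws) / len(draws) > 0)        # posterior pulls eps upward
```

The parallel scheme in the paper speeds up exactly this kind of MCMC updating by sampling many latent effects concurrently rather than one chain step at a time.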

  12. Application of in-line near infrared spectroscopy and multivariate batch modeling for process monitoring in fluid bed granulation.

    PubMed

    Kona, Ravikanth; Qu, Haibin; Mattes, Robert; Jancsik, Bela; Fahmy, Raafat M; Hoag, Stephen W

    2013-08-16

Fluid bed is an important unit operation in the pharmaceutical industry for granulation and drying. To improve our understanding of fluid bed granulation, in-line near infrared spectroscopy (NIRS) and a novel environmental temperature and relative humidity (RH) data logger called a PyroButton® were used in conjunction with partial least squares (PLS) and principal component analysis (PCA) to develop multivariate statistical process control (MSPC) charts. These control charts were constructed using real-time moisture, temperature, and humidity data obtained from batch experiments. To demonstrate their application, statistical control charts such as Scores, Distance to Model (DModX), and Hotelling's T(2) were used to monitor the batch evolution during the granulation and subsequent drying phases; moisture levels were predicted using a validated PLS model. Two data loggers, one placed near the bottom of the granulator bowl plenum where air enters the granulator and another inside the granulator in contact with the product, helped monitor the humidity and temperature levels during the granulation and drying phases. The control charts were used for real-time fault analysis and were tested on normal batches and on three batches that deviated from normal processing conditions. This study demonstrated the use of NIRS, together with humidity and temperature data loggers and multivariate batch modeling, as an effective tool for process understanding, fault detection, and process control in fluid bed granulation.

  13. Testing a developmental cascade model of adolescent substance use trajectories and young adult adjustment

    PubMed Central

    LYNNE-LANDSMAN, SARAH D.; BRADSHAW, CATHERINE P.; IALONGO, NICHOLAS S.

    2013-01-01

    Developmental models highlight the impact of early risk factors on both the onset and growth of substance use, yet few studies have systematically examined the indirect effects of risk factors across several domains, and at multiple developmental time points, on trajectories of substance use and adult adjustment outcomes (e.g., educational attainment, mental health problems, criminal behavior). The current study used data from a community epidemiologically defined sample of 678 urban, primarily African American youth, followed from first grade through young adulthood (age 21) to test a developmental cascade model of substance use and young adult adjustment outcomes. Drawing upon transactional developmental theories and using growth mixture modeling procedures, we found evidence for a developmental progression from behavioral risk to adjustment problems in the peer context, culminating in a high-risk trajectory of alcohol, cigarette, and marijuana use during adolescence. Substance use trajectory membership was associated with adjustment in adulthood. These findings highlight the developmental significance of early individual and interpersonal risk factors on subsequent risk for substance use and, in turn, young adult adjustment outcomes. PMID:20883591

  14. Contact angle adjustment in equation-of-state-based pseudopotential model

    NASA Astrophysics Data System (ADS)

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  15. Contact angle adjustment in equation-of-state-based pseudopotential model.

    PubMed

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  16. Analysis of Case-Parent Trios Using a Loglinear Model with Adjustment for Transmission Ratio Distortion

    PubMed Central

    Huang, Lam O.; Infante-Rivard, Claire; Labbe, Aurélie

    2016-01-01

Transmission of the two parental alleles to offspring in proportions deviating from the Mendelian ratio, termed Transmission Ratio Distortion (TRD), occurs throughout gametic and embryonic development. TRD has been well studied in animals but remains largely unknown in humans. The Transmission Disequilibrium Test (TDT) was first proposed to test for association and linkage in case-trios (affected offspring and parents); adjusting for TRD using control-trios was recommended. However, the TDT does not provide risk parameter estimates for different genetic models. A loglinear model was later proposed to provide child and maternal relative risk (RR) estimates of disease, assuming Mendelian transmission. Results from our simulation study showed that case-trio RR estimates using this model are biased in the presence of TRD; power and Type 1 error are compromised. We propose an extended loglinear model adjusting for TRD. Under this extended model, RR estimates, power, and Type 1 error are correctly restored. We applied this model to an intrauterine growth restriction dataset and showed results consistent with a previous approach that adjusted for TRD using control-trios. Our findings suggest the need to adjust for TRD to avoid spurious results. Documenting TRD in the population is therefore essential for the correct interpretation of genetic association studies. PMID:27630667

  17. Comparison of Multivariate Poisson lognormal spatial and temporal crash models to identify hot spots of intersections based on crash types.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Dasu, Ravi; Xie, Meiquan; Jia, Xudong; Zhou, Jiao

    2017-02-01

Most studies focus on general crashes or total crash counts, with considerably less research dedicated to specific crash types. This study employs the systemic approach for detection of hotspots and comprehensively cross-validates five multivariate models of crash-type-based HSID methods which incorporate spatial and temporal random effects. It is anticipated that comparison of the crash estimation results of the five models will identify the impact of varied random effects on HSID. Data over a ten-year period (2003-2012) were selected for analysis of a total of 137 intersections in the City of Corona, California. The crash types collected in this study include rear-end, head-on, sideswipe, broadside, hit object, and others. Statistically significant correlations among crash outcomes for the heterogeneity error term were observed, which clearly demonstrates their multivariate nature. Additionally, the spatial random effects revealed the correlations among neighboring intersections across crash types. Five cross-validation criteria, namely Residual Sum of Squares, Kappa, Mean Absolute Deviation, Method Consistency Test, and Total Rank Difference, were applied to assess the performance of the five HSID methods at crash estimation. In terms of accumulated results combining all crash types, the model with spatial random effects consistently outperformed the other competing models by a significant margin. However, the inclusion of spatial random effects in temporal models fell short of attaining the expected results. Overall, the model fitness and validation results failed to highlight any correlation between better model fitness and superior crash estimation.
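Two of the five named cross-validation criteria have simple closed forms. A sketch of Residual Sum of Squares and Mean Absolute Deviation between observed and model-estimated crash counts, on synthetic values:

```python
# Reference implementations of two of the cross-validation criteria named in
# the abstract: Residual Sum of Squares (RSS) and Mean Absolute Deviation
# (MAD) between observed and estimated crash counts. Counts are synthetic.

def residual_sum_of_squares(observed, estimated):
    return sum((o - e) ** 2 for o, e in zip(observed, estimated))

def mean_absolute_deviation(observed, estimated):
    return sum(abs(o - e) for o, e in zip(observed, estimated)) / len(observed)

obs = [4, 0, 7, 2, 5]                  # observed crash counts per intersection
est = [3.5, 0.8, 6.0, 2.2, 4.1]        # model-estimated counts
print(round(residual_sum_of_squares(obs, est), 2))   # 2.74
print(round(mean_absolute_deviation(obs, est), 2))   # 0.68
```

Lower values of either criterion indicate closer agreement between the model's estimates and the observed counts; the remaining criteria (Kappa, Method Consistency Test, Total Rank Difference) instead compare hotspot rankings across periods.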

  18. Testing a Social Ecological Model for Relations between Political Violence and Child Adjustment in Northern Ireland

    PubMed Central

    Cummings, E. Mark; Merrilees, Christine E.; Schermerhorn, Alice C.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed

    2013-01-01

Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family, and child psychological processes in child adjustment, supporting study of inter-relations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland completed measures of community discord, family relations, and children’s regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods based on objective records (i.e., politically motivated deaths) was related to family members’ reports of current sectarian and non-sectarian antisocial behavior. Interparental conflict and parental monitoring and children’s emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children’s adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world. PMID:20423550

  19. Testing a social ecological model for relations between political violence and child adjustment in Northern Ireland.

    PubMed

    Cummings, E Mark; Merrilees, Christine E; Schermerhorn, Alice C; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed

    2010-05-01

    Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family, and child psychological processes in child adjustment, supporting study of interrelations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland, completed measures of community discord, family relations, and children's regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods based on objective records (i.e., politically motivated deaths) was related to family members' reports of current sectarian antisocial behavior and nonsectarian antisocial behavior. Interparental conflict and parental monitoring and children's emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children's adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world.

  20. Improving medication safety: Development and impact of a multivariate model-based strategy to target high-risk patients

    PubMed Central

    Nguyen, Tri-Long; Leguelinel-Blache, Géraldine; Kinowski, Jean-Marie; Roux-Marson, Clarisse; Rougier, Marion; Spence, Jessica; Le Manach, Yannick; Landais, Paul

    2017-01-01

    Background Preventive strategies to reduce clinically significant medication errors (MEs), such as medication review, are often limited by human resources. Identifying high-risk patients to allow for appropriate resource allocation is of the utmost importance. To this end, we developed a predictive model to identify high-risk patients and assessed its impact on clinical decision-making. Methods From March 1st to April 30th, 2014, we conducted a prospective cohort study on adult inpatients of a 1,644-bed University Hospital Centre. After a clinical evaluation of identified MEs, we fitted and internally validated a multivariate logistic model predicting their occurrence. Through 5,000 simulated randomized controlled trials, we compared two clinical decision pathways for intervention: one supported by our model and one based on the criterion of age. Results Among 1,408 patients, 365 (25.9%) experienced at least one clinically significant ME. Eleven variables were identified using multivariable logistic regression and used to build a predictive model which demonstrated fair performance (c-statistic: 0.72). Major predictors were age and number of prescribed drugs. When compared with a decision to treat based on the criterion of age, our model enhanced the interception of potential adverse drug events by 17.5%, with a number needed to treat of 6 patients. Conclusion We developed and tested a model predicting the occurrence of clinically significant MEs. Preliminary results suggest that its implementation into clinical practice could be used to focus interventions on high-risk patients. This must be confirmed on an independent set of patients and evaluated through a real clinical impact study. PMID:28192533
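
    The decision-pathway comparison in this abstract can be sketched as two targeting rules: rank patients by a logistic risk score built from the two major predictors named above (age and number of prescribed drugs) versus ranking by age alone, then count how many ME patients each rule intercepts. The coefficients and patient records below are invented for illustration; the paper's eleven-variable model is not reproduced here.

```python
import math

# Hypothetical comparison of two clinical decision pathways: a logistic risk
# score (age + number of prescribed drugs) versus ranking by age alone.

patients = [  # (age, n_drugs, had_clinically_significant_ME) -- hypothetical
    (82, 12, True), (45, 2, False), (70, 9, True), (55, 11, True),
    (88, 3, False), (63, 7, False), (77, 10, True), (50, 4, False),
]

def risk(age, n_drugs, b0=-6.0, b_age=0.04, b_drugs=0.35):
    """Logistic risk score; all coefficients are hypothetical."""
    z = b0 + b_age * age + b_drugs * n_drugs
    return 1.0 / (1.0 + math.exp(-z))

def intercepted(rank_key, k=4):
    """Number of ME patients among the k patients ranked highest by rank_key."""
    top = sorted(patients, key=rank_key, reverse=True)[:k]
    return sum(1 for p in top if p[2])

by_model = intercepted(lambda p: risk(p[0], p[1]))
by_age   = intercepted(lambda p: p[0])
print(by_model, by_age)  # 4 3: the score intercepts more events than age alone
```

With these invented numbers the model-based pathway intercepts one more event at the same review workload, which is the kind of gain the simulated trials in the paper quantify.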

  1. Parametric estimation of quality adjusted lifetime (QAL) distribution in progressive illness--death model.

    PubMed

    Pradhan, Biswabrata; Dewanji, Anup

    2009-07-10

    In this work, we consider the parametric estimation of quality adjusted lifetime (QAL) distribution in progressive illness-death models. The main idea of this paper is to derive the theoretical distribution of QAL for the progressive illness-death models, under parametric models for the sojourn time distributions in different states, and then replace the model parameters by their estimates obtained by standard techniques of survival analysis. The method of estimation of the model parameters is also described. A data set of IBCSG Trial V has been analyzed for illustration. Extension to more general illness-death models is also discussed.
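
    A toy version of the idea above: in a progressive illness-death model (healthy to ill to dead) with exponential sojourn times, the quality adjusted lifetime is QAL = T_healthy + u * T_ill, so its expectation follows directly from the sojourn-time parameters. The rates and the utility u below are hypothetical; the paper derives the full QAL distribution, not just its mean.

```python
# Expected QAL in a progressive illness-death model with exponential sojourn
# times. All parameter values are invented for illustration.

lam_healthy = 0.10  # exit rate from the healthy state (per year, hypothetical)
lam_ill     = 0.25  # exit rate from the ill state (per year, hypothetical)
u           = 0.6   # quality-of-life utility while ill (hypothetical)

# E[QAL] = E[T_healthy] + u * E[T_ill] = 1/lam_healthy + u / lam_ill
expected_qal = 1.0 / lam_healthy + u * (1.0 / lam_ill)
print(expected_qal)  # 10 + 0.6 * 4 = 12.4 quality adjusted years
```

In practice the rate parameters would be replaced by their survival-analysis estimates, which is the substitution step the abstract describes.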

  2. Two Models of Caregiver Strain and Bereavement Adjustment: A Comparison of Husband and Daughter Caregivers of Breast Cancer Hospice Patients

    ERIC Educational Resources Information Center

    Bernard, Lori L.; Guarnaccia, Charles A.

    2003-01-01

    Purpose: Caregiver bereavement adjustment literature suggests opposite models of impact of role strain on bereavement adjustment after care-recipient death--a Complicated Grief Model and a Relief Model. This study tests these competing models for husband and adult-daughter caregivers of breast cancer hospice patients. Design and Methods: This…

  3. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    ERIC Educational Resources Information Center

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  4. A Study of Perfectionism, Attachment, and College Student Adjustment: Testing Mediational Models.

    ERIC Educational Resources Information Center

    Hood, Camille A.; Kubal, Anne E.; Pfaller, Joan; Rice, Kenneth G.

    Mediational models predicting college students' adjustment were tested using regression analyses. Contemporary adult attachment theory was employed to explore the cognitive/affective mechanisms by which adult attachment and perfectionism affect various aspects of psychological functioning. Consistent with theoretical expectations, results…

  5. A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment

    ERIC Educational Resources Information Center

    Lamborn, Susie D.; Groh, Kelly

    2009-01-01

    We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…

  6. Towards an Integrated Conceptual Model of International Student Adjustment and Adaptation

    ERIC Educational Resources Information Center

    Schartner, Alina; Young, Tony Johnstone

    2016-01-01

    Despite a burgeoning body of empirical research on "the international student experience", the area remains under-theorized. The literature to date lacks a guiding conceptual model that captures the adjustment and adaptation trajectories of this unique, growing, and important sojourner group. In this paper, we therefore put forward a…

  7. Putting Predictive Models to Use: Scoring of Unseen Streaming Data using a Multivariate Time Series Classification Tool

    NASA Astrophysics Data System (ADS)

    Sipes, T.; Karimabadi, H.; Imber, S. M.; Slavin, J. A.; Pothier, N. M.; Coeli, R.

    2013-12-01

    Advances in data collection and data storage technologies have made the assembly of multivariate time series data more common. Data analysis and extraction of knowledge from such massive and complex datasets encountered in space physics today present a major obstacle to fully utilizing our vast data repositories and to scientific progress. In previous years, we introduced the time series classification tool MineTool-TS [Karimabadi et al, 2009] and its extension to simulation and streaming data [Sipes & Karimabadi, 2012, 2013]. In this work we demonstrate the applicability and real-world utility of the predictive models created using the tool for scoring and labeling a large dataset of unseen, streaming data. Predictive models are built on the assumption that the training data used to create them are representative of the population. Multivariate time series datasets are also characterized by large amounts of variability and potential background noise, and the streaming nature of the data raises multiple additional issues. In this work we illustrate how we dealt with these challenges and demonstrate the results in a study of flux ropes in the plasma sheet. We used an iterative process: we built a predictive model from the original labeled training set, tested it on a week's worth of streaming data, had the results checked by a scientific expert in the domain, and fed the results and labels back into the training set, creating a larger training set used to produce the final model. This final model was then put to use to predict a very large, unseen, six-month period of streaming data. We present the results of our machine learning approach to automatically detecting flux ropes in spacecraft data.

  8. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol-lowering drugs.

    PubMed

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G; Shah, Arvind K; Lin, Jianxin

    2013-10-15

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the deviance information criterion is used to select the best transformation model. Because the model is quite complex, we develop a novel Markov chain Monte Carlo sampling scheme to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol-lowering drugs where the goal is to jointly model the three-dimensional response consisting of low density lipoprotein cholesterol (LDL-C), high density lipoprotein cholesterol (HDL-C), and triglycerides (TG). Because the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate because these variables are correlated with each other. We carry out a detailed analysis of these data by using the proposed methodology.
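
    The Box-Cox transformation at the heart of this model is easy to sketch. The triglyceride-like values below are hypothetical, and lambda = 0 (the log transform) is chosen arbitrarily here; the paper selects lambda through the deviance information criterion.

```python
import math

# Box-Cox transform of a skewed response, with a sample-skewness check to
# show the effect. Data and lambda are illustrative only.

def box_cox(y, lam):
    """Box-Cox transform of a positive observation y with parameter lambda."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def skewness(xs):
    """Sample skewness (biased moment estimator)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

tg = [90.0, 120.0, 150.0, 400.0, 95.0, 110.0]   # right-skewed, hypothetical
transformed = [box_cox(y, 0.0) for y in tg]
print(skewness(tg) > skewness(transformed) > 0)  # True: skew is reduced
```

The same transform, with a separate lambda per response, is what lets the three correlated lipid outcomes be modeled jointly as multivariate normal.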

  9. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables

    PubMed Central

    Abad, Cesar C. C.; Barros, Ronaldo V.; Bertuzzi, Romulo; Gagliardi, João F. L.; Lima-Silva, Adriano E.; Lambert, Mike I.

    2016-01-01

    The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h^-1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.01) were found between 10 km running time and adjusted and unadjusted RE and PTV, providing models with effect size > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of PTV alone, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation. PMID:28149382
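
    The allometric adjustment described above amounts to dividing each physiological variable by body mass raised to its exponent (0.72 for VO2max and PTV, 0.60 for RE in the study) before it enters the regression. The runner data below are invented for illustration.

```python
# Allometric adjustment sketch: express a mass-dependent variable per
# kg**exponent of body mass. Masses and VO2max values are hypothetical.

runners = [  # (body_mass_kg, absolute_VO2max_L_per_min) -- hypothetical
    (60.0, 4.1), (72.0, 4.6), (65.0, 4.2), (80.0, 4.9),
]

def allometric_adjust(value, mass, exponent):
    """Divide a mass-dependent variable by mass**exponent."""
    return value / mass ** exponent

adjusted = [allometric_adjust(vo2, mass, 0.72) for mass, vo2 in runners]
print(adjusted[0])  # about 0.215 L·min^-1·kg^-0.72
```

The adjusted values, rather than the raw ones, would then be the predictors in the multiple regression model.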

  10. Fast and accurate multivariate Gaussian modeling of protein families: predicting residue contacts and protein-interaction partners.

    PubMed

    Baldassi, Carlo; Zamparo, Marco; Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea

    2014-01-01

    In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partners in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code.
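
    The key step that makes the Gaussian variant exactly solvable is reading couplings off the inverse of the covariance matrix (the precision matrix), which separates direct interactions from correlations mediated by third variables. The 3x3 covariance below is a hypothetical chain: variables 0-1 and 1-2 are directly coupled, while 0-2 is correlated only through 1, so its precision entry vanishes. The real method adds sequence reweighting and regularization not shown here.

```python
# Precision-matrix sketch of Gaussian direct-coupling analysis.
# No pivoting in the inversion; fine for this small well-conditioned example.

def invert(mat):
    """Gauss-Jordan inversion of a small square matrix."""
    n = len(mat)
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(mat)]
    for col in range(n):
        piv = a[col][col]
        a[col] = [x / piv for x in a[col]]
        for r in range(n):
            if r != col:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

cov = [[1.0, 0.5, 0.25],   # corr(0,2) = 0.25 = 0.5 * 0.5: purely indirect
       [0.5, 1.0, 0.5],
       [0.25, 0.5, 1.0]]
precision = invert(cov)
print(round(precision[0][2], 10))  # 0.0: no direct 0-2 coupling
```

Ranking residue pairs by the magnitude of their (off-diagonal) precision entries, rather than their raw correlations, is what yields contact predictions.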

  11. Multivariable wavelet finite element-based vibration model for quantitative crack identification by using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Xingwu; Gao, Robert X.; Yan, Ruqiang; Chen, Xuefeng; Sun, Chuang; Yang, Zhibo

    2016-08-01

    Cracking is one of the crucial causes of structural failure. A methodology for quantitative crack identification is proposed in this paper based on the multivariable wavelet finite element method and particle swarm optimization. First, the structure with a crack is modeled by the multivariable wavelet finite element method (MWFEM) so that the first three natural frequencies in arbitrary crack conditions can be obtained, which is known as the forward problem. Second, the cracked structure is tested to obtain its first three natural frequencies by modal testing and advanced vibration signal processing. Then, the analyzed and measured first three natural frequencies are combined to obtain the location and size of the crack by using particle swarm optimization. Compared with the traditional wavelet finite element method, the MWFEM achieves more accurate vibration analysis results because it interpolates all the solution variables at one time, which improves the accuracy of the MWFEM-based quantitative crack identification. In the end, the validity and superiority of the proposed method are verified by experiments on both a cantilever beam and a simply supported beam.

  12. Modelling nitrate pollution pressure using a multivariate statistical approach: the case of Kinshasa groundwater body, Democratic Republic of Congo

    NASA Astrophysics Data System (ADS)

    Mfumu Kihumba, Antoine; Ndembo Longo, Jean; Vanclooster, Marnik

    2016-03-01

    A multivariate statistical modelling approach was applied to explain the anthropogenic pressure of nitrate pollution on the Kinshasa groundwater body (Democratic Republic of Congo). Multiple regression and regression tree models were compared and used to identify major environmental factors that control the groundwater nitrate concentration in this region. The analyses were made in terms of physical attributes related to the topography, land use, geology and hydrogeology in the capture zone of different groundwater sampling stations. For the nitrate data, groundwater datasets from two different surveys were used. The statistical models identified the topography, the residential area, the service land (cemetery), and the surface-water land-use classes as major factors explaining nitrate occurrence in the groundwater. Also, groundwater nitrate pollution depends not on one single factor but on the combined influence of factors representing nitrogen loading sources and aquifer susceptibility characteristics. The groundwater nitrate pressure was better predicted with the regression tree model than with the multiple regression model. Furthermore, the results elucidated the sensitivity of the model performance towards the method of delineation of the capture zones. For pollution modelling at the monitoring points, therefore, it is better to identify capture-zone shapes based on a conceptual hydrogeological model rather than to adopt arbitrary circular capture zones.
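
    A one-split regression tree illustrates why the tree outperformed multiple regression here: it captures threshold behaviour (e.g. nitrate jumping once the share of a land-use class in a capture zone exceeds some value) that a single linear term smooths over. The data and the "residential fraction" predictor below are invented for illustration.

```python
# Single-split regression tree: choose the threshold minimizing the total
# within-group sum of squared errors. Data are hypothetical.

data = [  # (residential_fraction, nitrate_mg_per_L) -- hypothetical
    (0.1, 8.0), (0.2, 9.0), (0.3, 8.5), (0.6, 41.0), (0.7, 39.0), (0.8, 42.0),
]

def best_split(points):
    """Return (threshold, left_mean, right_mean) minimizing SSE."""
    def sse(ys):
        m = sum(ys) / len(ys)
        return sum((y - m) ** 2 for y in ys)
    best = None
    xs = sorted(x for x, _ in points)
    for lo, hi in zip(xs, xs[1:]):          # candidate thresholds: midpoints
        t = (lo + hi) / 2
        left  = [y for x, y in points if x <= t]
        right = [y for x, y in points if x > t]
        score = sse(left) + sse(right)
        if best is None or score < best[0]:
            best = (score, t, sum(left) / len(left), sum(right) / len(right))
    return best[1:]

threshold, low_mean, high_mean = best_split(data)
print(threshold)  # about 0.45: the split lands between the two regimes
```

A full regression tree recurses this split on each resulting subgroup; a multiple regression would instead fit one slope across both regimes.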

  13. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    PubMed

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models.

  14. Improving the global applicability of the RUSLE model - adjustment of the topographical and rainfall erosivity factors

    NASA Astrophysics Data System (ADS)

    Naipal, V.; Reick, C.; Pongratz, J.; Van Oost, K.

    2015-09-01

    Large uncertainties exist in estimated rates and the extent of soil erosion by surface runoff on a global scale. This limits our understanding of the global impact that soil erosion might have on agriculture and climate. The Revised Universal Soil Loss Equation (RUSLE) model is, due to its simple structure and empirical basis, a frequently used tool in estimating average annual soil erosion rates at regional to global scales. However, large spatial-scale applications often rely on coarse data input, which is not compatible with the local scale on which the model is parameterized. Our study aims at providing the first steps in improving the global applicability of the RUSLE model in order to derive more accurate global soil erosion rates. We adjusted the topographical and rainfall erosivity factors of the RUSLE model and compared the resulting erosion rates to extensive empirical databases from the USA and Europe. By scaling the slope according to the fractal method to adjust the topographical factor, we managed to improve the topographical detail in a coarse resolution global digital elevation model. Applying the linear multiple regression method to adjust rainfall erosivity for various climate zones resulted in values that compared well to high resolution erosivity data for different regions. However, this method needs to be extended to tropical climates, for which erosivity is biased due to the lack of high resolution erosivity data. After applying the adjusted and the unadjusted versions of the RUSLE model on a global scale, we find that the adjusted version shows a higher global mean erosion rate and more variability in the erosion rates. Comparison to empirical data sets from the USA and Europe shows that the adjusted RUSLE model is able to decrease the very high erosion rates in hilly regions that are observed in the unadjusted RUSLE model results. Although there are still some regional differences with the empirical databases, the results indicate that the
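
    The topographical adjustment can be sketched in two steps, with invented numbers throughout: (1) fractal slope scaling, here assuming slope varies with grid size d as d**(1 - D) for a fractal dimension D (a common form of the fractal method; the paper's exact formulation may differ), and (2) feeding the rescaled slope into the standard RUSLE slope-steepness (S) factor of McCool et al.

```python
import math

# Fractal downscaling of DEM-derived slope, followed by the RUSLE S factor.
# Grid sizes, slope, and D are hypothetical.

def fractal_scale_slope(slope_coarse, d_coarse, d_fine, D=1.1):
    """Rescale slope (rise/run) from grid size d_coarse to d_fine (same units)."""
    return slope_coarse * (d_fine / d_coarse) ** (1.0 - D)

def rusle_s_factor(slope):
    """RUSLE slope-steepness factor (McCool et al.); slope as rise/run."""
    theta = math.atan(slope)
    if slope < 0.09:
        return 10.8 * math.sin(theta) + 0.03
    return 16.8 * math.sin(theta) - 0.50

coarse_slope = 0.03  # 3% slope as seen by a ~1 km DEM (hypothetical)
fine_slope = fractal_scale_slope(coarse_slope, d_coarse=1000.0, d_fine=90.0)
print(fine_slope > coarse_slope)  # True: finer grids resolve steeper slopes
print(rusle_s_factor(fine_slope))  # about 0.44
```

With D > 1 the scaling steepens slopes at finer target resolutions, consistent with the higher adjusted-model erosion rates reported in the abstract.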

  15. A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...

  16. Cross-scale predictive modeling of CHO cell culture growth and metabolites using Raman spectroscopy and multivariate analysis.

    PubMed

    Berry, Brandon; Moretto, Justin; Matthews, Thomas; Smelko, John; Wiltberger, Kelly

    2015-01-01

    Multi-component, multi-scale Raman spectroscopy modeling results from a monoclonal antibody producing CHO cell culture process including data from two development scales (3 L, 200 L) and a clinical manufacturing scale environment (2,000 L) are presented. Multivariate analysis principles are a critical component to partial least squares (PLS) modeling but can quickly turn into an overly iterative process, thus a simplified protocol is proposed for addressing necessary steps including spectral preprocessing, spectral region selection, and outlier removal to create models exclusively from cell culture process data without the inclusion of spectral data from chemically defined nutrient solutions or targeted component spiking studies. An array of single-scale and combination-scale modeling iterations were generated to evaluate technology capabilities and model scalability. Analysis of prediction errors across models suggests that glucose, lactate, and osmolality are well modeled. Model strength was confirmed via predictive validation and by examining performance similarity across single-scale and combination-scale models. Additionally, accurate predictive models were attained in most cases for viable cell density and total cell density; however, these components exhibited some scale-dependencies that hindered model quality in cross-scale predictions where only development data was used in calibration. Glutamate and ammonium models were also able to achieve accurate predictions in most cases. However, there are differences in the absolute concentration ranges of these components across the datasets of individual bioreactor scales. Thus, glutamate and ammonium PLS models were forced to extrapolate in cases where models were derived from small scale data only but used in cross-scale applications predicting against manufacturing scale batches.
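
    One common choice for the spectral preprocessing step mentioned above is standard normal variate (SNV) scatter correction, in which each spectrum is centered and scaled by its own mean and standard deviation before PLS modeling. The abstract does not name the specific methods used, and the short "spectrum" below is a hypothetical intensity vector.

```python
# Standard normal variate (SNV) correction: per-spectrum centering and scaling.

def snv(spectrum):
    """Return the spectrum standardized by its own mean and sample std dev."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = (sum((x - mean) ** 2 for x in spectrum) / (n - 1)) ** 0.5
    return [(x - mean) / sd for x in spectrum]

raw = [1020.0, 998.0, 1150.0, 1310.0, 1005.0]  # hypothetical Raman counts
corrected = snv(raw)
print(abs(sum(corrected)) < 1e-9)  # True: each spectrum now has zero mean
```

Because the correction uses each spectrum's own statistics, it removes multiplicative scatter differences between bioreactor runs and scales, which is one reason cross-scale PLS models can transfer.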

  17. Multivariate autoregressive models with exogenous inputs for intracerebral responses to direct electrical stimulation of the human brain

    PubMed Central

    Chang, Jui-Yang; Pigorini, Andrea; Massimini, Marcello; Tononi, Giulio; Nobili, Lino; Van Veen, Barry D.

    2012-01-01

    A multivariate autoregressive (MVAR) model with exogenous inputs (MVARX) is developed for describing the cortical interactions excited by direct electrical current stimulation of the cortex. Current stimulation is challenging to model because it excites neurons in multiple locations both near and distant to the stimulation site. The approach presented here models these effects using an exogenous input that is passed through a bank of filters, one for each channel. The filtered input and a random input excite a MVAR system describing the interactions between cortical activity at the recording sites. The exogenous input filter coefficients, the autoregressive coefficients, and random input characteristics are estimated from the measured activity due to current stimulation. The effectiveness of the approach is demonstrated using intracranial recordings from three surgical epilepsy patients. We evaluate models for wakefulness and NREM sleep in these patients with two stimulation levels in one patient and two stimulation sites in another resulting in a total of 10 datasets. Excellent agreement between measured and model-predicted evoked responses is obtained across all datasets. Furthermore, one-step prediction is used to show that the model also describes dynamics in pre-stimulus and evoked recordings. We also compare integrated information—a measure of intracortical communication thought to reflect the capacity for consciousness—associated with the network model in wakefulness and sleep. As predicted, higher information integration is found in wakefulness than in sleep for all five cases. PMID:23226122
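
    A minimal two-channel sketch of the MVARX idea: activity x[t] follows a first-order multivariate autoregression driven by a gain applied to an exogenous stimulation input u[t]. All coefficients are invented, and the real models use higher orders and per-channel input filter banks rather than the single gains shown here.

```python
# Noise-free MVARX(1) one-step prediction for two channels.

A = [[0.5, 0.1],   # autoregressive coupling between the two channels
     [0.2, 0.4]]
B = [1.0, 0.3]     # per-channel gain on the exogenous stimulation input

def step(x, u):
    """Predict x[t+1] from state x[t] and exogenous input u[t]."""
    return [A[i][0] * x[0] + A[i][1] * x[1] + B[i] * u for i in range(2)]

x = [0.0, 0.0]
stim = [1.0, 0.0, 0.0, 0.0]  # a single current pulse at t = 0
trajectory = [x]
for u in stim:
    x = step(x, u)
    trajectory.append(x)
print(trajectory[1])  # [1.0, 0.3]: the immediate evoked response
```

The subsequent samples show the evoked response propagating through the autoregressive couplings, which is the dynamic the fitted models reproduce in one-step prediction.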

  18. Multivariate normal tissue complication probability modeling of gastrointestinal toxicity after external beam radiotherapy for localized prostate cancer

    PubMed Central

    2013-01-01

    Background The risk of radio-induced gastrointestinal (GI) complications is affected by several factors other than the dose to the rectum, such as patient characteristics, hormonal or antihypertensive therapy, and acute rectal toxicity. The purpose of this work is to study clinical and dosimetric parameters impacting late GI toxicity after prostate external beam radiotherapy (RT) and to establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced GI complications. Methods A total of 57 men who had undergone definitive RT for prostate cancer were evaluated for GI events classified using the RTOG/EORTC scoring system. Their median age was 73 years (range 53–85). The patients were assessed for GI toxicity before, during, and periodically after RT completion. Several clinical variables along with rectum dose-volume parameters (Vx) were collected and their correlation to GI toxicity was analyzed by Spearman’s rank correlation coefficient (Rs). A multivariate logistic regression method using resampling techniques was applied to select model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). Results At a median follow-up of 30 months, 37% (21/57) of patients developed G1-2 acute GI events while 33% (19/57) were diagnosed with G1-2 late GI events. An NTCP model for late mild/moderate GI toxicity based on three variables including V65 (OR = 1.03), antihypertensive and/or anticoagulant (AH/AC) drugs (OR = 0.24), and acute GI toxicity (OR = 4.3) was selected as the most predictive model (Rs = 0.47, p < 0.001; AUC = 0.79). This three-variable model outperforms the logistic model based on V65 only (Rs = 0.28, p < 0.001; AUC = 0.69). Conclusions We propose a logistic NTCP model for late GI toxicity considering not only rectal irradiation dose but also clinical patient-specific factors. Accordingly, the risk of G1
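
    The three-variable logistic NTCP model can be sketched from the odds ratios reported above (V65 OR = 1.03 per percentage point, AH/AC drugs OR = 0.24, acute GI toxicity OR = 4.3). The intercept is not reported in the abstract, so the value used here is purely hypothetical and the resulting probabilities are illustrative only.

```python
import math

# Logistic NTCP sketch: log-odds are the sum of log odds ratios times the
# covariates, plus a hypothetical intercept b0.

def ntcp(v65_percent, ah_ac_drugs, acute_gi, b0=-3.0):  # b0 is hypothetical
    z = (b0
         + math.log(1.03) * v65_percent
         + math.log(0.24) * (1 if ah_ac_drugs else 0)
         + math.log(4.3) * (1 if acute_gi else 0))
    return 1.0 / (1.0 + math.exp(-z))

low  = ntcp(v65_percent=10, ah_ac_drugs=True,  acute_gi=False)
high = ntcp(v65_percent=40, ah_ac_drugs=False, acute_gi=True)
print(low < high)  # True: protective drugs + low V65 vs high V65 + acute toxicity
```

The structure shows how the protective AH/AC term (OR below 1) lowers the complication probability while acute toxicity and higher V65 raise it.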

  19. A system to build distributed multivariate models and manage disparate data sharing policies: implementation in the scalable national network for effectiveness research

    PubMed Central

    Jiang, Xiaoqian; Matheny, Michael E; Farcas, Claudiu; D’Arcy, Michel; Pearlman, Laura; Nookala, Lavanya; Day, Michele E; Kim, Katherine K; Kim, Hyeoneui; Boxwala, Aziz; El-Kareh, Robert; Kuo, Grace M; Resnic, Frederic S; Kesselman, Carl; Ohno-Machado, Lucila

    2015-01-01

    Background Centralized and federated models for sharing data in research networks currently exist. To build multivariate data analysis for centralized networks, transfer of patient-level data to a central computation resource is necessary. The authors implemented distributed multivariate models for federated networks in which patient-level data is kept at each site and data exchange policies are managed in a study-centric manner. Objective The objective was to implement infrastructure that supports the functionality of some existing research networks (e.g., cohort discovery, workflow management, and estimation of multivariate analytic models on centralized data) while adding additional important new features, such as algorithms for distributed iterative multivariate models, a graphical interface for multivariate model specification, synchronous and asynchronous response to network queries, investigator-initiated studies, and study-based control of staff, protocols, and data sharing policies. Materials and Methods Based on the requirements gathered from statisticians, administrators, and investigators from multiple institutions, the authors developed infrastructure and tools to support multisite comparative effectiveness studies using web services for multivariate statistical estimation in the SCANNER federated network. Results The authors implemented massively parallel (map-reduce) computation methods and a new policy management system to enable each study initiated by network participants to define the ways in which data may be processed, managed, queried, and shared. The authors illustrated the use of these systems among institutions with highly different policies and operating under different state laws. Discussion and Conclusion Federated research networks need not limit distributed query functionality to count queries, cohort discovery, or independently estimated analytic models. Multivariate analyses can be efficiently and securely conducted without patient
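
    The distributed-estimation idea above can be sketched for logistic regression: each site computes the log-likelihood gradient on its own patient-level records and shares only that aggregate vector, while a coordinating node sums the site gradients and takes an ascent step. The data, step size, and iteration count below are invented; real networks add secure transport and more sophisticated optimizers.

```python
import math

# Distributed logistic regression by gradient aggregation: patient-level
# records never leave their site, only per-site gradient vectors do.

site_a = [([1.0, 0.2], 1), ([1.0, 0.9], 0)]                 # kept at site A
site_b = [([1.0, 0.5], 1), ([1.0, 0.8], 0), ([1.0, 0.1], 1)]  # kept at site B

def local_gradient(records, beta):
    """Per-site log-likelihood gradient; only this vector is shared."""
    g = [0.0] * len(beta)
    for x, y in records:
        p = 1.0 / (1.0 + math.exp(-sum(b * xi for b, xi in zip(beta, x))))
        for j, xi in enumerate(x):
            g[j] += (y - p) * xi
    return g

beta = [0.0, 0.0]
for _ in range(200):  # coordinating node: sum the site gradients, step uphill
    grads = [local_gradient(site, beta) for site in (site_a, site_b)]
    total = [sum(g[j] for g in grads) for j in range(len(beta))]
    beta = [b + 0.5 * t for b, t in zip(beta, total)]
print(beta[1] < 0)  # True: higher covariate values predict the 0 outcome here
```

Because the summed gradient equals the gradient on the pooled data, the iteration converges to the same estimate a centralized fit would reach.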

  20. A Bayesian Nonparametric Model for Spatially Distributed Multivariate Binary Data with Application to a Multidrug-Resistant Tuberculosis (MDR-TB) Study

    PubMed Central

    Zhang, Nanhua; Shi, Ran

    2015-01-01

    There has been an increasing interest in the analysis of spatially distributed multivariate binary data motivated by a wide range of research problems. Two types of correlations are usually involved: the correlation between the multiple outcomes at one location and the spatial correlation between the locations for one particular outcome. The commonly used regression models only consider one type of correlation while ignoring or modeling inappropriately the other one. To address this limitation, we adopt a Bayesian nonparametric approach to jointly modeling multivariate spatial binary data by integrating both types of correlations. A multivariate probit model is employed to link the binary outcomes to Gaussian latent variables, and Gaussian processes are applied to specify the spatially correlated random effects. We develop an efficient Markov chain Monte Carlo algorithm for the posterior computation. We illustrate the proposed model on simulation studies and a multidrug-resistant tuberculosis case study. PMID:24975716

  1. Model based multivariable controller for large scale compression stations. Design and experimental validation on the LHC 18KW cryorefrigerator

    SciTech Connect

    Bonne, François; Bonnay, Patrick; Bradu, Benjamin

    2014-01-29

    In this paper, a multivariable model-based non-linear controller for Warm Compression Stations (WCS) is proposed. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast disturbance rejection such as those induced by a turbine or a compressor stop, a key-aspect in the case of large scale cryogenic refrigeration. The proposed control scheme can be used to have precise control of every pressure in normal operation or to stabilize and control the cryoplant under high variation of thermal loads (such as a pulsed heat load expected to take place in future fusion reactors such as those expected in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor ITER or the Japan Torus-60 Super Advanced fusion experiment JT-60SA). The paper details how to set the WCS model up to synthesize the Linear Quadratic Optimal feedback gain and how to use it. After preliminary tuning at CEA-Grenoble on the 400W@1.8K helium test facility, the controller has been implemented on a Schneider PLC and fully tested first on the CERN's real-time simulator. Then, it was experimentally validated on a real CERN cryoplant. The efficiency of the solution is experimentally assessed using a reasonable operating scenario of start and stop of compressors and cryogenic turbines. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
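    The Linear Quadratic Optimal feedback gain mentioned above is conventionally obtained by solving a Riccati equation built on the plant model. As a hedged, generic illustration (a scalar discrete-time LQR, not the warm compression station model from the paper):

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Scalar discrete-time LQR: iterate the Riccati recursion
    P = q + a*P*a - (a*P*b)^2 / (r + b*P*b) to a fixed point,
    then return the optimal gain K = b*P*a / (r + b*P*b)."""
    p = q
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    return (b * p * a) / (r + b * p * b)

# Hypothetical unstable scalar plant: x+ = 1.2 x + 0.5 u
k = dlqr_scalar(a=1.2, b=0.5, q=1.0, r=1.0)
# The closed loop a - b*k should be stable, i.e. |a - b*k| < 1
```

    For the multivariable WCS case the same recursion runs with matrices, typically via a dedicated discrete algebraic Riccati equation solver.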

  2. Model based multivariable controller for large scale compression stations. Design and experimental validation on the LHC 18KW cryorefrigerator

    NASA Astrophysics Data System (ADS)

    Bonne, François; Alamir, Mazen; Bonnay, Patrick; Bradu, Benjamin

    2014-01-01

    In this paper, a multivariable model-based non-linear controller for Warm Compression Stations (WCS) is proposed. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast disturbance rejection such as those induced by a turbine or a compressor stop, a key-aspect in the case of large scale cryogenic refrigeration. The proposed control scheme can be used to have precise control of every pressure in normal operation or to stabilize and control the cryoplant under high variation of thermal loads (such as a pulsed heat load expected to take place in future fusion reactors such as those expected in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor ITER or the Japan Torus-60 Super Advanced fusion experiment JT-60SA). The paper details how to set the WCS model up to synthesize the Linear Quadratic Optimal feedback gain and how to use it. After preliminary tuning at CEA-Grenoble on the 400W@1.8K helium test facility, the controller has been implemented on a Schneider PLC and fully tested first on the CERN's real-time simulator. Then, it was experimentally validated on a real CERN cryoplant. The efficiency of the solution is experimentally assessed using a reasonable operating scenario of start and stop of compressors and cryogenic turbines. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  3. Adjusting lidar-derived digital terrain models in coastal marshes based on estimated aboveground biomass density

    SciTech Connect

    Medeiros, Stephen; Hagen, Scott; Weishampel, John; Angelo, James

    2015-03-25

    Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
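    The class-based correction described above can be sketched as follows: group lidar elevation errors (lidar minus surveyed bare earth) by biomass class, take a class statistic such as the median, and subtract it from lidar elevations in that class. A generic sketch with hypothetical numbers, not the authors' exact procedure:

```python
from statistics import median

def class_corrections(errors_by_class):
    """Median lidar elevation error per biomass class; subtracting it
    removes that class's high bias from the DEM."""
    return {c: median(errs) for c, errs in errors_by_class.items()}

def adjust(lidar_z, biomass_class, corrections):
    """Shift a lidar elevation toward bare earth using its class."""
    return lidar_z - corrections[biomass_class]

# Hypothetical calibration errors (m) for a two-class scheme
errors = {"high": [0.9, 0.8, 1.0, 0.7], "low": [0.3, 0.2, 0.4]}
corr = class_corrections(errors)
z_adj = adjust(1.85, "high", corr)  # 1.85 - 0.85 = 1.00 m
```

    The quartile variant replaces the median with a quartile of the class's error distribution.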

  4. Adjusting lidar-derived digital terrain models in coastal marshes based on estimated aboveground biomass density

    DOE PAGES

    Medeiros, Stephen; Hagen, Scott; Weishampel, John; ...

    2015-03-25

    Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.

  5. A multivariate linear regression model for predicting children's blood lead levels based on soil lead levels: A study at four superfund sites.

    PubMed

    Lewin, M D; Sarasua, S; Jones, P A

    1999-07-01

    For the purpose of examining the association between blood lead levels and household-specific soil lead levels, we used a multivariate linear regression model to find a slope factor relating soil lead levels to blood lead levels. We used previously collected data from the Agency for Toxic Substances and Disease Registry's (ATSDR's) multisite lead and cadmium study. The data included the blood lead measurements (0.5 to 40.2 microg/dL) of 1015 children aged 6-71 months, and corresponding household-specific environmental samples. The environmental samples included lead in soil (18.1-9980 mg/kg), house dust (5.2-71,000 mg/kg), interior paint (0-16.5 mg/cm2), and tap water (0.3-103 microg/L). After adjusting for income, education of the parents, presence of a smoker in the household, sex, and dust lead, and using a double log transformation, we found a slope factor of 0.1388 with a 95% confidence interval of 0.09-0.19 for the dose-response relationship between the natural log of the soil lead level and the natural log of the blood lead level. The predicted blood lead level corresponding to a soil lead level of 500 mg/kg was 5.99 microg/dL with a 95% prediction interval of 2.08-17.29. Predicted values and their corresponding prediction intervals varied by covariate level. The model shows that increased soil lead level is associated with elevated blood leads in children, but that predictions based on this regression model are subject to high levels of uncertainty and variability.
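    The double-log dose-response above implies predictions of the form ln(blood) = a + 0.1388·ln(soil). Using the reported slope factor and the reported point prediction (5.99 µg/dL at 500 mg/kg) to back out an intercept gives an illustration at that single covariate level; this is not the published model, whose predictions vary with the covariates.

```python
import math

SLOPE = 0.1388  # reported slope factor (ln blood vs ln soil)

# Back out an intercept from the reported point prediction:
# ln(5.99) = a + SLOPE * ln(500)
a = math.log(5.99) - SLOPE * math.log(500)

def predicted_blood_lead(soil_mg_kg):
    """Predicted blood lead (ug/dL) at this reference covariate level."""
    return math.exp(a + SLOPE * math.log(soil_mg_kg))

p500 = predicted_blood_lead(500)    # recovers the reported 5.99
p1000 = predicted_blood_lead(1000)  # doubling soil lead raises the
                                    # prediction by only ~10% (2**0.1388)
```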

  6. A new clinical multivariable model that predicts postoperative acute kidney injury: impact of endogenous ouabain

    PubMed Central

    Simonini, Marco; Lanzani, Chiara; Bignami, Elena; Casamassima, Nunzia; Frati, Elena; Meroni, Roberta; Messaggio, Elisabetta; Alfieri, Ottavio; Hamlyn, John; Body, Simon C.; Collard, C. David; Zangrillo, Alberto; Manunta, Paolo; Muehlschlegel, J. Daniel; Shernan, Stanton K.; Fox, Amanda A.

    2014-01-01

    Background Acute kidney injury (AKI) is an important complication of cardiac surgery. Recently, elevated levels of endogenous ouabain (EO), an adrenal stress hormone with haemodynamic and renal effects, have been associated with worse renal outcome after cardiac surgery. Our aim was to develop and evaluate a new risk model of AKI using simple preoperative clinical parameters and to investigate the utility of EO. Methods The primary outcome was AKI according to Acute Kidney Injury Network stage II or III. We selected the Northern New England Cardiovascular Disease Study Group (NNECDSG) as a reference model. We built a new internal predictive risk model considering common clinical variables (CLIN-RISK), compared this model with the NNECDSG model and determined whether the addition of preoperative plasma EO improved prediction of AKI. Results All models were tested on >800 patients admitted for elective cardiac surgery in our hospital. Seventy-nine patients developed AKI (9.9%). Preoperative EO levels were strongly associated with the incidence of AKI and clinical complication (total ICU stay and in-hospital mortality). The NNECDSG model was confirmed as a good predictor of AKI (AUC 0.74, comparable to the NNECDSG reference population). Our CLIN-RISK model had improved predictive power for AKI (AUC 0.79, CI 95% 0.73–0.84). Furthermore, addition of preoperative EO levels to both clinical models improved AUC to 0.79 and to 0.83, respectively (ΔAUC +0.05 and +0.04, respectively, P < 0.01). Conclusion In a population where the predictive power of the NNECDSG model was confirmed, CLIN-RISK was more powerful. Both clinical models were further improved by the addition of preoperative plasma EO levels. These new models provide improved predictability of the relative risk for the development of AKI following cardiac surgery and suggest that EO is a marker for renal vascular injury. PMID:24920842

  7. An appraisal-based coping model of attachment and adjustment to arthritis.

    PubMed

    Sirois, Fuschia M; Gick, Mary L

    2016-05-01

    Guided by pain-related attachment models and coping theory, we used structural equation modeling to test an appraisal-based coping model of how insecure attachment was linked to arthritis adjustment in a sample of 365 people with arthritis. The structural equation modeling analyses revealed indirect and direct associations of anxious and avoidant attachment with greater appraisals of disease-related threat, less perceived social support to deal with this threat, and less coping efficacy. There was evidence of reappraisal processes for avoidant but not anxious attachment. Findings highlight the importance of considering attachment style when assessing how people cope with the daily challenges of arthritis.

  8. Validation and Recalibration of Two Multivariable Prognostic Models for Survival and Independence in Acute Stroke

    PubMed Central

    Teece, Lucy; Dennis, Martin S.; Roffe, Christine

    2016-01-01

    Introduction Various prognostic models have been developed for acute stroke, including one based on age and five binary variables (‘six simple variables’ model; SSVMod) and one based on age plus scores on the National Institutes of Health Stroke Scale (NIHSSMod). The aims of this study were to externally validate and recalibrate these models, and to compare their predictive ability in relation to both survival and independence. Methods Data from a large clinical trial of oxygen therapy (n = 8003) were used to determine the discrimination and calibration of the models, using C-statistics, calibration plots, and Hosmer-Lemeshow statistics. Methods of recalibration in the large and logistic recalibration were used to update the models. Results For discrimination, both models functioned better for survival (C-statistics between .802 and .837) than for independence (C-statistics between .725 and .735). Both models showed slight shortcomings with regard to calibration, over-predicting survival and under-predicting independence; the NIHSSMod performed slightly better than the SSVMod. For the most part, there were only minor differences between ischaemic and haemorrhagic strokes. Logistic recalibration successfully updated the models for a clinical trial population. Conclusions Both prognostic models performed well overall in a clinical trial population. The choice between them is probably better based on clinical and practical considerations than on statistical considerations. PMID:27227988
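    Logistic recalibration, as applied above, keeps the original model's linear predictor and refits only an intercept and slope on the new data ('recalibration in the large' updates the intercept alone). A minimal sketch on simulated data, not the stroke-trial dataset; here the old model's predictions are deliberately miscalibrated so the refit should recover a negative intercept shift:

```python
import math
import random

def recalibrate(lps, ys, lr=0.05, iters=2000):
    """Fit y ~ logistic(a + b*lp) by gradient ascent, where lp is the
    existing model's linear predictor: logistic recalibration."""
    a, b = 0.0, 1.0
    n = len(lps)
    for _ in range(iters):
        ga = gb = 0.0
        for lp, y in zip(lps, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * lp)))
            ga += y - p
            gb += (y - p) * lp
        a += lr * ga / n
        b += lr * gb / n
    return a, b

# Hypothetical validation data: the old model over-predicts, since the
# true outcome probability is logistic(lp - 1), not logistic(lp)
random.seed(0)
lps = [random.uniform(-2, 2) for _ in range(400)]
ys = [1 if random.random() < 1 / (1 + math.exp(-(lp - 1))) else 0
      for lp in lps]
a, b = recalibrate(lps, ys)  # expect a near -1 and b near 1
```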

  9. Mathematical modeling on experimental protocol of glucose adjustment for non-invasive blood glucose sensing

    NASA Astrophysics Data System (ADS)

    Jiang, Jingying; Min, Xiaolin; Zou, Da; Xu, Kexin

    2012-03-01

    Currently, blood glucose concentration levels from OGTT (Oral Glucose Tolerance Test) results are used to build the PLS model in noninvasive blood glucose sensing by Near-Infrared (NIR) spectroscopy. However, the single dynamic trend of blood glucose concentration produced by an OGTT is not varied enough to provide the comprehensive data needed to make the PLS model robust and accurate. In this talk, with the final purpose of improving the stability and accuracy of the PLS model, we introduce an integrated minimal model (IMM) of the glucose metabolism system. First, by adjusting parameters that represent different metabolism characteristics and individual differences, comparatively ideal regulation programs were customized for different groups of people, even individuals. Second, with different glucose input types (oral method, intravenous injection, or intravenous drip), we obtained various changes of blood glucose concentration. By studying these methods of adjusting blood glucose concentration, we can customize corresponding experimental protocols of glucose adjustment for different people in noninvasive blood glucose sensing and supply comprehensive data for the PLS model.

  10. Multivariate Strategies in Functional Magnetic Resonance Imaging

    ERIC Educational Resources Information Center

    Hansen, Lars Kai

    2007-01-01

    We discuss aspects of multivariate fMRI modeling, including the statistical evaluation of multivariate models and means for dimensional reduction. In a case study we analyze linear and non-linear dimensional reduction tools in the context of a "mind reading" predictive multivariate fMRI model.

  11. Voxelwise multivariate analysis of multimodality magnetic resonance imaging.

    PubMed

    Naylor, Melissa G; Cardenas, Valerie A; Tosun, Duygu; Schuff, Norbert; Weiner, Michael; Schwartzman, Armin

    2014-03-01

    Most brain magnetic resonance imaging (MRI) studies concentrate on a single MRI contrast or modality, frequently structural MRI. By performing an integrated analysis of several modalities, such as structural, perfusion-weighted, and diffusion-weighted MRI, new insights may be attained to better understand the underlying processes of brain diseases. We compare two voxelwise approaches: (1) fitting multiple univariate models, one for each outcome and then adjusting for multiple comparisons among the outcomes and (2) fitting a multivariate model. In both cases, adjustment for multiple comparisons is performed over all voxels jointly to account for the search over the brain. The multivariate model is able to account for the multiple comparisons over outcomes without assuming independence because the covariance structure between modalities is estimated. Simulations show that the multivariate approach is more powerful when the outcomes are correlated and, even when the outcomes are independent, the multivariate approach is just as powerful or more powerful when at least two outcomes are dependent on predictors in the model. However, multiple univariate regressions with Bonferroni correction remain a desirable alternative in some circumstances. To illustrate the power of each approach, we analyze a case control study of Alzheimer's disease, in which data from three MRI modalities are available.
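    The first approach above, multiple univariate models with adjustment over outcomes, reduces in its simplest form to a Bonferroni rule: with m outcomes, each univariate test at a voxel is compared against α/m. A minimal sketch with hypothetical p-values:

```python
def bonferroni_significant(pvals, alpha=0.05):
    """Multiple univariate tests: declare significance only when a
    p-value beats the Bonferroni-adjusted threshold alpha/m."""
    m = len(pvals)
    return [p < alpha / m for p in pvals]

# Three modality-specific p-values at one voxel (hypothetical)
flags = bonferroni_significant([0.001, 0.02, 0.4])
# With m = 3 the per-test threshold is 0.05/3 ~= 0.0167, so only the
# first outcome survives correction
```

    The multivariate model avoids this penalty by estimating the covariance between modalities instead of assuming independence.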

  12. Modelling the rate of change in a longitudinal study with missing data, adjusting for contact attempts.

    PubMed

    Akacha, Mouna; Hutton, Jane L

    2011-05-10

    The Collaborative Ankle Support Trial (CAST) is a longitudinal trial of treatments for severe ankle sprains in which interest lies in the rate of improvement, the effectiveness of reminders and potentially informative missingness. A model is proposed for continuous longitudinal data with non-ignorable or informative missingness, taking into account the nature of attempts made to contact initial non-responders. The model combines a non-linear mixed model for the outcome model with logistic regression models for the reminder processes. A sensitivity analysis is used to contrast this model with the traditional selection model, where we adjust for missingness by modelling the missingness process. The conclusions that recovery is slower, and less satisfactory with age and more rapid with below knee cast than with a tubular bandage do not alter materially across all models investigated. The results also suggest that phone calls are most effective in retrieving questionnaires.

  13. Determining sources of elevated salinity in pre-hydraulic fracturing water quality data using a multivariate discriminant analysis model

    NASA Astrophysics Data System (ADS)

    Lautz, L. K.; Hoke, G. D.; Lu, Z.; Siegel, D. I.

    2013-12-01

    Hydraulic fracturing has the potential to introduce saline water into the environment due to migration of deep formation water to shallow aquifers and/or discharge of flowback water to the environment during transport and disposal. It is challenging to definitively identify whether elevated salinity is associated with hydraulic fracturing, in part because of the real possibility of other anthropogenic sources of salinity in the human-impacted watersheds in which drilling is taking place, and because some formation water is naturally present in shallow groundwater aquifers. We combined new and published chemistry data for private drinking water wells sampled across five southern New York (NY) counties overlying the Marcellus Shale (Broome, Chemung, Chenango, Steuben, and Tioga). Measurements include Cl, Na, Br, I, Ca, Mg, Ba, SO4, and Sr. We compared this baseline groundwater quality data in NY, now under a moratorium on hydraulic fracturing, with published chemistry data for 6 different potential sources of elevated salinity in shallow groundwater, including Appalachian Basin formation water, road salt runoff, septic effluent, landfill leachate, animal waste, and water softeners. A multivariate random number generator was used to create a synthetic, low salinity (< 20 mg/L Cl) groundwater data set (n=1000) based on the statistical properties of the observed low salinity groundwater. The synthetic, low salinity groundwater was then artificially mixed with variable proportions of different potential sources of salinity to explore chemical differences between groundwater impacted by formation water, road salt runoff, septic effluent, landfill leachate, animal waste, and water softeners. We then trained a multivariate, discriminant analysis model on the resulting data set to classify observed high salinity groundwater (> 20 mg/L Cl) as being affected by formation water, road salt, septic effluent, landfill leachate, animal waste, or water softeners. Single elements or pairs of

  14. Recent Changes Leading to Subsequent Changes: Extensions of Multivariate Latent Difference Score Models

    ERIC Educational Resources Information Center

    Grimm, Kevin J.; An, Yang; McArdle, John J.; Zonderman, Alan B.; Resnick, Susan M.

    2012-01-01

    Latent difference score models (e.g., McArdle & Hamagami, 2001) are extended to include effects from prior changes to subsequent changes. This extension of latent difference scores allows for testing hypotheses where recent changes, as opposed to recent levels, are a primary predictor of subsequent changes. These models are applied to…

  15. Structured Latent Curve Models for the Study of Change in Multivariate Repeated Measures

    ERIC Educational Resources Information Center

    Blozis, Shelley A.

    2004-01-01

    This article considers a structured latent curve model for multiple repeated measures. In a structured latent curve model, a smooth nonlinear function characterizes the mean response. A first-order Taylor polynomial taken with regard to the mean function defines elements of a restricted factor matrix that may include parameters that enter…

  16. A Thurstonian Pairwise Choice Model with Univariate and Multivariate Spline Transformations.

    ERIC Educational Resources Information Center

    De Soete, Geert; Winsberg, Suzanne

    1993-01-01

    A probabilistic choice model, based on L. L. Thurstone's Law of Comparative Judgment Case V, is developed for paired comparisons data about psychological stimuli. The model assumes that each stimulus is measured on a small number of physical variables. An algorithm for estimating parameters is illustrated with real data. (SLD)

  17. Longitudinal Relationships Between Productive Activities and Functional Health in Later Years: A Multivariate Latent Growth Curve Modeling Approach.

    PubMed

    Choi, Eunhee; Tang, Fengyan; Kim, Sung-Geun; Turk, Phillip

    2016-10-01

    This study examined the longitudinal relationships between functional health in later years and three types of productive activities: volunteering, full-time, and part-time work. Using the data from five waves (2000-2008) of the Health and Retirement Study, we applied multivariate latent growth curve modeling to examine the longitudinal relationships among individuals 50 or over. Functional health was measured by limitations in activities of daily living. Individuals who volunteered or worked either full time or part time exhibited a slower decline in functional health than nonparticipants. Significant associations were also found between initial functional health and longitudinal changes in productive activity participation. This study provides additional support for the benefits of productive activities later in life; engagement in volunteering and employment are indeed associated with better functional health in middle and old age.

  18. Modeling the High Speed Research Cycle 2B Longitudinal Aerodynamic Database Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, E. A.; Proffitt, M. S.

    1999-01-01

    The data for the longitudinal non-dimensional aerodynamic coefficients in the High Speed Research Cycle 2B aerodynamic database were modeled using polynomial expressions identified with an orthogonal function modeling technique. The discrepancy between the tabular aerodynamic data and the polynomial models was tested and shown to be less than 15 percent for drag, lift, and pitching moment coefficients over the entire flight envelope. Most of this discrepancy was traced to smoothing local measurement noise and to the omission of mass case 5 data in the modeling process. A simulation check case showed that the polynomial models provided a compact and accurate representation of the nonlinear aerodynamic dependencies contained in the HSR Cycle 2B tabular aerodynamic database.

  19. Assessment of goat fat depots using ultrasound technology and multiple multivariate prediction models.

    PubMed

    Peres, A M; Dias, L G; Joy, M; Teixeira, A

    2010-02-01

    Assessment of fat depots for several goat body parts is an expensive and time-consuming task requiring a trained technician. Therefore, the establishment of models to predict fat depots based on data requiring simpler and easier procedures, such as ultrasound measurements, that could be carried out in vivo, would be a major advantage. An interesting alternative to the use of multiple linear regression models is the use of partial least squares or artificial neural network models because they allow the establishment of one model to simultaneously predict different fat depots of interest. In this work, the applicability of these models to simultaneously predict 7 goat fat depots (subcutaneous fat, intermuscular fat, total carcass fat, omental fat, kidney and pelvic fat, mesenteric fat, and total body fat) was investigated. Although satisfactory correlation and prediction results were obtained using the multiple partial least squares model (cross-verification and validation R(2) and standard prediction error values between 0.66 and 0.98 and 247 and 2,168, respectively), the best global correlation and prediction performances were achieved with the multiple radial basis function artificial neural network (verification and validation R(2) and standard prediction error values between 0.82 and 0.96 and 304 and 1,707, respectively). These 2 multiple models allowed correlating and predicting simultaneously the 7 goat fat depots based on the goat BW and on only 2 ultrasonic measures (lumbar subcutaneous fat between fifth and sixth vertebrae and the fat depth at the third sternebra). Moreover, both multiple models showed better results compared with those obtained with multiple linear regression models proposed in previous work.

  20. Parental concern about vaccine safety in Canadian children partially immunized at age 2: a multivariable model including system level factors.

    PubMed

    MacDonald, Shannon E; Schopflocher, Donald P; Vaudry, Wendy

    2014-01-01

    Children who begin but do not fully complete the recommended series of childhood vaccines by 2 y of age are a much larger group than those who receive no vaccines. While parents who refuse all vaccines typically express concern about vaccine safety, it is critical to determine what influences parents of 'partially' immunized children. This case-control study examined whether parental concern about vaccine safety was responsible for partial immunization, and whether other personal or system-level factors played an important role. A random sample of parents of partially and completely immunized 2 y old children were selected from a Canadian regional immunization registry and completed a postal survey assessing various personal and system-level factors. Unadjusted odds ratios (OR) and adjusted ORs (aOR) were calculated with logistic regression. While vaccine safety concern was associated with partial immunization (OR 7.338, 95% CI 4.138-13.012), other variables were more strongly associated and reduced the strength of the relationship between concern and partial immunization in multivariable analysis (aOR 2.829, 95% CI 1.151-6.957). Other important factors included perceived disease susceptibility and severity (aOR 4.629, 95% CI 2.017-10.625), residential mobility (aOR 3.908, 95% CI 2.075-7.358), daycare use (aOR 0.310, 95% CI 0.144-0.671), number of needles administered at each visit (aOR 7.734, 95% CI 2.598-23.025) and access to a regular physician (aOR 0.219, 95% CI 0.057-0.846). While concern about vaccine safety may be addressed through educational strategies, this study suggests that additional program and policy-level strategies may positively impact immunization uptake.
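    The unadjusted and adjusted ORs above come from logistic regression coefficients: OR = exp(β), with a 95% confidence interval of exp(β ± 1.96·SE). A generic sketch of that conversion (the β and SE below are hypothetical, chosen only to produce an OR of the same shape as the study's adjusted estimate, not taken from its data):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic regression coefficient and its standard
    error into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for a binary exposure
or_, lo, hi = odds_ratio_ci(beta=1.04, se=0.46)
# or_ ~= 2.83 with CI roughly (1.15, 6.97)
```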

  1. Multivariate Statistical Models for Predicting Sediment Yields from Southern California Watersheds

    USGS Publications Warehouse

    Gartner, Joseph E.; Cannon, Susan H.; Helsel, Dennis R.; Bandurraga, Mark

    2009-01-01

    Debris-retention basins in Southern California are frequently used to protect communities and infrastructure from the hazards of flooding and debris flow. Empirical models that predict sediment yields are used to determine the size of the basins. Such models have been developed using analyses of records of the amount of material removed from debris retention basins, associated rainfall amounts, measures of watershed characteristics, and wildfire extent and history. In this study we used multiple linear regression methods to develop two updated empirical models to predict sediment yields for watersheds located in Southern California. The models are based on both new and existing measures of volume of sediment removed from debris retention basins, measures of watershed morphology, and characterization of burn severity distributions for watersheds located in Ventura, Los Angeles, and San Bernardino Counties. The first model presented reflects conditions in watersheds located throughout the Transverse Ranges of Southern California and is based on volumes of sediment measured following single storm events with known rainfall conditions. The second model presented is specific to conditions in Ventura County watersheds and was developed using volumes of sediment measured following multiple storm events. To relate sediment volumes to triggering storm rainfall, a rainfall threshold was developed to identify storms likely to have caused sediment deposition. A measured volume of sediment deposited by numerous storms was parsed among the threshold-exceeding storms based on relative storm rainfall totals. The predictive strength of the two models developed here, and of previously-published models, was evaluated using a test dataset consisting of 65 volumes of sediment yields measured in Southern California. The evaluation indicated that the model developed using information from single storm events in the Transverse Ranges best predicted sediment yields for watersheds in San

  2. Comprehensive ripeness-index for prediction of ripening level in mangoes by multivariate modelling of ripening behaviour

    NASA Astrophysics Data System (ADS)

    Eyarkai Nambi, Vijayaram; Thangavel, Kuladaisamy; Manickavasagan, Annamalai; Shahir, Sultan

    2017-01-01

    Prediction of ripeness level in climacteric fruits is essential for post-harvest handling. An index capable of predicting ripening level with minimum inputs would be highly beneficial to handlers, processors and researchers in the fruit industry. A study was conducted with Indian mango cultivars to develop a ripeness index and associated model. Changes in physicochemical, colour and textural properties were measured throughout the ripening period, and the period was classified into five stages (unripe, early ripe, partially ripe, ripe and over ripe). Multivariate regression techniques, including partial least squares regression, principal component regression and multiple linear regression, were compared and evaluated for prediction accuracy. A multiple linear regression model with 12 parameters was found most suitable for ripening prediction. A variable reduction method was then adopted to simplify the developed model; good predictions were still achieved with either 2 or 3 variables (total soluble solids, colour and acidity). Cross-validation was performed to increase robustness, and the proposed ripening index proved effective in predicting ripening stage. The three-variable model would be suitable for commercial applications where reasonable accuracy is sufficient, whereas the 12-variable model can be used to obtain more precise results in research and development applications.

  3. A Methodology for Validation of Complex Multi-Variable Military Computerized Models.

    DTIC Science & Technology

    1980-12-01

    limitations of the model. Due to the high uncertainty of reality, care must be taken by users to avoid interpreting results as a prediction of what will...results. Variable-Parameter. There are two primary features of Hermann's variable-parameter criterion: comparisons between the simulation's parameters...stage. Statistical estimation and hypothesis testing are the primary tools for this step. The third stage attempts to test the model's ability to

  4. A Disturbance Rejection for Model Predictive Control Using a Multivariable Disturbance Observer

    NASA Astrophysics Data System (ADS)

    Tange, Yoshio; Matsui, Tetsuro; Matsumoto, Koji; Nishida, Hideyuki

    Model predictive control has been widely used in industrial applications, and more efficient and more precise control is required to meet growing demands such as energy savings and fewer emissions in industrial plants. In this paper, we focus on step response model based predictive control, which is one of the most widely applied predictive control methods, and propose a new disturbance rejection method to overcome control performance degradation caused by unmeasured ramp-like disturbances.

  5. Content-adaptive pentary steganography using the multivariate generalized Gaussian cover model

    NASA Astrophysics Data System (ADS)

    Sedighi, Vahid; Fridrich, Jessica; Cogranne, Rémi

    2015-03-01

    The vast majority of steganographic schemes for digital images stored in the raster format limit the amplitude of embedding changes to the smallest possible value. In this paper, we investigate the possibility to further improve the empirical security by allowing the embedding changes in highly textured areas to have a larger amplitude and thus embedding there a larger payload. Our approach is entirely model driven in the sense that the probabilities with which the cover pixels should be changed by a certain amount are derived from the cover model to minimize the power of an optimal statistical test. The embedding consists of two steps. First, the sender estimates the cover model parameters, the pixel variances, when modeling the pixels as a sequence of independent but not identically distributed generalized Gaussian random variables. Then, the embedding change probabilities for changing each pixel by 1 or 2, which can be transformed to costs for practical embedding using syndrome-trellis codes, are computed by solving a pair of non-linear algebraic equations. Using rich models and selection-channel-aware features, we compare the security of our scheme based on the generalized Gaussian model with pentary versions of two popular embedding algorithms: HILL and S-UNIWARD.

  6. The HHS-HCC Risk Adjustment Model for Individual and Small Group Markets under the Affordable Care Act

    PubMed Central

    Kautter, John; Pope, Gregory C; Ingber, Melvin; Freeman, Sara; Patterson, Lindsey; Cohen, Michael; Keenan, Patricia

    2014-01-01

    Beginning in 2014, individuals and small businesses are able to purchase private health insurance through competitive Marketplaces. The Affordable Care Act (ACA) provides for a program of risk adjustment in the individual and small group markets in 2014 as Marketplaces are implemented and new market reforms take effect. The purpose of risk adjustment is to lessen or eliminate the influence of risk selection on the premiums that plans charge. The risk adjustment methodology includes the risk adjustment model and the risk transfer formula. This article is the second of three in this issue of the Review that describe the Department of Health and Human Services (HHS) risk adjustment methodology and focuses on the risk adjustment model. In our first companion article, we discuss the key issues and choices in developing the methodology. In this article, we present the risk adjustment model, which is named the HHS-Hierarchical Condition Categories (HHS-HCC) risk adjustment model. We first summarize the HHS-HCC diagnostic classification, which is the key element of the risk adjustment model. Then the data and methods, results, and evaluation of the risk adjustment model are presented. Fifteen separate models are developed. For each age group (adult, child, and infant), a model is developed for each cost sharing level (platinum, gold, silver, and bronze metal levels, as well as catastrophic plans). Evaluation of the risk adjustment models shows good predictive accuracy, both for individuals and for groups. Lastly, this article provides examples of how the model output is used to calculate risk scores, which are an input into the risk transfer formula. Our third companion paper describes the risk transfer formula. PMID:25360387
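To make concrete how an HCC-style risk score is assembled from model output (the article's final section), here is a minimal sketch: a demographic coefficient plus coefficients for each hierarchical condition category (HCC), with more severe conditions suppressing less severe ones. All cell labels and coefficients below are invented; the actual HHS-HCC factors are published in the federal rate notices.

```python
# Invented demographic cells and HCC coefficients, for illustration only.
DEMOGRAPHIC = {("adult", "F", "45-49"): 0.35}
HCC_COEF = {
    "HCC019_diabetes_without_complication": 0.32,
    "HCC020_diabetes_with_complication": 0.45,
    "HCC130_asthma": 0.28,
}
# Hierarchies: a more severe condition suppresses its less severe sibling.
HIERARCHY = {
    "HCC020_diabetes_with_complication": {"HCC019_diabetes_without_complication"},
}

def risk_score(demo_cell, hccs):
    """Demographic factor plus the coefficients of the enrollee's HCCs
    after applying the hierarchy suppressions."""
    suppressed = set()
    for h in hccs:
        suppressed |= HIERARCHY.get(h, set())
    return DEMOGRAPHIC[demo_cell] + sum(
        HCC_COEF[h] for h in hccs if h not in suppressed
    )

score = risk_score(("adult", "F", "45-49"),
                   ["HCC019_diabetes_without_complication", "HCC130_asthma"])
```

Scores computed this way are the per-enrollee inputs to the risk transfer formula described in the companion paper.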

  7. Outcome Prediction after Traumatic Brain Injury: Comparison of the Performance of Routinely Used Severity Scores and Multivariable Prognostic Models

    PubMed Central

    Majdan, Marek; Brazinova, Alexandra; Rusnak, Martin; Leitgeb, Johannes

    2017-01-01

    Objectives: Prognosis of outcome after traumatic brain injury (TBI) is important in the assessment of quality of care and can help improve treatment and outcome. The aim of this study was to compare the prognostic value of relatively simple injury severity scores between each other and against a gold standard model – the IMPACT-extended (IMP-E) multivariable prognostic model. Materials and Methods: For this study, 866 patients with moderate/severe TBI from Austria were analyzed. The prognostic performances of the Glasgow coma scale (GCS), GCS motor (GCSM) score, abbreviated injury scale for the head region, Marshall computed tomographic (CT) classification, and Rotterdam CT score were compared side-by-side and against the IMP-E score. The area under the receiver operating characteristics curve (AUC) and Nagelkerke's R2 were used to assess the prognostic performance. Outcomes at the Intensive Care Unit, at hospital discharge, and at 6 months (mortality and unfavorable outcome) were used as end-points. Results: Comparing AUCs and R2s of the same model across four outcomes, only little variation was apparent. A similar pattern is observed when comparing the models between each other: Variation of AUCs <±0.09 and R2s by up to ±0.17 points suggest that all scores perform similarly in predicting outcomes at various points (AUCs: 0.65–0.77; R2s: 0.09–0.27). All scores performed significantly worse than the IMP-E model (with AUC > 0.83 and R2 > 0.42 for all outcomes): AUCs were worse by 0.10–0.22 (P < 0.05) and R2s were worse by 0.22–0.39 points. Conclusions: All tested simple scores can provide reasonably valid prognosis. However, it is confirmed that well-developed multivariable prognostic models outperform these scores significantly and should be used for prognosis in patients after TBI wherever possible. PMID:28149077
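The side-by-side comparison in this abstract rests on the AUC. A minimal sketch of that computation, using the Mann-Whitney interpretation of the AUC (probability that a positive case outscores a negative one, ties counted as half); the scores and outcomes below are invented for illustration:

```python
def auc(scores, outcomes):
    """AUC = P(score of a positive case > score of a negative case),
    counting ties as 1/2 (Mann-Whitney U / (n_pos * n_neg))."""
    pos = [s for s, o in zip(scores, outcomes) if o == 1]
    neg = [s for s, o in zip(scores, outcomes) if o == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

outcome = [0, 0, 0, 1, 0, 1, 1, 1]           # 1 = unfavourable outcome
simple_score = [2, 3, 5, 4, 3, 6, 5, 7]      # a hypothetical simple severity score
model_score = [0.1, 0.2, 0.3, 0.6, 0.25, 0.8, 0.7, 0.9]  # a multivariable model

auc_simple = auc(simple_score, outcome)
auc_model = auc(model_score, outcome)        # the better-discriminating score
```

Comparing such AUCs across scores and end-points, as the study does, shows which prognostic tool separates outcomes best.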

  8. Lithium-ion Open Circuit Voltage (OCV) curve modelling and its ageing adjustment

    NASA Astrophysics Data System (ADS)

    Lavigne, L.; Sabatier, J.; Francisco, J. Mbala; Guillemard, F.; Noury, A.

    2016-08-01

    This paper is a contribution to lithium-ion batteries modelling taking into account aging effects. It first analyses the impact of aging on electrode stoichiometry and then on lithium-ion cell Open Circuit Voltage (OCV) curve. Through some hypotheses and an appropriate definition of the cell state of charge, it shows that each electrode equilibrium potential, but also the whole cell equilibrium potential can be modelled by a polynomial that requires only one adjustment parameter during aging. An adjustment algorithm, based on the idea that for two fixed OCVs, the state of charge between these two equilibrium states is unique for a given aging level, is then proposed. Its efficiency is evaluated on a battery pack constituted of four cells.
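In the spirit of the abstract's idea (a polynomial OCV curve plus a single ageing adjustment parameter), a minimal sketch: the polynomial coefficients and the capacity-scaling parameter `alpha` below are invented and do not come from the paper.

```python
import numpy as np

# Invented polynomial coefficients (highest power first), illustration only.
poly = np.array([1.2, -2.8, 2.1, 0.35, 3.2])

def ocv(soc, alpha=1.0):
    """Open circuit voltage at a given state of charge (0..1); `alpha` is a
    single ageing parameter that rescales the effective stoichiometric
    window of the electrodes (alpha <= 1 as the cell ages)."""
    return float(np.polyval(poly, alpha * soc))

v_fresh = ocv(0.5)             # fresh cell
v_aged = ocv(0.5, alpha=0.9)   # same nominal SOC after some ageing
```

An adjustment algorithm like the one proposed would estimate `alpha` from two measured OCV points, exploiting the fact that the charge between two fixed OCVs is unique for a given ageing level.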

  9. Modelling goal adjustment in social relationships: Two experimental studies with children and adults.

    PubMed

    Thomsen, Tamara; Kappes, Cathleen; Schwerdt, Laura; Sander, Johanna; Poller, Charlotte

    2016-10-23

    In two experiments, we investigated observational learning in social relationships as one possible pathway to the development of goal adjustment processes. In the first experiment, 56 children (M = 9.29 years) observed their parent as a model; in the second, 50 adults (M = 32.27 years) observed their romantic partner. Subjects were randomly assigned to three groups: goal engagement (GE), goal disengagement (GD), or control group (CO), and were asked to solve (unsolvable) puzzles. Before trying to solve the puzzles by themselves, subjects observed the instructed model, who was told to continue with the same puzzle (GE) or to switch to the next puzzle (GD). Results show that children in the GE group switched significantly less than those in the GD or CO group. There was no difference between the GD group and the CO group. Adults in the GE group switched significantly less than those in the GD or CO group, whereas subjects in the GD group switched significantly more often than the CO group. Statement of contribution: What is already known on this subject? Previous research focused mainly on the functions of goal adjustment processes. It rarely considered processes and conditions that contribute to the development of goal engagement and goal disengagement. There are only two cross-sectional studies that directly investigate this topic. Previous research that claims observational learning as a pathway of learning emotion regulation or adjustment processes has relied only on correlational methods and thus does not allow any causal interpretations. Previous research, albeit claiming a life span focus, mostly investigated goal adjustment processes in one specific age group (mainly adults). There is no study that investigates the same processes in different age groups. What does this study add? In our two studies, we focus on the conditions of goal adjustment processes and sought to demonstrate one potential pathway of learning or changing the application of goal adjustment

  10. Multi-variable mathematical models for the air-cathode microbial fuel cell system

    DOE PAGES

    Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.; ...

    2016-03-10

    This research adopted the version control system into the model construction for the single chamber air-cathode microbial fuel cell (MFC) system, to understand the interrelation of biological, chemical, and electrochemical reactions. The anodic steady state model was used to consider the influence of chemical species diffusion and electric migration on MFC performance. In the cathodic steady state model, the mass transport and reactions in a multi-layer, abiotic cathode and multi-bacteria cathode biofilm were simulated. Transport of hydroxide was assumed for cathodic pH change. This assumption is an alternative to the typical notion of proton consumption during oxygen reduction to explain elevated cathode pH. The cathodic steady state model provided the power density and polarization curve performance results that can be compared to an experimental MFC system. Another aspect considered was the relative contributions of platinum catalyst and microbes on the cathode to the oxygen reduction reaction (ORR). Simulation results showed that the biocatalyst in a cathode that includes a Pt/C catalyst likely plays a minor role in ORR, contributing up to 8% of the total power calculated by the models.

  11. Multi-variable mathematical models for the air-cathode microbial fuel cell system

    NASA Astrophysics Data System (ADS)

    Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.; Regan, John M.; Mench, Matthew M.

    2016-05-01

    This research adopted the version control system into the model construction for the single chamber air-cathode microbial fuel cell (MFC) system, to understand the interrelation of biological, chemical, and electrochemical reactions. The anodic steady state model was used to consider the chemical species diffusion and electric migration influence to the MFC performance. In the cathodic steady state model, the mass transport and reactions in a multi-layer, abiotic cathode and multi-bacteria cathode biofilm were simulated. Transport of hydroxide was assumed for cathodic pH change. This assumption is an alternative to the typical notion of proton consumption during oxygen reduction to explain elevated cathode pH. The cathodic steady state model provided the power density and polarization curve performance results that can be compared to an experimental MFC system. Another aspect considered was the relative contributions of platinum catalyst and microbes on the cathode to the oxygen reduction reaction (ORR). Simulation results showed that the biocatalyst in a cathode that includes a Pt/C catalyst likely plays a minor role in ORR, contributing up to 8% of the total power calculated by the models.

  12. Multi-variable mathematical models for the air-cathode microbial fuel cell system

    SciTech Connect

    Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.; Regan, John M.; Mench, Matthew M.

    2016-03-10

    This research adopted the version control system into the model construction for the single chamber air-cathode microbial fuel cell (MFC) system, to understand the interrelation of biological, chemical, and electrochemical reactions. The anodic steady state model was used to consider the chemical species diffusion and electric migration influence to the MFC performance. In the cathodic steady state model, the mass transport and reactions in a multi-layer, abiotic cathode and multi-bacteria cathode biofilm were simulated. Transport of hydroxide was assumed for cathodic pH change. This assumption is an alternative to the typical notion of proton consumption during oxygen reduction to explain elevated cathode pH. The cathodic steady state model provided the power density and polarization curve performance results that can be compared to an experimental MFC system. Another aspect we considered was the relative contributions of platinum catalyst and microbes on the cathode to the oxygen reduction reaction (ORR). Simulation results showed that the biocatalyst in a cathode that includes a Pt/C catalyst likely plays a minor role in ORR, contributing up to 8% of the total power calculated by the models.

  13. Prediction of water quality parameters from SAR images by using multivariate and texture analysis models

    NASA Astrophysics Data System (ADS)

    Shareef, Muntadher A.; Toumi, Abdelmalek; Khenchaf, Ali

    2014-10-01

    Remote sensing is one of the most important tools for monitoring and helping to estimate and predict Water Quality Parameters (WQPs). Traditional methods for monitoring pollutants generally rely on optical images. In this paper, we present a new approach based on Synthetic Aperture Radar (SAR) images, which we used to map the region of interest and to estimate the WQPs. To achieve this estimation quality, texture analysis is exploited to improve the regression models. These models are established and developed to estimate six commonly considered water quality parameters from texture parameters extracted from TerraSAR-X data. For this purpose, the Gray Level Co-occurrence Matrix (GLCM) is used to estimate several regression models using six texture parameters: contrast, correlation, energy, homogeneity, entropy and variance. For each predicted model, an accuracy value is computed from the probability value given by the regression analysis of each parameter. To validate our approach, we used two datasets of water regions for the training and testing process. To evaluate and validate the proposed model, we applied it to the training set. In the last stage, we used fuzzy K-means clustering to generalize the water quality estimation to the whole water region extracted from the segmented TerraSAR-X image. The obtained results showed a good statistical correlation between the in situ water quality and TerraSAR-X data, and also demonstrated that the characteristics obtained by texture analysis are able to monitor and predict the distribution of WQPs in large rivers with high accuracy.
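The texture step named in the abstract can be sketched directly: build a gray-level co-occurrence matrix (GLCM) for a horizontal pixel offset and derive two of the six texture parameters (contrast and homogeneity). The 4x4 image and the 4 gray levels below are illustrative, not SAR data.

```python
import numpy as np

# Tiny illustrative "image" quantized to 4 gray levels.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = 4

# Co-occurrence counts for offset (0, 1): each pixel and its right neighbour.
glcm = np.zeros((levels, levels))
for i in range(img.shape[0]):
    for j in range(img.shape[1] - 1):
        glcm[img[i, j], img[i, j + 1]] += 1
glcm /= glcm.sum()  # normalise counts to joint probabilities

# Two of the six GLCM texture parameters used in the abstract.
idx_i, idx_j = np.indices(glcm.shape)
contrast = float(((idx_i - idx_j) ** 2 * glcm).sum())
homogeneity = float((glcm / (1.0 + (idx_i - idx_j) ** 2)).sum())
```

In the study's pipeline, features like these (computed over windows of the SAR image) become the predictors in the per-parameter regression models.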

  14. Scrutiny of Appropriate Model Error Specification in Multivariate Assimilation Framework using mHM

    NASA Astrophysics Data System (ADS)

    Rakovec, O.; Noh, S. J.; Kumar, R.; Samaniego, L. E.

    2015-12-01

    Reliable and accurate prediction of regional-scale water fluxes and states poses a great challenge to the scientific community. Several sectors of society (municipalities, agriculture, energy, etc.) may benefit from successful solutions that appropriately quantify uncertainties in hydro-meteorological prediction systems, with particular attention to extreme weather conditions. Increased availability and quality of near real-time data enable better understanding of the predictive skill of forecasting frameworks. To address this issue, automatic model-observation integration is required for appropriate model initialization. In this study, the effects of noise specification on the quality of hydrological forecasts are scrutinized via a data assimilation system. This framework has been developed by incorporating the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) with a particle filtering (PF) approach used for model state updating. In comparison with previous works, a lag PF is considered to better account for the response times of internal hydrologic processes. The objective of this study is to assess the benefits of model state updating for prediction of water fluxes and states up to a 3-month-ahead forecast using particle filtering. The efficiency of this system is demonstrated in 10 large European basins. We evaluate the model skill for five assimilation scenarios using observed (1) discharge (Q); (2) MODIS evapotranspiration (ET); (3) GRACE terrestrial total water storage (TWS) anomaly; (4) ESA-CCI soil moisture; and (5) the combination of Q, ET, TWS, and SM in a hindcast experiment (2004-2010). The effects of error perturbations on both the analysis and the forecasts are presented, and optimal trade-offs are discussed. While large perturbations are preferred for the analysis time step, steep deterioration is observed for longer lead times, for which more conservative error measures should be considered.
From all the datasets, complementary GRACE TWS data together

  15. Army Physical Therapy Productivity According to the Performance Based Adjustment Model

    DTIC Science & Technology

    2008-05-02

    FTE) data from 34 military treatment facilities (MTFs). Results: Statistical process control identified extensive special cause variation in Army PT... Treatment Facility (MTF) efficiency with specialty specific productivity benchmarks established by the Performance Based Adjustment Model (PBAM...generates 1.2 RVUs and a 15-minute ultrasound treatment generates .21 RVUs of workload. See Appendix A for a list of commonly used physical therapy

  16. Predictive modeling in Clostridium acetobutylicum fermentations employing Raman spectroscopy and multivariate data analysis for real-time culture monitoring

    NASA Astrophysics Data System (ADS)

    Zu, Theresah N. K.; Liu, Sanchao; Germane, Katherine L.; Servinsky, Matthew D.; Gerlach, Elliot S.; Mackie, David M.; Sund, Christian J.

    2016-05-01

    The coupling of optical fibers with Raman instrumentation has proven to be effective for real-time monitoring of chemical reactions and fermentations when combined with multivariate statistical data analysis. Raman spectroscopy is relatively fast, with little interference from the water peak present in fermentation media. Medical research has explored this technique for analysis of mammalian cultures for potential diagnosis of some cancers. Other organisms studied via this route include Escherichia coli, Saccharomyces cerevisiae, and some Bacillus sp., though very little work has been performed on Clostridium acetobutylicum cultures. C. acetobutylicum is a gram-positive anaerobic bacterium, which is highly sought after due to its ability to use a broad spectrum of substrates and produce useful byproducts through the well-known Acetone-Butanol-Ethanol (ABE) fermentation. In this work, real-time Raman data was acquired from C. acetobutylicum cultures grown on glucose. Samples were collected concurrently for comparative off-line product analysis. Partial-least squares (PLS) models were built both for agitated cultures and for static cultures from both datasets. Media components and metabolites monitored include glucose, butyric acid, acetic acid, and butanol. Models were cross-validated with independent datasets. Experiments with agitation were more favorable for modeling with goodness of fit (QY) values of 0.99 and goodness of prediction (Q2Y) values of 0.98. Static experiments did not model as well as agitated experiments. Raman results showed the static experiments were chaotic, especially during and shortly after manual sampling.
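The calibration step described here (relating spectra to analyte concentrations) uses PLS in the paper; as a simplified NumPy-only stand-in, the sketch below performs principal component regression on synthetic "spectra". The Gaussian peak, concentrations, and noise level are all invented for illustration.

```python
import numpy as np

# Synthetic data: 20 "spectra" of 100 channels, each a Gaussian peak whose
# height scales with a known analyte concentration (e.g. glucose, g/L),
# plus a small amount of noise. Illustration only.
rng = np.random.default_rng(0)
conc = np.linspace(0.0, 10.0, 20)
peak = np.exp(-0.5 * ((np.arange(100) - 50) / 5.0) ** 2)
spectra = conc[:, None] * peak + 0.01 * rng.standard_normal((20, 100))

# Principal component regression: project centered spectra onto the first
# principal direction, then regress concentration on that score.
Xc = spectra - spectra.mean(axis=0)
yc = conc - conc.mean()
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
t = Xc @ vt[0]          # scores on the first component
b = (t @ yc) / (t @ t)  # one-dimensional regression in score space

def predict(spectrum):
    """Predict concentration for a new spectrum."""
    s = spectrum - spectra.mean(axis=0)
    return float(conc.mean() + b * (s @ vt[0]))

pred = predict(spectra[5])  # should recover conc[5] closely
```

Real PLS chooses components to maximize covariance with the response and handles several analytes at once, which is why it is preferred for multi-metabolite fermentation monitoring.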

  17. Unifying Amplitude and Phase Analysis: A Compositional Data Approach to Functional Multivariate Mixed-Effects Modeling of Mandarin Chinese.

    PubMed

    Hadjipantelis, P Z; Aston, J A D; Müller, H G; Evans, J P

    2015-04-03

    Mandarin Chinese is characterized by being a tonal language; the pitch (or F0) of its utterances carries considerable linguistic information. However, speech samples from different individuals are subject to changes in amplitude and phase, which must be accounted for in any analysis that attempts to provide a linguistically meaningful description of the language. A joint model for amplitude, phase, and duration is presented, which combines elements from functional data analysis, compositional data analysis, and linear mixed effects models. By decomposing functions via a functional principal component analysis, and connecting registration functions to compositional data analysis, a joint multivariate mixed effect model can be formulated, which gives insights into the relationship between the different modes of variation as well as their dependence on linguistic and nonlinguistic covariates. The model is applied to the COSPRO-1 dataset, a comprehensive database of spoken Taiwanese Mandarin, containing approximately 50,000 phonetically diverse sample F0 contours (syllables), and reveals that phonetic information is jointly carried by both amplitude and phase variation. Supplementary materials for this article are available online.

  18. Unifying Amplitude and Phase Analysis: A Compositional Data Approach to Functional Multivariate Mixed-Effects Modeling of Mandarin Chinese

    PubMed Central

    Hadjipantelis, P. Z.; Aston, J. A. D.; Müller, H. G.; Evans, J. P.

    2015-01-01

    Mandarin Chinese is characterized by being a tonal language; the pitch (or F 0) of its utterances carries considerable linguistic information. However, speech samples from different individuals are subject to changes in amplitude and phase, which must be accounted for in any analysis that attempts to provide a linguistically meaningful description of the language. A joint model for amplitude, phase, and duration is presented, which combines elements from functional data analysis, compositional data analysis, and linear mixed effects models. By decomposing functions via a functional principal component analysis, and connecting registration functions to compositional data analysis, a joint multivariate mixed effect model can be formulated, which gives insights into the relationship between the different modes of variation as well as their dependence on linguistic and nonlinguistic covariates. The model is applied to the COSPRO-1 dataset, a comprehensive database of spoken Taiwanese Mandarin, containing approximately 50,000 phonetically diverse sample F 0 contours (syllables), and reveals that phonetic information is jointly carried by both amplitude and phase variation. Supplementary materials for this article are available online. PMID:26692591

  19. Higher Dimensional Clayton–Oakes Models for Multivariate Failure Time Data

    PubMed Central

    Prentice, R. L.

    2016-01-01

    Summary The Clayton–Oakes bivariate failure time model is extended to dimensions m > 2 in a manner that allows unspecified marginal survivor functions for all dimensions less than m. Special cases that allow unspecified marginal survivor functions of dimension q with q < m, while making some provisions for dependencies of dimension greater than q, are also described. PMID:27738350

  20. Multisite-multivariable sensitivity analysis of distributed watershed models: enhancing the perceptions from computationally frugal methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...

  1. A Model for Multivariate Prediction of Academic Success of "Transfer Students" in Pharmacy School

    ERIC Educational Resources Information Center

    Palmieri, Anthony, III

    1977-01-01

    A model for the systematic and rapid evaluation of transfer students is presented. Transfer students are defined as having completed organic chemistry and other basic courses and are ready to enter the first professional year in a standard 2:3 program. Data for the predictor equations are from the University of Wyoming School of Pharmacy. (LBH)

  2. MULTIVARIATE STATISTICAL MODELS FOR EFFECTS OF PM AND COPOLLUTANTS IN A DAILY TIME SERIES EPIDEMIOLOGY STUDY

    EPA Science Inventory

    Most analyses of daily time series epidemiology data relate mortality or morbidity counts to PM and other air pollutants by means of single-outcome regression models using multiple predictors, without taking into account the complex statistical structure of the predictor variable...

  3. Multivariate Dynamic Modeling to Investigate Human Adaptation to Space Flight: Initial Concepts

    NASA Technical Reports Server (NTRS)

    Shelhamer, Mark; Mindock, Jennifer; Zeffiro, Tom; Krakauer, David; Paloski, William H.; Lumpkins, Sarah

    2014-01-01

    The array of physiological changes that occur when humans venture into space for long periods presents a challenge to future exploration. The changes are conventionally investigated independently, but a complete understanding of adaptation requires a conceptual basis founded in integrative physiology, aided by appropriate mathematical modeling. NASA is in the early stages of developing such an approach.

  4. Multivariate Dynamical Modeling to Investigate Human Adaptation to Space Flight: Initial Concepts

    NASA Technical Reports Server (NTRS)

    Shelhamer, Mark; Mindock, Jennifer; Zeffiro, Tom; Krakauer, David; Paloski, William H.; Lumpkins, Sarah

    2014-01-01

    The array of physiological changes that occur when humans venture into space for long periods presents a challenge to future exploration. The changes are conventionally investigated independently, but a complete understanding of adaptation requires a conceptual basis founded in integrative physiology, aided by appropriate mathematical modeling. NASA is in the early stages of developing such an approach.

  5. On the Power of Multivariate Latent Growth Curve Models to Detect Correlated Change

    ERIC Educational Resources Information Center

    Hertzog, Christopher; Lindenberger, Ulman; Ghisletta, Paolo; Oertzen, Timo von

    2006-01-01

    We evaluated the statistical power of single-indicator latent growth curve models (LGCMs) to detect correlated change between two variables (covariance of slopes) as a function of sample size, number of longitudinal measurement occasions, and reliability (measurement error variance). Power approximations following the method of Satorra and Saris…

  6. Adjusting for unmeasured confounding due to either of two crossed factors with a logistic regression model.

    PubMed

    Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar

    2016-08-15

    Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Utilizing a Multi-Variate Approach in the Reorganization of a University Academic Department Based upon a Dynamic Macro Model of Change in Education.

    ERIC Educational Resources Information Center

    Pedras, Melvin J.

    This paper presents the model used, in a multivariate fashion, to reorganize the Department of Industrial Technology Education at the University of Idaho, thereby testing the model's effectiveness. The model is a product of a seminar held in West Germany in 1986, in which a group of professional educators from several countries produced a generic model…

  8. [Applying temporally-adjusted land use regression models to estimate ambient air pollution exposure during pregnancy].

    PubMed

    Zhang, Y J; Xue, F X; Bai, Z P

    2017-03-06

    The impact of maternal air pollution exposure on offspring health has received much attention. Precise and feasible exposure estimation is particularly important for clarifying exposure-response relationships and reducing heterogeneity among studies. Temporally-adjusted land use regression (LUR) models are exposure assessment methods developed in recent years that have the advantage of high spatial-temporal resolution. Studies on the health effects of outdoor air pollution exposure during pregnancy have increasingly been carried out using this model. In China, research applying LUR models has been done mostly at the model construction stage, and findings from related epidemiological studies have rarely been reported. In this paper, the sources of heterogeneity and research progress of meta-analyses on the associations between air pollution and adverse pregnancy outcomes are analyzed. The methodological characteristics of temporally-adjusted LUR models are introduced. Current epidemiological studies on adverse pregnancy outcomes that applied this model are systematically summarized. Recommendations for the development and application of LUR models in China are presented. This will encourage more valid exposure predictions during pregnancy in large-scale epidemiological studies on the health effects of air pollution in China.

  9. Development and validation of multivariable predictive model for thromboembolic events in lymphoma patients.

    PubMed

    Antic, Darko; Milic, Natasa; Nikolovski, Srdjan; Todorovic, Milena; Bila, Jelena; Djurdjevic, Predrag; Andjelic, Bosko; Djurasinovic, Vladislava; Sretenovic, Aleksandra; Vukovic, Vojin; Jelicic, Jelena; Hayman, Suzanne; Mihaljevic, Biljana

    2016-10-01

    Lymphoma patients are at increased risk of thromboembolic events, but thromboprophylaxis in these patients is largely underused. We sought to develop and validate a simple model, based on individual clinical and laboratory patient characteristics, that would designate lymphoma patients at risk for a thromboembolic event. The study population included 1,820 lymphoma patients who were treated in the Lymphoma Departments at the Clinics of Hematology, Clinical Center of Serbia and Clinical Center Kragujevac. The model was developed using data from a derivation cohort (n = 1,236), and further assessed in a validation cohort (n = 584). Sixty-five patients (5.3%) in the derivation cohort and 34 patients (5.8%) in the validation cohort developed thromboembolic events. The variables independently associated with risk for thromboembolism were: previous venous and/or arterial events, mediastinal involvement, BMI > 30 kg/m(2), reduced mobility, extranodal localization, development of neutropenia, and hemoglobin level < 100 g/L. Based on the risk model score, the population was divided into the following risk categories: low (score 0-1), intermediate (score 2-3), and high (score >3). For patients classified at risk (intermediate and high-risk scores), the model produced a negative predictive value of 98.5%, a positive predictive value of 25.1%, a sensitivity of 75.4%, and a specificity of 87.5%. A high-risk score had a positive predictive value of 65.2%. The diagnostic performance measures retained similar values in the validation cohort. The developed prognostic Thrombosis Lymphoma (ThroLy) score is more specific for lymphoma patients than any other available score targeting thrombosis in cancer patients. Am. J. Hematol. 91:1014-1019, 2016. © 2016 Wiley Periodicals, Inc.
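The score logic described above can be sketched as a simple additive risk index. The seven predictors are taken from the abstract, but the one-point-per-factor weighting below is a hypothetical assumption (the published ThroLy score may weight factors differently); only the band cut-offs (low 0-1, intermediate 2-3, high >3) are quoted from the text.

```python
# Illustrative sketch of a ThroLy-style additive risk index.
# ASSUMPTION: one point per present factor is hypothetical weighting;
# only the predictors and the score bands come from the abstract.

RISK_FACTORS = [
    "previous_venous_or_arterial_event",
    "mediastinal_involvement",
    "bmi_over_30",
    "reduced_mobility",
    "extranodal_localization",
    "neutropenia",
    "hemoglobin_below_100",
]

def risk_score(patient):
    """Sum one (assumed) point per risk factor present."""
    return sum(1 for factor in RISK_FACTORS if patient.get(factor, False))

def risk_category(score):
    """Map a score onto the bands quoted in the abstract."""
    if score <= 1:
        return "low"
    if score <= 3:
        return "intermediate"
    return "high"

patient = {"bmi_over_30": True, "reduced_mobility": True, "neutropenia": True}
category = risk_category(risk_score(patient))
```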

  10. Sutherland-Type Trigonometric Models, Trigonometric Invariants and Multivariable Polynomials II:. E7 Case

    NASA Astrophysics Data System (ADS)

    López Vieyra, J. C.; García, M. A. G.; Turbiner, A. V.

    It is shown that the E7 trigonometric Olshanetsky-Perelomov Hamiltonian, when written in terms of the Fundamental Trigonometric Invariants (FTI), is in algebraic form, i.e. has polynomial coefficients, and preserves the infinite flag of polynomial spaces with the characteristic vector α = (1, 2, 2, 2, 3, 3, 4). Its flag coincides with one of the minimal characteristic vectors for the E7 rational model, which in turn coincides with the E7 highest root.

  11. Explaining nitrate pollution pressure on the groundwater resource in Kinshasa using a multivariate statistical modelling approach

    NASA Astrophysics Data System (ADS)

    Mfumu Kihumba, Antoine; Vanclooster, Marnik

    2013-04-01

    Drinking water in Kinshasa, the capital of the Democratic Republic of Congo, is provided by extracting groundwater from the local aquifer, particularly in peripheral areas. The exploited groundwater body is mainly unconfined and located within a continuous detrital aquifer, primarily composed of sedimentary formations. However, the aquifer is subjected to an increasing threat of anthropogenic pollution pressure. Understanding the detailed origin of this pollution pressure is important for sustainable drinking water management in Kinshasa. The present study aims to explain the observed nitrate pollution problem, nitrate being considered as a good tracer for other pollution threats. The analysis is made in terms of physical attributes that are readily available, using a statistical modelling approach. For the nitrate data, use was made of a historical groundwater quality assessment study, for which the data were re-analysed. The physical attributes are related to the topography, land use, geology and hydrogeology of the region. Prior to the statistical modelling, intrinsic and specific vulnerability for nitrate pollution was assessed. This vulnerability assessment showed that the alluvium area in the northern part of the region is the most vulnerable area. This area consists of urban land use with poor sanitation. Re-analysis of the nitrate pollution data demonstrated that the spatial variability of nitrate concentrations in the groundwater body is high, and coherent with the fragmented land use of the region and the intrinsic and specific vulnerability maps. For the statistical modelling, use was made of multiple regression and regression tree analysis. The results demonstrated the significant impact of land use variables on the Kinshasa groundwater nitrate pollution and the need for a detailed delineation of groundwater capture zones around the monitoring stations. Key words: Groundwater, Isotopic, Kinshasa, Modelling, Pollution, Physico-chemical.

  12. Evaluating the Relationship between Team Performance and Joint Attention with Longitudinal Multivariate Mixed Models

    DTIC Science & Technology

    2016-09-23

    models, with an emphasis on exploring the correlation structure between the variances in growth trajectories of team performance and joint attention around estimated means.

  13. Using Green's Functions to initialize and adjust a global, eddying ocean biogeochemistry general circulation model

    NASA Astrophysics Data System (ADS)

    Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.

    2015-11-01

    The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
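The Green's Functions step described above, a linear combination of sensitivity experiments chosen to minimize the model-data misfit, reduces to an ordinary least-squares problem. A toy sketch with a synthetic sensitivity matrix (not ECCO2-Darwin output) might look like:

```python
import numpy as np

# Toy illustration of the Green's Functions step: find the linear
# combination of a handful of sensitivity experiments that minimizes
# the misfit to observations, via ordinary least squares.
# ASSUMPTION: the sensitivity matrix and "observations" are synthetic.

rng = np.random.default_rng(0)

n_obs, n_exp = 50, 6                        # observations, control parameters
G = rng.normal(size=(n_obs, n_exp))         # column j: response to perturbation j
eta_true = np.array([0.5, -0.2, 1.0, 0.0, 0.3, -0.7])
d = G @ eta_true + 0.01 * rng.normal(size=n_obs)   # synthetic observations

# Least-squares estimate of the control-parameter adjustments
eta_hat, *_ = np.linalg.lstsq(G, d, rcond=None)

# Cost relative to the unadjusted baseline (eta = 0)
cost_baseline = np.sum(d ** 2)
cost_adjusted = np.sum((d - G @ eta_hat) ** 2)
```

The recovered combination drives the cost far below the baseline, mirroring the cost-function reduction reported for the adjusted simulation.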

  14. Update on Multi-Variable Parametric Cost Models for Ground and Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda

    2012-01-01

    Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper reports on recent revisions and improvements to our ground telescope cost model and refinements of our understanding of space telescope cost models. One interesting observation is that while space telescopes are 50X to 100X more expensive than ground telescopes, their respective scaling relationships are similar. Another interesting speculation is that the role of technology development may be different between ground and space telescopes. For ground telescopes, the data indicates that technology development tends to reduce cost by approximately 50% every 20 years. But for space telescopes, there appears to be no such cost reduction because we do not tend to re-fly similar systems. Thus, instead of reducing cost, 20 years of technology development may be required to enable a doubling of space telescope capability. Other findings include: mass should not be used to estimate cost; spacecraft and science instrument costs account for approximately 50% of total mission cost; and, integration and testing accounts for only about 10% of total mission cost.

  15. Towards a Multi-Variable Parametric Cost Model for Ground and Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Henrichs, Todd

    2016-01-01

    Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes: OTA Cost ∝ X × D^(1.75 ± 0.05) × λ^(−0.5 ± 0.25) × T^(−0.25) × e^(−0.04 Y). Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advance and process improvements); and, for space telescopes, cost associated with wavelength performance is balanced by cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e. multiple aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost savings does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than for a monolithic mirror).
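As a worked example of the hypothesized scaling relationship, the sketch below evaluates the central exponents quoted in the abstract; the scale factor X and all input values are arbitrary placeholders, not values from the paper.

```python
import math

# Worked example of the hypothesized OTA cost scaling, central exponents:
#   Cost ~ X * D**1.75 * wavelength**-0.5 * T**-0.25 * exp(-0.04 * Y)
# ASSUMPTION: X and the inputs below are placeholders for illustration.

def ota_relative_cost(D, wavelength, T, Y, X=1.0):
    """Relative OTA cost for aperture D (m), operating wavelength,
    temperature T (K) and development-year offset Y."""
    return X * D ** 1.75 * wavelength ** -0.5 * T ** -0.25 * math.exp(-0.04 * Y)

# Doubling the aperture multiplies the cost by 2**1.75 (about 3.4x)
ratio = ota_relative_cost(4.0, 1.0, 300.0, 0) / ota_relative_cost(2.0, 1.0, 300.0, 0)
```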

  16. Identification of Patients Affected by Mitral Valve Prolapse with Severe Regurgitation: A Multivariable Regression Model

    PubMed Central

    Songia, Paola; Chiesa, Mattia; Alamanni, Francesco; Tremoli, Elena

    2017-01-01

    Background. Mitral valve prolapse (MVP) is the most common cause of severe mitral regurgitation. Besides echocardiography, up to now there are no reliable biomarkers available for the identification of this pathology. We aim to generate a predictive model, based on circulating biomarkers, able to identify MVP patients with the highest accuracy. Methods. We analysed 43 patients who underwent mitral valve repair due to MVP and compared them with 29 matched controls. We assessed oxidative stress status by measuring the oxidized and reduced forms of glutathione with a liquid chromatography-tandem mass spectrometry method. Osteoprotegerin (OPG) plasma levels were measured by an enzyme-linked immunosorbent assay. The combination of these biochemical variables was used to implement several logistic regression models. Results. Oxidative stress levels and OPG concentrations were significantly higher in patients compared to control subjects (0.116 ± 0.007 versus 0.053 ± 0.013 and 1748 ± 100.2 versus 1109 ± 45.3 pg/mL, respectively; p < 0.0001). The best regression model correctly classified 62 samples out of 72, with an accuracy in terms of area under the curve of 0.92. Conclusions. To the best of our knowledge, this is the first study to show a strong association between OPG and oxidative stress status in patients affected by MVP with severe regurgitation. PMID:28261377

  17. Quality-by-Design: Multivariate Model for Multicomponent Quantification in Refining Process of Honey

    PubMed Central

    Li, Xiaoying; Wu, Zhisheng; Feng, Xin; Liu, Shanshan; Yu, Xiaojie; Ma, Qun; Qiao, Yanjiang

    2017-01-01

    Objective: A method for rapid analysis of the refining process of honey was developed based on near-infrared (NIR) spectroscopy. Methods: Partial least squares calibration models were built for the four components after selection of the optimal spectral pretreatment method and latent factors. Results: The models covered samples of different temperatures and time points; the models were therefore robust and universal. Conclusions: These results highlighted that NIR technology could extract information on the critical process and provide essential process knowledge of the honey refining process. SUMMARY A method for rapid analysis of the refining process of honey was developed based on near-infrared (NIR) spectroscopy. Abbreviations used: NIR: Near-infrared; 5-HMF: 5-hydroxymethylfurfural; RMSEP: Root mean square error of prediction; R: correlation coefficient; PRESS: predicted residual error sum of squares; TCM: Traditional Chinese medicine; HPLC: High-performance liquid chromatography; HPLC-DAD: HPLC-diode array detector; PLS: Partial least squares; MSC: Multiplicative scatter correction; RMSECV: Root mean square error of cross validation; RPD: Residual predictive deviation; 1D: 1st-order derivative; SG: Savitzky-Golay smoothing; 2D: 2nd-order derivative. PMID:28216906

  18. Multivariate Calibration and Model Integrity for Wood Chemistry Using Fourier Transform Infrared Spectroscopy

    PubMed Central

    Zhou, Chengfeng; Jiang, Wei; Cheng, Qingzheng; Via, Brian K.

    2015-01-01

    This research addressed a rapid method to monitor hardwood chemical composition by applying Fourier transform infrared (FT-IR) spectroscopy, with particular interest in model performance for interpretation and prediction. Partial least squares (PLS) and principal components regression (PCR) were chosen as the primary models for comparison. Standard laboratory chemistry methods were employed on a mixed genus/species hardwood sample set to collect the original data. PLS was found to provide better predictive capability while PCR exhibited a more precise estimate of loading peaks and suggests that PCR is better for model interpretation of key underlying functional groups. Specifically, when PCR was utilized, an error in peak loading of ±15 cm−1 from the true mean was quantified. Application of the first derivative appeared to assist in improving both PCR and PLS loading precision. Research results identified the wavenumbers important in the prediction of extractives, lignin, cellulose, and hemicellulose and further demonstrated the utility in FT-IR for rapid monitoring of wood chemistry. PMID:26576321

  19. Multivariate near infrared spectroscopy models for predicting methanol and water content in biodiesel.

    PubMed

    Felizardo, Pedro; Baptista, Patrícia; Menezes, José C; Correia, M Joana Neiva

    2007-07-09

    The transesterification of vegetable oils, animal fats or waste oils with an alcohol (such as methanol) in the presence of a homogeneous catalyst (sodium hydroxide or methoxide) is commonly used to produce biodiesel. The quality control of the final product is an important issue, and near infrared (NIR) spectroscopy has recently appeared as an appealing alternative to the conventional analytical methods. The use of NIR spectroscopy for this purpose first involves the development of calibration models to relate the near infrared spectrum of biodiesel with the analytical data. The type of pre-processing technique applied to the data prior to the development of calibration may greatly influence the performance of the model. This work analyses the effect of some commonly used pre-processing techniques, applied prior to partial least squares (PLS) and principal components regression (PCR), on the quality of the calibration models developed to relate the near infrared spectrum of biodiesel to its content of methanol and water. The results confirm the importance of testing various pre-processing techniques. For the water content, the smallest validation and prediction errors were obtained by a combination of a second-order Savitzky-Golay derivative followed by mean centring prior to PLS and PCR, whereas for methanol calibration the best results were obtained with a first-order Savitzky-Golay derivative plus mean centring followed by orthogonal signal correction.
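The pre-processing pipeline described, a derivative followed by mean centring, can be sketched as below. A plain finite difference stands in for the Savitzky-Golay derivative (in practice scipy.signal.savgol_filter is the usual choice), and the spectra are synthetic.

```python
import numpy as np

# Simplified sketch of spectral pre-processing before PLS/PCR:
# derivative along the wavelength axis, then mean centring across the
# calibration set. ASSUMPTION: a finite difference stands in for the
# Savitzky-Golay derivative, and the spectra are synthetic.

rng = np.random.default_rng(1)
spectra = rng.normal(loc=1.0, scale=0.1, size=(10, 200))  # 10 samples x 200 channels

# First-derivative stand-in: difference between adjacent channels
deriv = np.diff(spectra, n=1, axis=1)

# Mean centring: subtract the mean spectrum of the calibration set
centred = deriv - deriv.mean(axis=0)
```

After centring, each wavelength channel has zero mean across the calibration samples, which is the form of input PLS and PCR expect.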

  20. Adjusting exposure limits for long and short exposure periods using a physiological pharmacokinetic model.

    PubMed

    Andersen, M E; MacNaughton, M G; Clewell, H J; Paustenbach, D J

    1987-04-01

    The rationale for adjusting occupational exposure limits for unusual work schedules is to assure, as much as possible, that persons on these schedules are placed at no greater risk of injury or discomfort than persons who work a standard 8 hr/day, 40 hr/week. For most systemic toxicants, the risk index upon which the adjustments are made will be either peak blood concentration or integrated tissue dose, depending on the chemical's presumed mechanism of toxicity. Over the past ten years, at least four different models have been proposed for adjusting exposure limits for unusually short and long work schedules. This paper advocates use of a physiologically-based pharmacokinetic (PB-PK) model for determining adjustment factors for unusual exposure schedules, an approach that should be more accurate than those proposed previously. The PB-PK model requires data on the blood:air and tissue:blood partition coefficients, the rate of metabolism of the chemical, organ volumes, organ blood flows and ventilation rates in humans. Laboratory data on two industrially important chemicals--styrene and methylene chloride--were used to illustrate the PB-PK approach. At inhaled concentrations near their respective 8-hr Threshold Limit Value-Time-weighted averages (TLV-TWAs), both of these chemicals are primarily eliminated from the body by metabolism. For these two chemicals, the appropriate risk indexing parameters are integrated tissue dose or total amount of parent chemical metabolized. Since methylene chloride is metabolized to carbon monoxide, the maximum blood carboxyhemoglobin concentrations also might be useful as an index of risk for this chemical.(ABSTRACT TRUNCATED AT 250 WORDS)
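The contrast between peak concentration and integrated dose can be illustrated with a deliberately simplified one-compartment analogue of a PB-PK model; the uptake and metabolism parameters below are illustrative, not the styrene or methylene chloride values from the paper.

```python
# Deliberately simplified one-compartment analogue of the PB-PK idea:
# blood concentration under constant inhalation exposure with first-order
# metabolic clearance, integrated by forward Euler.
# ASSUMPTION: all parameter values are illustrative placeholders.

def simulate_blood_conc(c_air, hours, uptake=1.0, k_met=0.5, dt=0.01):
    """Integrate dC/dt = uptake * c_air - k_met * C for `hours`."""
    c = 0.0
    for _ in range(int(round(hours / dt))):
        c += dt * (uptake * c_air - k_met * c)
    return c

c8 = simulate_blood_conc(c_air=1.0, hours=8)     # standard shift
c12 = simulate_blood_conc(c_air=1.0, hours=12)   # unusual shift
steady_state = 1.0 * 1.0 / 0.5                   # uptake * c_air / k_met
```

Both shifts approach the same plateau, so for a metabolized chemical the integrated dose, rather than the peak concentration, is what an exposure-limit adjustment must equalize.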

  1. Hydrogeochemical Processes of Groundwater Using Multivariate Statistical Analyses and Inverse Geochemical Modeling in Samrak Park of Nakdong River Basin, Korea

    NASA Astrophysics Data System (ADS)

    Chung, Sang Yong

    2015-04-01

    Multivariate statistical methods and inverse geochemical modelling were used to assess the hydrogeochemical processes of groundwater in the Nakdong River basin. The study area is located in a part of the Nakdong River basin in the Busan Metropolitan City, Korea. Quaternary deposits form the Samrak Park region and are underlain by intrusive rocks of the Bulkuksa group and sedimentary rocks of the Yucheon group of the Cretaceous Period. The Samrak Park region acts as two aquifer systems: an unconfined aquifer and a confined aquifer. The unconfined aquifer consists of upper sand, and the confined aquifer comprises clay, lower sand, gravel and weathered rock. Porosity and hydraulic conductivity of the area are 37 to 59% and 1.7 to 200 m/day, respectively. Depth of the wells ranges from 9 to 77 m. In Piper's trilinear diagram, the CaCl2 type characterized the unconfined aquifer and the NaCl type was dominant for the confined aquifer. By hierarchical cluster analysis (HCA), Group 1 and Group 2 are fully composed of unconfined aquifer and confined aquifer samples, respectively. In factor analysis (FA), Factor 1 is described by the strong loadings of EC, Na, K, Ca, Mg, Cl, HCO3, SO4 and Si, and Factor 2 represents the strong loadings of pH and Al. Based on the Gibbs diagram, the unconfined and confined aquifer samples are scattered discretely in the rock and evaporation areas. The principal hydrogeochemical processes occurring in the confined and unconfined aquifers are ion exchange due to freshening under natural recharge and water-rock interactions followed by evaporation and dissolution. The saturation indices of minerals such as Ca-montmorillonite, dolomite and calcite indicate oversaturation, while albite, gypsum and halite are undersaturated. Inverse geochemical modeling using the PHREEQC code demonstrated that relatively few phases were required to derive the differences in groundwater chemistry along the flow path in the area. It also suggested that dissolution of carbonate and ion exchange

  2. Design and Implementation of a Parallel Multivariate Ensemble Kalman Filter for the Poseidon Ocean General Circulation Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Koblinsky, Chester (Technical Monitor)

    2001-01-01

    A multivariate ensemble Kalman filter (MvEnKF) implemented on a massively parallel computer architecture has been developed for the Poseidon ocean circulation model and tested with a Pacific Basin model configuration. There are about two million prognostic state-vector variables. Parallelism for the data assimilation step is achieved by regionalization of the background-error covariances that are calculated from the phase-space distribution of the ensemble. Each processing element (PE) collects elements of a matrix measurement functional from nearby PEs. To avoid the introduction of spurious long-range covariances associated with finite ensemble sizes, the background-error covariances are given compact support by means of a Hadamard (element by element) product with a three-dimensional canonical correlation function. The methodology and the MvEnKF configuration are discussed. It is shown that the regionalization of the background covariances has a negligible impact on the quality of the analyses. The parallel algorithm is very efficient for large numbers of observations but does not scale well beyond 100 PEs at the current model resolution. On a platform with distributed memory, memory rather than speed is the limiting factor.
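The compact-support step, a Hadamard product between the ensemble covariance and a compactly supported correlation, can be sketched on a 1-D toy grid; the triangular taper below stands in for the three-dimensional canonical correlation function of the MvEnKF, and the ensemble is synthetic.

```python
import numpy as np

# Toy sketch of covariance localization: the raw ensemble covariance is
# multiplied element-wise (Hadamard product) by a compactly supported
# correlation, suppressing spurious long-range covariances that arise
# from a small ensemble. ASSUMPTION: 1-D grid, triangular taper.

rng = np.random.default_rng(2)

n_grid, n_ens = 30, 5
x = np.arange(n_grid, dtype=float)            # 1-D grid coordinates
ensemble = rng.normal(size=(n_ens, n_grid))   # rows: ensemble members

# Background-error covariance estimated from the (small) ensemble
P = np.cov(ensemble, rowvar=False)

# Compactly supported taper: linear decay to zero at distance L
L = 5.0
dist = np.abs(x[:, None] - x[None, :])
taper = np.clip(1.0 - dist / L, 0.0, None)

P_loc = P * taper                             # Hadamard product
```

Beyond the cutoff distance every localized covariance is exactly zero, while the variances on the diagonal are left untouched.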

  3. Transient multivariable sensor evaluation

    DOEpatents

    Vilim, Richard B.; Heifetz, Alexander

    2017-02-21

    A method and system for performing transient multivariable sensor evaluation. The method and system include a computer system for identifying a model form, providing training measurement data, generating a basis vector, monitoring system data from a sensor, loading the system data into non-transient memory, performing an estimation to provide desired data, comparing the system data to the desired data, and outputting an alarm for a defective sensor.

  4. Contemporary Modeling of Gene-by-Environment Effects In Randomized Multivariate Longitudinal Studies

    PubMed Central

    McArdle, John J.; Prescott, Carol A.

    2010-01-01

    There is a great deal of interest in the analysis of genotype by environment interactions (GxE). There are some limitations in the typical models for the analysis of GxE, including well-known statistical problems in identifying interactions and unobserved heterogeneity of persons across groups. The impact of a treatment may depend on the level of an unobserved variable, and this variation may dampen the estimated impact of treatment. A case has been made that genetic variation may sometimes account for unobserved, and hence unaccounted for, heterogeneity. The statistical power associated with the GxE design has been studied in many different ways, and most results show that the small effects expected require relatively large or non-representative samples (i.e., extreme groups). In this report, we describe some alternative approaches, such as randomized designs with multiple measures, multiple groups, multiple occasions, and analyses to identify latent (unobserved) classes of people. These are illustrated with data from the HRS/ADAMs study, examining the relations among episodic memory (based on word recall), APOE4 genotype, and educational attainment (as a proxy for an environmental exposure). Randomized clinical trials (RCT) or randomized field trials (RFT) have multiple strengths in the estimation of causal influences, and we discuss how measured genotypes can be incorporated into these designs. Use of these contemporary modeling techniques often requires different kinds of data be collected and encourages the formation of parsimonious models with fewer overall parameters, allowing specific GxE hypotheses to be investigated with a reasonable statistical foundation. PMID:22472970

  5. Multivariate explanatory model for sporadic carcinoma of the colon in Dukes' stages I and IIa

    PubMed Central

    Villadiego-Sánchez, J.M.; Ortega-Calvo, M.; Pino-Mejías, R.; Cayuela, A.; Iglesias-Bonilla, P.; la Corte, F. García-de; Santos-Lozano, J.M.; Lapetra-Peralta, José

    2009-01-01

    Objective: We previously obtained an explanatory model with six predictor variables: age of the patient, total cholesterol (TC), HDL cholesterol (HDL-C), VLDL cholesterol (VLDL-C), alkaline phosphatase (AP) and the CA 19.9 tumour marker. Our objective in this study was to validate the model by means of the acquisition of new records for an additional analysis. Design: Non-paired case-control study. Setting: Urban and rural hospitals and primary health facilities in Western Andalusia and Extremadura (Spain). Patients: At both the primary care and hospital level, controls were gathered in a prospective manner (n = 275). Cases were collected in both a prospective and a retrospective manner (n = 126). Main outcome measures: Descriptive statistics, logistic regression and bootstrap analysis. Results: AGE (odds ratio 1.02; 95% CI 1.003-1.037) (p = 0.01), TC (odds ratio 0.986; 95% CI 0.980-0.992) (p < 0.001) and CA 19.9 (odds ratio 1.023; 95% CI 1.012-1.034) (p < 0.001) were the variables that showed significant values in the logistic regression and bootstrap analyses. Berkson's bias was statistically assessed. Conclusions: The model, validated by means of logistic regression and bootstrap analysis, contains the variables AGE, TC, and CA 19.9 (three of the original six) and reaches level 4 of 5 according to the criteria of Justice et al. (multiple independent validations) [Ann. Intern. Med. 1999; 130: 515]. PMID:19214243

  6. Modeling of topographic effects on Antarctic sea ice using multivariate adaptive regression splines

    NASA Technical Reports Server (NTRS)

    De Veaux, Richard D.; Gordon, Arnold L.; Comiso, Joey C.; Bacherer, Nadine E.

    1993-01-01

    The role of seafloor topography in the spatial variations of the southern ocean sea ice cover as observed (every other day) by the Nimbus 7 scanning multichannel microwave radiometer satellite in the years 1980, 1983, and 1984 is studied. Bottom bathymetry can affect sea ice surface characteristics because of the basically barotropic circulation of the ocean south of the Antarctic Circumpolar current. The main statistical tool used to quantify this effect is a local nonparametric regression model of sea ice concentration as a function of the depth and its first two derivatives in both meridional and zonal directions. First, we model the relationship of bathymetry to sea ice concentration in two study areas, one over the Maud Rise and the other over the Ross Sea shelf region. The multiple correlation coefficient is found to average 44% in the Maud Rise study area and 62% in the Ross Sea study area over the years 1980, 1983, and 1984. Second, a strategy of dividing the entire Antarctic region into an overlapping mosaic of small areas, or windows, is considered. Keeping the windows small reduces the correlation of bathymetry with other factors such as wind, sea temperature, and distance to the continent. We find that although the form of the model varies from window to window due to the changing role of other relevant environmental variables, we are left with a spatially consistent ordering of the relative importance of the topographic predictors. For a set of three representative days in the Austral winter of 1980, the analysis shows that an average of 54% of the spatial variation in sea ice concentration over the entire ice cover can be attributed to topographic variables. The results thus support the hypothesis that there is a sea ice to bottom bathymetry link. However, this should not undermine the considerable influence of wind, current, and temperature, which affect the ice distribution directly and are partly responsible for the observed bathymetric effects.

  7. A nonlinear model with latent process for cognitive evolution using multivariate longitudinal data

    PubMed Central

    Proust, Cécile; Jacqmin-Gadda, Hélène; Taylor, Jérémy M.G.; Ganiayre, Julien; Commenges, Daniel

    2006-01-01

    Summary: Cognition is not directly measurable. It is assessed using psychometric tests, which can be viewed as quantitative measures of cognition with error. The aim of this paper is to propose a model to describe the evolution in continuous time of unobserved cognition in the elderly and assess the impact of covariates directly on it. The latent cognitive process is defined using a linear mixed model including a Brownian motion and time-dependent covariates. The observed psychometric tests are considered as the results of parametrized nonlinear transformations of it at discrete occasions. Estimation of the parameters contained both in the transformations and in the linear mixed model is achieved by maximizing the observed likelihood, and graphical methods are performed to assess the goodness-of-fit of the model. The method is applied to data from PAQUID, a French prospective cohort study of ageing. PMID:17156275

  8. A PSO-based optimal tuning strategy for constrained multivariable predictive controllers with model uncertainty.

    PubMed

    Nery, Gesner A; Martins, Márcio A F; Kalid, Ricardo

    2014-03-01

    This paper describes the development of a method to optimally tune constrained MPC algorithms with model uncertainty. The proposed method is formulated by using the worst-case control scenario, which is characterized by the Morari resiliency index and the condition number, and a given nonlinear multi-objective performance criterion. The resulting constrained mixed-integer nonlinear optimization problem is solved on the basis of a modified version of the particle swarm optimization technique, because of its effectiveness in dealing with this kind of problem. The performance of this PSO-based tuning method is evaluated through its application to the well-known Shell heavy oil fractionator process.
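A minimal, textbook particle swarm optimizer (not the paper's modified variant) conveys the core of the technique; in the tuning application, the quadratic objective below would be replaced by the constrained multi-objective MPC performance criterion.

```python
import random

# Minimal, textbook particle swarm optimizer -- NOT the paper's modified
# variant -- minimizing a 2-D quadratic. ASSUMPTION: standard inertia and
# acceleration coefficients, illustrative objective.

def pso(objective, dim=2, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=3):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:              # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:             # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Quadratic with minimum at (1, -2)
best, best_val = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)
```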

  9. Environmental influence on coastal phytoplankton and zooplankton diversity: a multivariate statistical model analysis.

    PubMed

    Chou, Wei-Rung; Fang, Lee-Shing; Wang, Wei-Hsien; Tew, Kwee Siong

    2012-09-01

    In a marine ecosystem, the diversity of phytoplankton can influence the diversity of zooplankton, or vice versa, and both can be affected by environmental factors. In this study, we used principal component analysis (PCA) to identify the major sources of influence on the coastal water near an industrial park, followed by construction of a structural equation model (SEM) to determine the direct and indirect effects of the factors on phytoplankton and zooplankton diversity. PCA results indicated that the coastal area was mainly affected by riverine discharge (represented by high PC factor loadings of transparency and turbidity) and seasonal change (represented by temperature). SEM further suggested that both riverine discharge and seasonal influences can directly affect phytoplankton diversity, but only indirectly affect zooplankton diversity via changes in phytoplankton. Using PCA to determine the sources of influence followed by construction of SEM allowed us to understand the relative importance of the environmental factors, direct or indirect, on phytoplankton and zooplankton diversity. When environmental changes occur, a new SEM could be constructed using the same category of physical and biological data and then compared to the current model to verify whether the environmental changes were the cause of alterations in planktonic communities in the area.
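    As a minimal illustration of the PCA step, the sketch below extracts the leading principal component of two correlated water-quality variables using the closed-form eigendecomposition of their 2x2 covariance matrix. The "transparency" and "turbidity" readings are made-up stand-ins, not the study's data.

```python
import math

def pca_2d(xs, ys):
    """Leading principal component of two variables via the closed-form
    eigendecomposition of their 2x2 sample covariance matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam1 = tr / 2 + math.sqrt(tr * tr / 4 - det)
    # Corresponding eigenvector = PC1 factor loadings
    v = (sxy, lam1 - sxx) if abs(sxy) > 1e-12 else (1.0, 0.0)
    norm = math.hypot(*v)
    loadings = (v[0] / norm, v[1] / norm)
    explained = lam1 / tr  # fraction of total variance on PC1
    return loadings, explained

# Hypothetical, strongly correlated "transparency" and "turbidity" readings
trans = [1.0, 2.0, 3.0, 4.0, 5.0]
turb = [2.1, 3.9, 6.2, 8.0, 9.9]
loadings, explained = pca_2d(trans, turb)
```

    High loadings of both variables on the same component, as in this toy case, are what the abstract interprets as a riverine-discharge factor.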

  10. Hyper-Fit: Fitting Linear Models to Multidimensional Data with Multivariate Gaussian Uncertainties

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Obreschkow, D.

    2015-09-01

    Astronomical data is often uncertain with errors that are heteroscedastic (different for each data point) and covariant between different dimensions. Assuming that a set of D-dimensional data points can be described by a (D - 1)-dimensional plane with intrinsic scatter, we derive the general likelihood function to be maximised to recover the best fitting model. Alongside the mathematical description, we also release the hyper-fit package for the R statistical language (http://github.com/asgr/hyper.fit) and a user-friendly web interface for online fitting (http://hyperfit.icrar.org). The hyper-fit package offers access to a large number of fitting routines, includes visualisation tools, and is fully documented in an extensive user manual. Most of the hyper-fit functionality is accessible via the web interface. In this paper, we include applications to toy examples and to real astronomical data from the literature: the mass-size, Tully-Fisher, Fundamental Plane, and mass-spin-morphology relations. In most cases, the hyper-fit solutions are in good agreement with published values, but uncover more information regarding the fitted model.
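    The full hyper-fit likelihood accommodates covariant, heteroscedastic errors and intrinsic scatter; as a reduced special case (per-point y-errors only, no covariance or intrinsic scatter), a line can be fitted by inverse-variance weighted least squares. The sketch below uses synthetic data, not an example from the paper.

```python
def wls_line(x, y, sigma):
    """Inverse-variance weighted least squares fit of y = m*x + b,
    where sigma gives the (heteroscedastic) uncertainty of each y."""
    w = [1.0 / s ** 2 for s in sigma]
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    delta = S * Sxx - Sx * Sx
    m = (S * Sxy - Sx * Sy) / delta
    b = (Sxx * Sy - Sx * Sxy) / delta
    return m, b

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.9, 5.2, 6.8, 9.1]          # roughly y = 2x + 1
sigma = [0.1, 0.2, 0.1, 0.3, 0.2]      # per-point uncertainties
m, b = wls_line(x, y, sigma)
```

    The hyper-fit generalization replaces the scalar weights with full per-point covariance matrices and adds an intrinsic-scatter term to the likelihood.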

  11. Interfacial free energy adjustable phase field crystal model for homogeneous nucleation.

    PubMed

    Guo, Can; Wang, Jincheng; Wang, Zhijun; Li, Junjie; Guo, Yaolin; Huang, Yunhao

    2016-05-18

    To describe the homogeneous nucleation process, an interfacial free energy adjustable phase-field crystal (IPFC) model was proposed by reconstructing the energy functional of the original phase field crystal (PFC) methodology. Compared with the original PFC model, the additional interface term in the IPFC model can effectively adjust the magnitude of the interfacial free energy without affecting the equilibrium phase diagram or the interfacial energy anisotropy. The IPFC model overcomes the limitation that the interfacial free energy of the original PFC model is much lower than theoretical results. Using the IPFC model, we investigated some basic issues in homogeneous nucleation. From the simulation viewpoint, we carried out in situ observation of the process of cluster fluctuation and obtained snapshots quite similar to those of colloidal crystallization experiments. We also counted the size distribution of crystal-like clusters and the nucleation rate. Our simulations show that the size distribution is independent of the evolution time and that the nucleation rate remains constant after a period of relaxation, both consistent with experimental observations. The linear relation between the logarithmic nucleation rate and the reciprocal driving force also conforms to steady-state nucleation theory.

  12. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, all the more in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood of the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.
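    A stage-damage function, the simple approach the abstract contrasts with multi-variable models, maps water depth to a relative loss. A minimal sketch with illustrative, uncalibrated support points (not values from the study):

```python
def stage_damage(depth_m, points=((0.0, 0.0), (0.5, 0.2), (1.0, 0.45),
                                  (2.0, 0.7), (4.0, 1.0))):
    """Piecewise-linear stage-damage function: water depth -> loss ratio.
    The support points are illustrative, not calibrated values."""
    if depth_m <= points[0][0]:
        return points[0][1]
    for (d0, r0), (d1, r1) in zip(points, points[1:]):
        if depth_m <= d1:
            return r0 + (r1 - r0) * (depth_m - d0) / (d1 - d0)
    return points[-1][1]  # saturate at total loss

loss_ratio = stage_damage(1.5)
```

    A multi-variable model would replace the single depth input with several predictors (e.g. building type, precaution, contamination), which is where the additional estimation uncertainty at the meso-scale arises.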

  13. Parametric Adjustments to the Rankine Vortex Wind Model for Gulf of Mexico Hurricanes

    DTIC Science & Technology

    2012-11-01

    Gulf of Mexico hurricanes show considerable differences between the resulting wind speeds and data. The differences are used to guide the development of adjustment factors to improve the wind fields resulting from the Rankine Vortex model. The corrected model shows a significant improvement in the shape, size, and wind speed contours for 14 out of 17 hurricanes examined. The effect on wave fields resulting from the original and modified wind fields are on the order of 4 m, which is important for the estimation of extreme wave

  14. Adjustment of the k-ω SST turbulence model for prediction of airfoil characteristics near stall

    NASA Astrophysics Data System (ADS)

    Matyushenko, A. A.; Garbaruk, A. V.

    2016-11-01

    A version of the k-ω SST turbulence model adjusted for flow around airfoils at high Reynolds numbers is presented. The modified version decreases eddy viscosity and significantly improves the accuracy of prediction of aerodynamic characteristics in a wide range of angles of attack. However, the considered reduction of eddy viscosity destroys calibration of the model, which leads to decreasing accuracy of skin-friction coefficient prediction even for relatively simple wall-bounded turbulent flows. Therefore, the area of applicability of the suggested modification is limited to flows around airfoils.

  15. A NEW MULTIVARIATE MEASUREMENT ERROR MODEL WITH ZERO-INFLATED DIETARY DATA, AND ITS APPLICATION TO DIETARY ASSESSMENT.

    PubMed

    Zhang, Saijuan; Midthune, Douglas; Guenther, Patricia M; Krebs-Smith, Susan M; Kipnis, Victor; Dodd, Kevin W; Buckman, Dennis W; Tooze, Janet A; Freedman, Laurence; Carroll, Raymond J

    2011-06-01

    In the United States the preferred method of obtaining dietary intake data is the 24-hour dietary recall, yet the measure of most interest is usual or long-term average daily intake, which is impossible to measure. Thus, usual dietary intake is assessed with considerable measurement error. Also, diet represents numerous foods, nutrients and other components, each of which has distinctive attributes. Sometimes, it is useful to examine intake of these components separately, but increasingly nutritionists are interested in exploring them collectively to capture overall dietary patterns. Consumption of these components varies widely: some are consumed daily by almost everyone, while others are episodically consumed so that 24-hour recall data are zero-inflated. In addition, they are often correlated with each other. Finally, it is often preferable to analyze the amount of a dietary component relative to the amount of energy (calories) in a diet because dietary recommendations often vary with energy level. The quest to understand overall dietary patterns of usual intake has to this point reached a standstill. There are no statistical methods or models available to model such complex multivariate data with its measurement error and zero inflation. This paper proposes the first such model, and it proposes the first workable solution to fit such a model. After describing the model, we use survey-weighted MCMC computations to fit the model, with uncertainty estimation coming from balanced repeated replication. The methodology is illustrated through an application to estimating the population distribution of the Healthy Eating Index-2005 (HEI-2005), a multi-component dietary quality index involving ratios of interrelated dietary components to energy, among children aged 2-8 in the United States. 
We pose a number of interesting questions about the HEI-2005 and provide answers that were not previously within the realm of possibility, and we indicate ways that our

  16. Asymptotically Normal and Efficient Estimation of Covariate-Adjusted Gaussian Graphical Model

    PubMed Central

    Chen, Mengjie; Ren, Zhao; Zhao, Hongyu; Zhou, Harrison

    2015-01-01

    A tuning-free procedure is proposed to estimate the covariate-adjusted Gaussian graphical model. For each finite subgraph, this estimator is asymptotically normal and efficient. As a consequence, a confidence interval can be obtained for each edge. The procedure enjoys easy implementation and efficient computation through parallel estimation on subgraphs or edges. We further apply the asymptotic normality result to perform support recovery through edge-wise adaptive thresholding. This support recovery procedure is called ANTAC, standing for Asymptotically Normal estimation with Thresholding after Adjusting Covariates. ANTAC outperforms other methodologies in the literature in a range of simulation studies. We apply ANTAC to identify gene-gene interactions using an eQTL dataset. Our result achieves better interpretability and accuracy in comparison with CAMPE. PMID:27499564

  17. A new multivariate statistical model for change detection in images acquired by homogeneous and heterogeneous sensors.

    PubMed

    Prendes, Jorge; Chabert, Marie; Pascal, Frederic; Giros, Alain; Tourneret, Jean-Yves

    2015-03-01

    Remote sensing images are commonly used to monitor the evolution of the Earth's surface. This surveillance can be conducted by detecting changes between images acquired at different times and possibly by different kinds of sensors. A representative case is when an optical image of a given area is available and a new image is acquired in an emergency situation (resulting from a natural disaster, for instance) by a radar satellite. In such a case, images with heterogeneous properties have to be compared for change detection. This paper proposes a new approach for similarity measurement between images acquired by heterogeneous sensors. The approach exploits the considered sensors' physical properties, especially the associated measurement noise models and local joint distributions. These properties are inferred through manifold learning. The resulting similarity measure has been successfully applied to detect changes between many kinds of images, including pairs of optical images and pairs of optical-radar images.

  18. Accounting for sex differences in PTSD: A multi-variable mediation model

    PubMed Central

    Christiansen, Dorte M.; Hansen, Maj

    2015-01-01

    Background Approximately twice as many females as males are diagnosed with posttraumatic stress disorder (PTSD). However, little is known about why females report more PTSD symptoms than males. Prior studies have generally focused on few potential mediators at a time and have often used methods that were not ideally suited to test for mediation effects. Prior research has identified a number of individual risk factors that may contribute to sex differences in PTSD severity, although these cannot fully account for the increased symptom levels in females when examined individually. Objective The present study is the first to systematically test the hypothesis that a combination of pre-, peri-, and posttraumatic risk factors more prevalent in females can account for sex differences in PTSD severity. Method The study was a quasi-prospective questionnaire survey assessing PTSD and related variables in 73.3% of all Danish bank employees exposed to bank robbery during the period from April 2010 to April 2011. Participants filled out questionnaires 1 week (T1, N=450) and 6 months after the robbery (T2, N=368; 61.1% females). Mediation was examined using an analysis designed specifically to test a multiple mediator model. Results Females reported more PTSD symptoms than males and higher levels of neuroticism, depression, physical anxiety sensitivity, peritraumatic fear, horror, and helplessness (the A2 criterion), tonic immobility, panic, dissociation, negative posttraumatic cognitions about self and the world, and feeling let down. These variables were included in the model as potential mediators. The combination of risk factors significantly mediated the association between sex and PTSD severity, accounting for 83% of the association. Conclusions The findings suggest that females report more PTSD symptoms because they experience higher levels of associated risk factors. 
The results are relevant to other trauma populations and to other trauma-related psychiatric disorders.
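    The study tested a multiple-mediator model; as a reduced sketch of the underlying idea, the snippet below computes a single-mediator product-of-coefficients (a × b) decomposition with plain OLS on synthetic data. The variables and effect sizes are invented, not the study's.

```python
def mean(v):
    return sum(v) / len(v)

def simple_slope(x, y):
    """OLS slope of y on a single predictor x."""
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)

def two_predictor_slopes(x1, x2, y):
    """OLS slopes of y on (x1, x2), via centered 2x2 normal equations."""
    center = lambda v: [vi - mean(v) for vi in v]
    x1c, x2c, yc = center(x1), center(x2), center(y)
    s11 = sum(a * a for a in x1c)
    s22 = sum(a * a for a in x2c)
    s12 = sum(a * b for a, b in zip(x1c, x2c))
    s1y = sum(a * b for a, b in zip(x1c, yc))
    s2y = sum(a * b for a, b in zip(x2c, yc))
    det = s11 * s22 - s12 * s12
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

# Invented data: sex (0/1) -> risk factor (mediator) -> symptom severity
sex = [0, 0, 0, 0, 1, 1, 1, 1]
mediator = [1.0, 1.2, 0.9, 1.1, 2.0, 2.2, 1.9, 2.1]
symptoms = [2.1, 2.4, 1.9, 2.2, 4.0, 4.3, 3.8, 4.1]

a = simple_slope(sex, mediator)                 # path a: sex -> mediator
direct, b = two_predictor_slopes(sex, mediator, symptoms)  # path b adjusts for sex
indirect = a * b                                # mediated (indirect) effect
total = simple_slope(sex, symptoms)             # total effect = direct + indirect
```

    The proportion mediated is `indirect / total`; the study's multiple-mediator analysis generalizes this by summing indirect paths over many mediators with bootstrapped confidence intervals.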

  19. Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction

    NASA Astrophysics Data System (ADS)

    Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.

    2015-12-01

    Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. 
Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.
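    SNOW17 is a temperature-index model, so its melt rate can be sketched as a degree-day relation; the melt factor below is the kind of parameter an albedo-informed adjustment would act on. The values are illustrative, not CBRFC's calibration.

```python
def daily_melt_mm(temp_c, melt_factor=4.0, base_temp_c=0.0):
    """Temperature-index (degree-day) snowmelt: melt occurs only above the
    base temperature. melt_factor is in mm per degree-day (illustrative)."""
    return melt_factor * max(temp_c - base_temp_c, 0.0)

temps = [-2.0, 1.0, 3.5, 5.0]                # hypothetical daily mean temperatures
melt = sum(daily_melt_mm(t) for t in temps)  # total melt over the period, mm
```

    A dust-on-snow adjustment of the kind described above would effectively increase the melt rate during dust-darkened periods, here representable as a larger `melt_factor` or a MODDRFS-informed increment to the temperature input.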

  20. A model of mother-child adjustment in Arab Muslim immigrants to the US

    PubMed Central

    Hough, Edythe S; Templin, Thomas N; Kulwicki, Anahid; Ramaswamy, Vidya; Katz, Anne

    2009-01-01

    We examined the mother-child adjustment and child behavior problems in Arab Muslim immigrant families residing in the U.S.A. The sample of 635 mother-child dyads was comprised of mothers who had emigrated in 1989 or later and had at least one early adolescent child between the ages of 11 and 15 who was also willing to participate. Arabic-speaking research assistants collected the data from the mothers and children using established measures of maternal and child stressors, coping, and social support; maternal distress; parent-child relationship; and child behavior problems. A structural equation model (SEM) was specified a priori with 17 predicted pathways. With a few exceptions, the final SEM model was highly consistent with the proposed model and had a good fit to the data. The model accounted for 67% of the variance in child behavior problems. Child stressors, mother-child relationship, and maternal stressors were the causal variables that contributed the most to child behavior problems. The model also accounted for 27% of the variance in mother-child relationship. Child active coping, child gender, mother’s education, and maternal distress were all predictive of the mother-child relationship. Mother-child relationship also mediated the effects of maternal distress and child active coping on child behavior problems. These findings indicate that immigrant mothers contribute greatly to adolescent adjustment, both as a source of risk and protection. These findings also suggest that intervening with immigrant mothers to reduce their stress and strengthening the parent-child relationship are two important areas for promoting adolescent adjustment. PMID:19758737

  1. A spatial model of bird abundance as adjusted for detection probability

    USGS Publications Warehouse

    Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.

    2009-01-01

    Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.
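    The second step, adjusting predicted abundance for imperfect detection and the area effectively sampled, amounts to a simple correction of the observed counts; a toy sketch with hypothetical numbers (the study estimates the detection probability itself from the survey data):

```python
def adjusted_density(count, detection_prob, area_sampled_ha):
    """Estimated density: observed count corrected for the probability of
    detecting an individual and the area effectively surveyed (hectares)."""
    if not 0.0 < detection_prob <= 1.0:
        raise ValueError("detection probability must be in (0, 1]")
    return count / detection_prob / area_sampled_ha

# 12 birds counted, 60% detection probability, 2.5 ha effectively sampled
density = adjusted_density(12, 0.6, 2.5)  # birds per hectare
```

    Dividing by the detection probability inflates the count toward the number actually present, which is why unadjusted counts systematically underestimate abundance.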

  2. Mark-specific proportional hazards model with multivariate continuous marks and its application to HIV vaccine efficacy trials

    PubMed Central

    Sun, Yanqing; Li, Mei; Gilbert, Peter B.

    2013-01-01

    For time-to-event data with finitely many competing risks, the proportional hazards model has been a popular tool for relating the cause-specific outcomes to covariates (Prentice and others, 1978. The analysis of failure time in the presence of competing risks. Biometrics 34, 541–554). Inspired by previous research in HIV vaccine efficacy trials, the cause of failure is replaced by a continuous mark observed only in subjects who fail. This article studies an extension of this approach to allow a multivariate continuum of competing risks, to better account for the fact that the candidate HIV vaccines tested in efficacy trials have contained multiple HIV sequences, with a purpose to elicit multiple types of immune response that recognize and block different types of HIV viruses. We develop inference for the proportional hazards model in which the regression parameters depend parametrically on the marks, to avoid the curse of dimensionality, and the baseline hazard depends nonparametrically on both time and marks. Goodness-of-fit tests are constructed based on generalized weighted martingale residuals. The finite-sample performance of the proposed methods is examined through extensive simulations. The methods are applied to a vaccine efficacy trial to examine whether and how certain antigens represented inside the vaccine are relevant for protection or anti-protection against the exposing HIVs. PMID:22764174

  3. Sugar and acid content of Citrus prediction modeling using FT-IR fingerprinting in combination with multivariate statistical analysis.

    PubMed

    Song, Seung Yeob; Lee, Young Koung; Kim, In-Jung

    2016-01-01

    A high-throughput screening system was established to identify Citrus lines with higher sugar and acid contents using Fourier transform infrared (FT-IR) spectroscopy in combination with multivariate analysis. FT-IR spectra confirmed typical spectral differences between the frequency regions of 950-1100 cm(-1), 1300-1500 cm(-1), and 1500-1700 cm(-1). Principal component analysis (PCA) and subsequent partial least square-discriminant analysis (PLS-DA) were able to discriminate five Citrus lines into three separate clusters corresponding to their taxonomic relationships. Quantitative predictive modeling of sugar and acid contents of Citrus fruits was established using partial least square regression algorithms on the FT-IR spectra. The regression coefficients (R(2)) between predicted and measured sugar and acid content values were 0.99. These results demonstrate that, by using FT-IR spectra and applying quantitative prediction modeling of Citrus sugar and acid contents, excellent Citrus lines can be detected early with greater accuracy.

  4. Multivariate General Linear Models (MGLM) on Riemannian Manifolds with Applications to Statistical Analysis of Diffusion Weighted Images

    PubMed Central

    Kim, Hyunwoo J.; Adluru, Nagesh; Collins, Maxwell D.; Chung, Moo K.; Bendlin, Barbara B.; Johnson, Sterling C.; Davidson, Richard J.; Singh, Vikas

    2014-01-01

    Linear regression is a parametric model which is ubiquitous in scientific analysis. The classical setup, where the observations and responses, i.e., (xi, yi) pairs, are Euclidean, is well studied. The setting where yi is manifold-valued is a topic of much interest, motivated by applications in shape analysis, topic modeling, and medical imaging. Recent work gives strategies for max-margin classifiers, principal components analysis, and dictionary learning on certain types of manifolds. For parametric regression specifically, results within the last year provide mechanisms to regress one real-valued parameter, xi ∈ R, against a manifold-valued variable, yi ∈ ℳ. We seek to substantially extend the operating range of such methods by deriving schemes for multivariate multiple linear regression: a manifold-valued dependent variable against multiple independent variables, i.e., f : Rn → ℳ. Our variational algorithm efficiently solves for multiple geodesic bases on the manifold concurrently via gradient updates. This allows us to answer questions such as: what is the relationship of the measurement at voxel y to disease when conditioned on age and gender? We show applications to statistical analysis of diffusion weighted images, which give rise to regression tasks on the manifold GL(n)/O(n) for diffusion tensor images (DTI) and the Hilbert unit sphere for orientation distribution functions (ODF) from high angular resolution acquisition. The companion open-source code is available on nitrc.org/projects/riem_mglm. PMID:25580070

  5. Application of Multivariable Model Predictive Advanced Control for a 2×310T/H CFB Boiler Unit

    NASA Astrophysics Data System (ADS)

    Weijie, Zhao; Zongliao, Dai; Rong, Gou; Wengan, Gong

    When a CFB boiler is under automatic control, there are strong interactions between various process variables and inverse-response characteristics of the bed temperature control target. A conventional PID control strategy cannot deliver satisfactory control performance. Kalman filter technology is used to establish a non-linear combustion model, based on the CFB combustion characteristics of bed fuel inventory, heating values, bed lime inventory and consumption. The CFB advanced combustion control utilizes multivariable model predictive control technology to optimize primary and secondary air flow, bed temperature, air flow, fuel flow and heat flux. In addition to providing advanced combustion control to the 2×310 t/h CFB + 1×100 MW extraction condensing turbine generator unit, the control also provides load allocation optimization and advanced control for main steam pressure, combustion and temperature. After the successful implementation, under a 10% load change, main steam pressure varied less than ±0.07 MPa, temperature less than ±1 °C, bed temperature less than ±4 °C, and air flow (O2) less than ±0.4%.

  6. Multivariate spatio-temporal modelling for assessing Antarctica's present-day contribution to sea-level rise

    PubMed Central

    Zammit-Mangion, Andrew; Rougier, Jonathan; Schön, Nana; Lindgren, Finn; Bamber, Jonathan

    2015-01-01

    Antarctica is the world's largest fresh-water reservoir, with the potential to raise sea levels by about 60 m. An ice sheet contributes to sea-level rise (SLR) when its rate of ice discharge and/or surface melting exceeds accumulation through snowfall. Constraining the contribution of the ice sheets to present-day SLR is vital both for coastal development and planning, and climate projections. Information on various ice sheet processes is available from several remote sensing data sets, as well as in situ data such as global positioning system data. These data have differing coverage, spatial support, temporal sampling and sensing characteristics, and thus, it is advantageous to combine them all in a single framework for estimation of the SLR contribution and the assessment of processes controlling mass exchange with the ocean. In this paper, we predict the rate of height change due to salient geophysical processes in Antarctica and use these to provide estimates of SLR contribution with associated uncertainties. We employ a multivariate spatio-temporal model, approximated as a Gaussian Markov random field, to take advantage of differing spatio-temporal properties of the processes to separate the causes of the observed change. The process parameters are estimated from geophysical models, while the remaining parameters are estimated using a Markov chain Monte Carlo scheme, designed to operate in a high-performance computing environment across multiple nodes. We validate our methods against a separate data set and compare the results to those from studies that invariably employ numerical model outputs directly. We conclude that it is possible, and insightful, to assess Antarctica's contribution without explicit use of numerical models. Further, the results obtained here can be used to test the geophysical numerical models for which in situ data are hard to obtain. © 2015 The Authors. Environmetrics published by John Wiley & Sons Ltd. PMID:25937792

  7. Multivariate spatio-temporal modelling for assessing Antarctica's present-day contribution to sea-level rise.

    PubMed

    Zammit-Mangion, Andrew; Rougier, Jonathan; Schön, Nana; Lindgren, Finn; Bamber, Jonathan

    2015-05-01

    Antarctica is the world's largest fresh-water reservoir, with the potential to raise sea levels by about 60 m. An ice sheet contributes to sea-level rise (SLR) when its rate of ice discharge and/or surface melting exceeds accumulation through snowfall. Constraining the contribution of the ice sheets to present-day SLR is vital both for coastal development and planning, and climate projections. Information on various ice sheet processes is available from several remote sensing data sets, as well as in situ data such as global positioning system data. These data have differing coverage, spatial support, temporal sampling and sensing characteristics, and thus, it is advantageous to combine them all in a single framework for estimation of the SLR contribution and the assessment of processes controlling mass exchange with the ocean. In this paper, we predict the rate of height change due to salient geophysical processes in Antarctica and use these to provide estimates of SLR contribution with associated uncertainties. We employ a multivariate spatio-temporal model, approximated as a Gaussian Markov random field, to take advantage of differing spatio-temporal properties of the processes to separate the causes of the observed change. The process parameters are estimated from geophysical models, while the remaining parameters are estimated using a Markov chain Monte Carlo scheme, designed to operate in a high-performance computing environment across multiple nodes. We validate our methods against a separate data set and compare the results to those from studies that invariably employ numerical model outputs directly. We conclude that it is possible, and insightful, to assess Antarctica's contribution without explicit use of numerical models. Further, the results obtained here can be used to test the geophysical numerical models for which in situ data are hard to obtain. © 2015 The Authors. Environmetrics published by John Wiley & Sons Ltd.

  8. Importance of prediction outlier diagnostics in determining a successful inter-vendor multivariate calibration model transfer.

    PubMed

    Guenard, Robert D; Wehlburg, Christine M; Pell, Randy J; Haaland, David M

    2007-07-01

    This paper reports on the transfer of calibration models between Fourier transform near-infrared (FT-NIR) instruments from four different manufacturers. The piecewise direct standardization (PDS) method is compared with the new hybrid calibration method known as prediction augmented classical least squares/partial least squares (PACLS/PLS). The success of a calibration transfer experiment is judged by prediction error and by the number of samples that are flagged as outliers that would not have been flagged as such if a complete recalibration were performed. Prediction results must be acceptable and the outlier diagnostics capabilities must be preserved for the transfer to be deemed successful. Previous studies have measured the success of a calibration transfer method by comparing only the prediction performance (e.g., the root mean square error of prediction, RMSEP). However, our study emphasizes the need to consider outlier detection performance as well. As our study illustrates, the RMSEP values for a calibration transfer can be within acceptable range; however, statistical analysis of the spectral residuals can show that differences in outlier performance can vary significantly between competing transfer methods. There was no statistically significant difference in the prediction error between the PDS and PACLS/PLS methods when the same subset sample selection method was used for both methods. However, the PACLS/PLS method was better at preserving the outlier detection capabilities and therefore was judged to have performed better than the PDS algorithm when transferring calibrations with the use of a subset of samples to define the transfer function. The method of sample subset selection was found to make a significant difference in the calibration transfer results using the PDS algorithm, while the transfer results were less sensitive to subset selection when the PACLS/PLS method was used.
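    The RMSEP statistic the study argues is insufficient on its own is simply the root mean square of the residuals over a validation set; a minimal sketch with made-up predicted and reference values:

```python
import math

def rmsep(predicted, reference):
    """Root mean square error of prediction over a validation set."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

pred = [10.1, 9.8, 10.4, 9.9]   # hypothetical model predictions
ref = [10.0, 10.0, 10.0, 10.0]  # hypothetical reference values
err = rmsep(pred, ref)
```

    The paper's point is that two transfer methods can yield comparable `err` while differing markedly in how well the spectral residuals preserve outlier diagnostics.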

  9. Predictive value of traction force measurement in vacuum extraction: Development of a multivariate prognostic model

    PubMed Central

    Pettersson, Kristina; Yousaf, Khurram; Ranstam, Jonas; Westgren, Magnus; Ajne, Gunilla

    2017-01-01

    Objective To enable early prediction of strong traction force vacuum extraction. Design Observational cohort. Setting Karolinska University Hospital delivery ward, tertiary unit. Population and sample size Term mid and low metal cup vacuum extraction deliveries June 2012 to February 2015, n = 277. Methods Traction forces during vacuum extraction were collected prospectively using an intelligent handle. Levels of traction force were analysed pairwise by subjective category strong versus non-strong extraction, in order to define an objective predictive value for strong extraction. Statistical analysis A logistic regression model based on the shrinkage and selection method lasso was used to identify the predictive capacity of the different traction force variables. Predictors Total (time force integral, Newton minutes) and peak traction (Newton) force in the first to third pull; difference in traction force between the second and first pull, as well as the third and first pull respectively. Accumulated traction force at the second and third pull. Outcome Subjectively categorized extraction as strong versus non-strong. Results The prevalence of strong extraction was 26%. Prediction including the first and second pull: AUC 0.85 (CI 0.80–0.90); specificity 0.76; sensitivity 0.87; PPV 0.56; NPV 0.94. Prediction including the first to third pull: AUC 0.86 (CI 0.80–0.91); specificity 0.87; sensitivity 0.70; PPV 0.65; NPV 0.89. Conclusion Traction force measurement during vacuum extraction can help exclude strong category extraction from the second pull. From the third pull, two-thirds of strong extractions can be predicted. PMID:28257459
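
    The lasso-based logistic model described above can be sketched as follows. The traction-force features and the "strong extraction" labels here are simulated placeholders, not the study's data, and the regularisation strength is an arbitrary choice.

```python
# L1-penalised (lasso) logistic regression on simulated traction-force features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 300
peak1 = rng.gamma(5.0, 20.0, n)                     # peak force, first pull (N)
peak2 = rng.gamma(5.0, 22.0, n)                     # peak force, second pull (N)
X = np.column_stack([peak1, peak2, peak2 - peak1])  # peaks and their difference
strong = (peak1 + peak2 + rng.normal(0, 30, n) > 220).astype(int)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, strong)
auc = roc_auc_score(strong, model.predict_proba(X)[:, 1])
print(round(auc, 2))  # in-sample AUC; cross-validation would be needed in practice
```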

  10. Identifying Thoracic Malignancies Through Pleural Fluid Biomarkers: A Predictive Multivariate Model.

    PubMed

    Porcel, José M; Esquerda, Aureli; Martínez-Alonso, Montserrat; Bielsa, Silvia; Salud, Antonieta

    2016-03-01

    The diagnosis of malignant pleural effusions may be challenging when cytological examination of aspirated pleural fluid is equivocal or noncontributory. The purpose of this study was to identify protein candidate biomarkers differentially expressed in the pleural fluid of patients with mesothelioma, lung adenocarcinoma, lymphoma, and tuberculosis (TB). A multiplex protein biochip comprising 120 biomarkers was used to determine the pleural fluid protein profile of 29 mesotheliomas, 29 lung adenocarcinomas, 12 lymphomas, and 35 TB effusions. The relative abundance of these predetermined biomarkers among groups served to establish the differential diagnosis of malignant versus benign (TB) effusions, lung adenocarcinoma versus mesothelioma, and lymphoma versus TB. The selected putative markers were validated using widely available commercial techniques in an independent sample of 102 patients. Significant differences were found in the protein expressions of metalloproteinase-9 (MMP-9), cathepsin-B, C-reactive protein, and chondroitin sulfate between malignant and TB effusions. When integrated into a scoring model, these proteins yielded 85% sensitivity, 100% specificity, and an area under the curve (AUC) of 0.98 for labeling malignancy in the verification sample. For lung adenocarcinoma-mesothelioma discrimination, combining CA19-9, CA15-3, and kallikrein-12 had maximal discriminatory capacity (65% sensitivity, 100% specificity, AUC 0.94); figures which also applied to the validation set. Last, cathepsin-B in isolation was only moderately useful (sensitivity 89%, specificity 62%, AUC 0.75) in separating lymphomatous and TB effusions. However, this last differentiation improved significantly when cathepsin-B was interpreted with respect to the patient's age (sensitivity 72%, specificity 100%, AUC 0.94). In conclusion, panels of 4 (i.e., MMP-9, cathepsin-B, C-reactive protein, chondroitin sulfate), or 3 (i.e., CA19-9, CA15-3, kallikrein-12) different protein biomarkers on pleural

  11. Dynamically adjustable foot-ground contact model to estimate ground reaction force during walking and running.

    PubMed

    Jung, Yihwan; Jung, Moonki; Ryu, Jiseon; Yoon, Sukhoon; Park, Sang-Kyoon; Koo, Seungbum

    2016-03-01

    Human dynamic models have been used to estimate joint kinetics during various activities. Kinetics estimation is in demand in sports and clinical applications where data on external forces, such as the ground reaction force (GRF), are not available. The purpose of this study was to estimate the GRF during gait by utilizing distance- and velocity-dependent force models between the foot and ground in an inverse-dynamics-based optimization. Ten males were tested as they walked at four different speeds on a force plate-embedded treadmill system. The full-GRF model whose foot-ground reaction elements were dynamically adjusted according to vertical displacement and anterior-posterior speed between the foot and ground was implemented in a full-body skeletal model. The model estimated the vertical and shear forces of the GRF from body kinematics. The shear-GRF model with dynamically adjustable shear reaction elements according to the input vertical force was also implemented in the foot of a full-body skeletal model. Shear forces of the GRF were estimated from body kinematics, vertical GRF, and center of pressure. The estimated full GRF had the lowest root mean square (RMS) errors at the slow walking speed (1.0 m/s) with 4.2, 1.3, and 5.7% BW for anterior-posterior, medial-lateral, and vertical forces, respectively. The estimated shear forces were not significantly different between the full-GRF and shear-GRF models, but the RMS errors of the estimated knee joint kinetics were significantly lower for the shear-GRF model. Providing COP and vertical GRF with sensors, such as an insole-type pressure mat, can help estimate shear forces of the GRF and increase accuracy for estimation of joint kinetics.
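
    A distance- and velocity-dependent foot-ground reaction element of the kind described above can be sketched as a nonlinear spring-damper (Hunt-Crossley style). The stiffness, damping, and exponent values below are illustrative assumptions, not the study's calibrated parameters.

```python
# Vertical GRF from one foot-ground contact element (forces in Newtons).
def vertical_contact_force(penetration, penetration_rate, k=2.0e5, c=1.0e3, n=1.5):
    """Force depends on penetration depth (m) and its rate of change (m/s)."""
    if penetration <= 0.0:               # element above the ground: no contact force
        return 0.0
    elastic = k * penetration ** n       # nonlinear spring term
    damping = c * penetration ** n * penetration_rate  # velocity-dependent term
    return max(elastic + damping, 0.0)   # contact cannot pull the foot downward

# deeper penetration at the same rate yields a larger upward force
print(vertical_contact_force(0.005, 0.1) < vertical_contact_force(0.010, 0.1))  # → True
```

A full-body model would sum many such elements distributed under the foot and tune their parameters so the summed force matches measured GRF.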

  12. Glacial isostatic adjustment using GNSS permanent stations and GIA modelling tools

    NASA Astrophysics Data System (ADS)

    Kollo, Karin; Spada, Giorgio; Vermeer, Martin

    2013-04-01

    Glacial Isostatic Adjustment (GIA) affects the Earth's mantle in areas that were once ice covered, and the process is still ongoing. In this contribution we focus on GIA processes in the Fennoscandian and North American uplift regions, using horizontal and vertical uplift rates from Global Navigation Satellite System (GNSS) permanent stations: the BIFROST dataset (Lidberg, 2010) for Fennoscandia and the dataset of Sella (2007) for North America. We perform GIA modelling with the SELEN program (Spada and Stocchi, 2007) and vary ice model parameters in space in order to find the ice model that best fits the uplift values obtained from GNSS time series analysis. In the GIA modelling, the ice models ICE-5G (Peltier, 2004) and ANU05 ((Fleming and Lambeck, 2004) and references therein) were used. As reference, the velocity field from GNSS permanent station time series was used for both target areas. First, the sensitivity to the harmonic degree was tested in order to reduce the computation time. In this test, nominal viscosity values and pre-defined lithosphere thickness models were used while the maximum harmonic degree was varied. The main criterion for choosing a suitable harmonic degree was the chi-square fit: if the error measure differs by less than 10%, the lower harmonic degree may be used. From this test, a maximum harmonic degree of 72 was chosen, as larger values did not significantly modify the results and the computational time remained reasonable. Second, the GIA computations were performed to find the model most likely to fit the GNSS-based velocity field in the target areas. In order to find the best fitting Earth viscosity parameters, different viscosity profiles for the Earth models were tested and their impact on horizontal and vertical velocity rates from GIA modelling was studied. For every
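
    The chi-square misfit criterion used above to compare modelled GIA velocities against GNSS observations can be sketched as follows; the station velocities and uncertainties are invented placeholders.

```python
# Chi-square misfit between observed and modelled uplift velocities.
import numpy as np

def chi_square(observed, modelled, sigma):
    """Sum of squared, uncertainty-weighted residuals."""
    return float(np.sum(((observed - modelled) / sigma) ** 2))

obs = np.array([3.1, 7.8, 9.2, 1.4])   # GNSS uplift rates at four stations (mm/yr)
sig = np.array([0.4, 0.3, 0.5, 0.4])   # 1-sigma observation uncertainties (mm/yr)

fit_a = chi_square(obs, np.array([3.0, 7.5, 9.0, 1.2]), sig)  # candidate Earth model A
fit_b = chi_square(obs, np.array([2.0, 6.0, 8.0, 0.5]), sig)  # candidate Earth model B
print(fit_a < fit_b)  # → True: model A fits the GNSS velocities better
```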

  13. The distribution of arsenic in shallow alluvial groundwater under agricultural land in central Portugal: insights from multivariate geostatistical modeling.

    PubMed

    Andrade, A I A S S; Stigter, T Y

    2013-04-01

    In this study multivariate and geostatistical methods are jointly applied to model the spatial and temporal distribution of arsenic (As) concentrations in shallow groundwater as a function of physicochemical, hydrogeological and land use parameters, as well as to assess the related uncertainty. The study site is located in the Mondego River alluvial body in Central Portugal, where maize, rice and some vegetable crops dominate. In a first analysis scatter plots are used, followed by the application of principal component analysis to two different data matrices, of 112 and 200 samples, with the aim of detecting associations between As levels and other quantitative parameters. In the following phase explanatory models of As are created through factorial regression based on correspondence analysis, integrating both quantitative and qualitative parameters. Finally, these are combined with indicator-geostatistical techniques to create maps indicating the predicted probability of As concentrations in groundwater exceeding the current global drinking water guideline of 10 μg/l. These maps further allow assessing the uncertainty and representativeness of the monitoring network. A clear effect of the redox state on the presence of As is observed, which, together with significant correlations with dissolved oxygen, nitrate, sulfate, iron, manganese and alkalinity, points towards the reductive dissolution of Fe (hydr)oxides as the essential mechanism of As release. The association of high As values with rice crop, known to promote reduced environments due to ponding, further corroborates this hypothesis. An additional source of As from fertilizers cannot be excluded, as the correlation with As is higher where rice is associated with vegetables, which normally receive higher fertilization rates. The best explanatory model of As occurrence integrates the parameters season, crop type, well and water depth, nitrate and Eh, though a model without the last two parameters also gives

  14. High speed classification of individual bacterial cells using a model-based light scatter system and multivariate statistics

    NASA Astrophysics Data System (ADS)

    Venkatapathi, Murugesan; Rajwa, Bartek; Ragheb, Kathy; Banada, Padmapriya P.; Lary, Todd; Robinson, J. Paul; Hirleman, E. Daniel

    2008-02-01

    We describe a model-based instrument design combined with a statistical classification approach for the development and realization of high speed cell classification systems based on light scatter. In our work, angular light scatter from cells of four bacterial species of interest, Bacillus subtilis, Escherichia coli, Listeria innocua, and Enterococcus faecalis, was modeled using the discrete dipole approximation. We then optimized a scattering detector array design subject to some hardware constraints, configured the instrument, and gathered experimental data from the relevant bacterial cells. Using these models and experiments, it is shown that optimization using a nominal bacteria model (i.e., using a representative size and refractive index) is insufficient for classification of most bacteria in realistic applications. Hence the computational predictions were constituted in the form of scattering-data-vector distributions that accounted for expected variability in the physical properties between individual bacteria within the four species. After the detectors were optimized using the numerical results, they were used to measure scatter from both the known control samples and unknown bacterial cells. A multivariate statistical method based on a support vector machine (SVM) was used to classify the bacteria species based on light scatter signatures. In our final instrument, we realized correct classification of B. subtilis in the presence of E. coli, L. innocua, and E. faecalis using SVM at 99.1%, 99.6%, and 98.5%, respectively, in the optimal detector array configuration. For comparison, the corresponding values for another set of angles were only 69.9%, 71.7%, and 70.2% using SVM, and more importantly, this improved performance is consistent with classification predictions.
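
    The SVM classification stage can be sketched as below. The four clusters are simulated stand-ins for the species' scattering-data-vector distributions; the number of detector angles and the cluster spreads are arbitrary assumptions.

```python
# RBF-kernel SVM separating four simulated species from multi-angle scatter vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
centers = rng.normal(size=(4, 6)) * 3.0          # four species, six detector angles
X = np.vstack([c + rng.normal(0, 0.5, (50, 6)) for c in centers])
y = np.repeat(np.arange(4), 50)                  # species labels

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)
print(clf.score(Xte, yte) > 0.9)  # well-separated clusters classify accurately
```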

  15. A Multivariate Model of Determinants of Change in Gross-Motor Abilities and Engagement in Self-Care and Play of Young Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Chiarello, Lisa A.; Palisano, Robert J.; Bartlett, Doreen J.; McCoy, Sarah Westcott

    2011-01-01

    A multivariate model of determinants of change in gross-motor ability and engagement in self-care and play provides physical and occupational therapists a framework for decisions on interventions and supports for young children with cerebral palsy and their families. Aspects of the child, family ecology, and rehabilitation and community services…

  16. Validation of cross-sectional time series and multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents using doubly labeled water

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Accurate, nonintrusive, and inexpensive techniques are needed to measure energy expenditure (EE) in free-living populations. Our primary aim in this study was to validate cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on observable participant cha...

  17. Multivariate Modeling of Proteins Related to Trapezius Myalgia, a Comparative Study of Female Cleaners with or without Pain

    PubMed Central

    Hadrevi, Jenny; Ghafouri, Bijar; Larsson, Britt; Gerdle, Björn; Hellström, Fredrik

    2013-01-01

    The prevalence of chronic trapezius myalgia is high in women with high exposure to awkward working positions, repetitive movements and movements with high precision demands. The mechanisms behind chronic trapezius myalgia are not fully understood. The purpose of this study was to explore the differences in protein content between healthy and myalgic trapezius muscle using proteomics. Muscle biopsies from 12 female cleaners with work-related trapezius myalgia and 12 pain free female cleaners were obtained from the descending part of the trapezius. Proteins were separated with two-dimensional differential gel electrophoresis (2D-DIGE) and selected proteins were identified with mass spectrometry. In order to discriminate the two groups, quantified proteins were fitted to a multivariate analysis: partial least square discriminate analysis. The model separated 28 unique proteins which were related to glycolysis, the tricaboxylic acid cycle, to the contractile apparatus, the cytoskeleton and to acute response proteins. The results suggest altered metabolism, a higher abundance of proteins related to inflammation in myalgic cleaners compared to healthy, and a possible alteration of the contractile apparatus. This explorative proteomic screening of proteins related to chronic pain in the trapezius muscle provides new important aspects of the pathophysiology behind chronic trapezius myalgia. PMID:24023854

  18. Resolution of a Rank-Deficient Adjustment Model Via an Isomorphic Geometrical Setup with Tensor Structure.

    DTIC Science & Technology

    1987-03-01

    AFGL-TR-87-0102; final report. The scanned abstract is largely illegible; the recoverable fragments concern associated metric tensors and the transformation of multiple integrals.

  19. Multivariant function model generation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The development of computer programs applicable to space vehicle guidance was conducted. The subjects discussed are as follows: (1) determination of optimum reentry trajectories, (2) development of equations for performance of trajectory computation, (3) vehicle control for fuel optimization, (4) development of equations for performance trajectory computations, (5) applications and solution of Hamilton-Jacobi equation, and (6) stresses in dome shaped shells with discontinuities at the apex.

  20. Modular multivariable control improves hydrocracking

    SciTech Connect

    Chia, T.L.; Lefkowitz, I.; Tamas, P.D.

    1996-10-01

    Modular multivariable control (MMC), a system of interconnected, single process variable controllers, can be a user-friendly, reliable and cost-effective alternative to centralized, large-scale multivariable control packages. MMC properties and features derive directly from the properties of the coordinated controller which, in turn, is based on internal model control technology. MMC was applied to a hydrocracking unit involving two process variables and three controller outputs. The paper describes modular multivariable control, MMC properties, tuning considerations, application at the DCS level, constraints handling, and process application and results.

  1. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study

    PubMed Central

    Girling, Alan J; Hofer, Timothy P; Wu, Jianhua; Chilton, Peter J; Nicholl, Jonathan P; Mohammed, Mohammed A; Lilford, Richard J

    2012-01-01

    Risk-adjustment schemes are used to monitor hospital performance, on the assumption that excess mortality not explained by case mix is largely attributable to suboptimal care. We have developed a model to estimate the proportion of the variation in standardised mortality ratios (SMRs) that can be accounted for by variation in preventable mortality. The model was populated with values from the literature to estimate a predictive value of the SMR in this context—specifically the proportion of those hospitals with SMRs among the highest 2.5% that fall among the worst 2.5% for preventable mortality. The extent to which SMRs reflect preventable mortality rates is highly sensitive to the proportion of deaths that are preventable. If 6% of hospital deaths are preventable (as suggested by the literature), the predictive value of the SMR can be no greater than 9%. This value could rise to 30%, if 15% of deaths are preventable. The model offers a ‘reality check’ for case mix adjustment schemes designed to isolate the preventable component of any outcome rate. PMID:23069860
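
    The style of "reality check" model described above can be sketched with a simulation: hospitals whose SMR mixes a weak preventable-mortality signal with unrelated case-mix noise, asking how often the worst-2.5% SMRs identify the worst-2.5% preventable rates. All parameter values below are illustrative assumptions, not the paper's calibrated inputs.

```python
# Predictive value of a high SMR for high preventable mortality (toy simulation).
import numpy as np

rng = np.random.default_rng(4)
n_hosp = 10000
preventable = rng.normal(0.0, 1.0, n_hosp)               # preventable-mortality signal
smr = 0.3 * preventable + rng.normal(0.0, 1.0, n_hosp)   # SMR: weak signal plus noise

cut_smr = np.quantile(smr, 0.975)                        # worst 2.5% of SMRs
cut_prev = np.quantile(preventable, 0.975)               # worst 2.5% preventable rates
flagged = smr >= cut_smr
ppv = (preventable[flagged] >= cut_prev).mean()          # predictive value of a high SMR
print(0.0 < ppv < 0.5)  # far below 1: a high SMR is a poor proxy here
```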

  2. Validation, replication, and sensitivity testing of Heckman-type selection models to adjust estimates of HIV prevalence.

    PubMed

    Clark, Samuel J; Houle, Brian

    2014-01-01

    A recent study using Heckman-type selection models to adjust for non-response in the Zambia 2007 Demographic and Health Survey (DHS) found a large correction in HIV prevalence for males. We aim to validate this finding, replicate the adjustment approach in other DHSs, apply the adjustment approach in an external empirical context, and assess the robustness of the technique to different adjustment approaches. We used 6 DHSs, and an HIV prevalence study from rural South Africa to validate and replicate the adjustment approach. We also developed an alternative, systematic model of selection processes and applied it to all surveys. We decomposed corrections from both approaches into rate change and age-structure change components. We are able to reproduce the adjustment approach for the 2007 Zambia DHS and derive results comparable with the original findings. We are able to replicate applying the approach in several other DHSs. The approach also yields reasonable adjustments for a survey in rural South Africa. The technique is relatively robust to how the adjustment approach is specified. The Heckman selection model is a useful tool for assessing the possibility and extent of selection bias in HIV prevalence estimates from sample surveys.

  3. Four-Dimensional Data Assimilation of Gale Data Using a Multivariate Analysis Scheme and a Mesoscale Model with Diabatic Initialization.

    NASA Astrophysics Data System (ADS)

    Harms, Dewey Elvin

    1992-01-01

    A method of assimilating 3-hourly sounding data is developed and successfully tested in this study. First, the successive corrections scheme of Bratseth (1986), which converges to optimum interpolation, is applied for the numerical analysis of data collected during the Genesis of Atlantic Lows Experiment (GALE). Univariate analyses of the mass and wind field are produced. The coupling of the mass and wind field is achieved by further iterations of the geopotential utilizing improving estimates of the geostrophic wind to extrapolate the geopotential to the grid points. The univariate wind analysis is then corrected for the new geostrophic wind. Next, diabatic forcing is incorporated into a vertical mode initialization scheme to provide more realistic initial conditions and to shorten the spinup time of the Naval Research Laboratory/North Carolina State University (NRL/NCSU) mesoscale model. Latent-heating profiles are computed from 'spun-up' model-generated and observed rainfall. The latent heating is distributed in the vertical according to the cumulus convective parameterization scheme (Kuo scheme) of the model. Compatibility between the specified heating during initialization and the heating during early model integration is retained by merging the model integrated rainfall and heating rates with those rates from the initialization. Finally, the multivariate, successive correction analysis scheme and the diabatic initialization procedure are combined with the NRL/NCSU model to form an intermittent data-assimilation system. Assimilations of the GALE data over a 2½-day period were performed with differing update cycles of 3, 6, and 12 h. Twelve-hour NMC hemispheric analyses served as the "no assimilation" control case for comparison. The assimilation of 3-hourly GALE data led to large decreases in background forecast rms errors and smaller decreases in analysis rms error. Better consistency in time was achieved between forecasts and analyses in the

  4. Automatic parameter estimation of multicompartmental neuron models via minimization of trace error with control adjustment

    PubMed Central

    Goeritz, Marie L.; Marder, Eve

    2014-01-01

    We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current-clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between biological and model neuron is inevitable and results in poor phenomenological match between the model and data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy. PMID:25008414
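
    The observer idea above can be sketched with a toy model: while evaluating a candidate parameter set, the model is integrated with a weak control term g_obs * (V_data - V_model) that keeps it aligned with the recorded trace, so the fitting error reflects parameter mismatch rather than timing drift. The leaky-integrator "neuron" here is a deliberately simple stand-in for a multicompartmental model, and all constants are invented.

```python
# Observer-assisted parameter fitting of a leaky-integrator voltage model.
import numpy as np

dt, e_leak, g_obs, true_g, n_steps = 0.1, -60.0, 0.2, 0.3, 500
rng = np.random.default_rng(6)
current = rng.normal(0.0, 2.0, n_steps)       # injected noise current, known to the model

def run(g, v_ref=None):
    v = np.empty(n_steps)
    v[0] = -65.0
    for t in range(1, n_steps):
        dv = g * (e_leak - v[t - 1]) + current[t - 1]
        if v_ref is not None:                 # weak observer pull toward the data
            dv += g_obs * (v_ref[t - 1] - v[t - 1])
        v[t] = v[t - 1] + dt * dv
    return v

v_data = run(true_g)                          # "recorded" trace from the true model
candidates = [0.1, 0.2, 0.3, 0.4, 0.5]
errors = [np.mean((run(g, v_data) - v_data) ** 2) for g in candidates]
print(candidates[int(np.argmin(errors))])     # → 0.3: error is minimised at the true leak
```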

  5. Drought forecasting in eastern Australia using multivariate adaptive regression spline, least square support vector machine and M5Tree model

    NASA Astrophysics Data System (ADS)

    Deo, Ravinesh C.; Kisi, Ozgur; Singh, Vijay P.

    2017-02-01

    Drought forecasting using standardized metrics of rainfall is a core task in hydrology and water resources management. Standardized Precipitation Index (SPI) is a rainfall-based metric that caters for different time-scales at which the drought occurs, and due to its standardization, is well-suited for forecasting drought at different periods in climatically diverse regions. This study advances drought modelling using multivariate adaptive regression splines (MARS), least square support vector machine (LSSVM), and M5Tree models by forecasting SPI in eastern Australia. MARS model incorporated rainfall as mandatory predictor with month (periodicity), Southern Oscillation Index, Pacific Decadal Oscillation Index and Indian Ocean Dipole, ENSO Modoki and Nino 3.0, 3.4 and 4.0 data added gradually. The performance was evaluated with root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (r2). Best MARS model required different input combinations, where rainfall, sea surface temperature and periodicity were used for all stations, but ENSO Modoki and Pacific Decadal Oscillation indices were not required for Bathurst, Collarenebri and Yamba, and the Southern Oscillation Index was not required for Collarenebri. Inclusion of periodicity increased the r2 value by 0.5-8.1% and reduced RMSE by 3.0-178.5%. Comparisons showed that MARS superseded the performance of the other counterparts for three out of five stations with lower MAE by 15.0-73.9% and 7.3-42.2%, respectively. For the other stations, M5Tree was better than MARS/LSSVM with lower MAE by 13.8-13.4% and 25.7-52.2%, respectively, and for Bathurst, LSSVM yielded more accurate result. For droughts identified by SPI ≤ - 0.5, accurate forecasts were attained by MARS/M5Tree for Bathurst, Yamba and Peak Hill, whereas for Collarenebri and Barraba, M5Tree was better than LSSVM/MARS. Seasonal analysis revealed disparate results where MARS/M5Tree was better than LSSVM. The results highlight the
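
    The forecast target above, the SPI, is conventionally computed by fitting a gamma distribution to aggregated rainfall and mapping its CDF through the standard normal quantile function. A hedged sketch on simulated monthly rainfall (real SPI computation treats zero-rain months separately and works per calendar month):

```python
# Standardized Precipitation Index from a fitted gamma distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
rain = rng.gamma(shape=2.0, scale=30.0, size=360)      # 30 years of monthly totals (mm)

shape, loc, scale = stats.gamma.fit(rain, floc=0.0)    # gamma fit with location fixed at 0
cdf = stats.gamma.cdf(rain, shape, loc=loc, scale=scale)
spi = stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))     # map to standard normal quantiles

# the index is approximately zero-mean with unit variance by construction
print(round(float(spi.mean()), 2), round(float(spi.std()), 2))
```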

  6. Lower-order effects adjustment in quantitative traits model-based multifactor dimensionality reduction.

    PubMed

    Mahachie John, Jestinah M; Cattaert, Tom; Lishout, François Van; Gusareva, Elena S; Steen, Kristel Van

    2012-01-01

    Identifying gene-gene interactions or gene-environment interactions in studies of human complex diseases remains a big challenge in genetic epidemiology. An additional challenge, often forgotten, is to account for important lower-order genetic effects. These may hamper the identification of genuine epistasis. If lower-order genetic effects contribute to the genetic variance of a trait, identified statistical interactions may simply be due to a signal boost of these effects. In this study, we restrict attention to quantitative traits and bi-allelic SNPs as genetic markers. Moreover, our interaction study focuses on 2-way SNP-SNP interactions. Via simulations, we assess the performance of different corrective measures for lower-order genetic effects in Model-Based Multifactor Dimensionality Reduction epistasis detection, using additive and co-dominant coding schemes. Performance is evaluated in terms of power and familywise error rate. Our simulations indicate that empirical power estimates are reduced with correction of lower-order effects, as are familywise error rates. Easy-to-use automatic SNP selection procedures, SNP selection based on "top" findings, or SNP selection based on a p-value criterion for interesting main effects result in reduced power but also almost zero false positive rates. Always accounting for main effects in the SNP-SNP pair under investigation during Model-Based Multifactor Dimensionality Reduction analysis adequately controls false positive epistasis findings. This is particularly true when adopting a co-dominant corrective coding scheme. In conclusion, automatic search procedures to identify lower-order effects to correct for during epistasis screening should be avoided. The same is true for procedures that adjust for lower-order effects prior to Model-Based Multifactor Dimensionality Reduction and involve using residuals as the new trait. We advocate using "on-the-fly" lower-order effects adjustment when screening for SNP-SNP interactions.

  7. Bioinformatics Multivariate Analysis Determined a Set of Phase-Specific Biomarker Candidates in a Novel Mouse Model for Viral Myocarditis

    PubMed Central

    Omura, Seiichi; Kawai, Eiichiro; Sato, Fumitaka; Martinez, Nicholas E.; Chaitanya, Ganta V.; Rollyson, Phoebe A.; Cvek, Urska; Trutschl, Marjan; Alexander, J. Steven; Tsunoda, Ikuo

    2015-01-01

    Background Myocarditis is an inflammatory disease of the cardiac muscle and is mainly caused by viral infections. Viral myocarditis has been proposed to be divided into 3 phases: the acute viral phase, the subacute immune phase, and the chronic cardiac remodeling phase. Although individualized therapy should be applied depending on the phase, no clinical or experimental studies have found biomarkers that distinguish between the 3 phases. Theiler’s murine encephalomyelitis virus belongs to the genus Cardiovirus and can cause myocarditis in susceptible mouse strains. Methods and Results Using this novel model for viral myocarditis induced with Theiler’s murine encephalomyelitis virus, we conducted multivariate analysis including echocardiography, serum troponin and viral RNA titration, and microarray to identify the biomarker candidates that can discriminate the 3 phases. Using C3H mice infected with Theiler’s murine encephalomyelitis virus on 4, 7, and 60 days post infection, we conducted bioinformatics analyses, including principal component analysis and k-means clustering of microarray data, because our traditional cardiac and serum assays, including 2-way comparison of microarray data, did not lead to the identification of a single biomarker. Principal component analysis separated heart samples clearly between the groups of 4, 7, and 60 days post infection. Representative genes contributing to the separation were as follows: 4 and 7 days post infection, innate immunity–related genes, such as Irf7 and Cxcl9; 7 and 60 days post infection, acquired immunity–related genes, such as Cd3g and H2-Aa; and cardiac remodeling–related genes, such as Mmp12 and Gpnmb. Conclusions Sets of molecules, not single molecules, identified by unsupervised principal component analysis, were found to be useful as phase-specific biomarkers. PMID:25031303
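
    The unsupervised pipeline described above (PCA followed by k-means) can be sketched on simulated expression profiles from three "phases"; the gene values and group sizes are invented placeholders.

```python
# PCA + k-means separating simulated samples from three disease phases.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
phase_means = rng.normal(0, 3, (3, 100))            # 3 phases x 100 genes
X = np.vstack([m + rng.normal(0, 0.5, (6, 100)) for m in phase_means])
labels = np.repeat([0, 1, 2], 6)                    # 6 heart samples per phase

scores = PCA(n_components=2).fit_transform(X)       # project onto two components
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

# purity: fraction of samples whose cluster is dominated by a single phase
purity = sum(np.bincount(labels[clusters == k]).max() for k in range(3)) / len(labels)
print(purity)
```

With well-separated phases the clusters recover the groups; the PCA loadings would then point to the phase-specific gene sets, analogous to the biomarker panels reported above.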

  8. Unintentional Interpersonal Synchronization Represented as a Reciprocal Visuo-Postural Feedback System: A Multivariate Autoregressive Modeling Approach

    PubMed Central

    Okazaki, Shuntaro; Hirotani, Masako; Koike, Takahiko; Bosch-Bayard, Jorge; Takahashi, Haruka K.; Hashiguchi, Maho; Sadato, Norihiro

    2015-01-01

    People’s behaviors synchronize. It is difficult, however, to determine whether synchronized behaviors occur in a mutual direction—two individuals influencing one another—or in one direction—one individual leading the other, and what the underlying mechanism for synchronization is. To answer these questions, we hypothesized a non-leader-follower postural sway synchronization, caused by a reciprocal visuo-postural feedback system operating on pairs of individuals, and tested that hypothesis both experimentally and via simulation. In the behavioral experiment, 22 participant pairs stood face to face either 20 or 70 cm away from each other wearing glasses with or without vision blocking lenses. The existence and direction of visual information exchanged between pairs of participants were systematically manipulated. The time series data for the postural sway of these pairs were recorded and analyzed with cross correlation and causality. Results of cross correlation showed that postural sway of paired participants was synchronized, with a shorter time lag when participant pairs could see one another’s head motion than when one of the participants was blindfolded. In addition, there was less of a time lag in the observed synchronization when the distance between participant pairs was smaller. As for the causality analysis, noise contribution ratio (NCR), the measure of influence using a multivariate autoregressive model, was also computed to identify the degree to which one’s postural sway is explained by that of the other’s and how visual information (sighted vs. blindfolded) interacts with paired participants’ postural sway. It was found that for synchronization to take place, it is crucial that paired participants be sighted and exert equal influence on one another by simultaneously exchanging visual information. Furthermore, a simulation for the proposed system with a wider range of visual input showed a pattern of results similar to the behavioral results.
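
    The lagged-synchronization analysis above can be sketched by estimating the time lag between two sway series from the peak of their cross-correlation. The series are simulated, with B constructed to lag A by five samples; a full replication would fit a multivariate autoregressive model and compute the noise contribution ratio.

```python
# Time-lag estimation between two postural-sway series via cross-correlation.
import numpy as np

rng = np.random.default_rng(10)
n, true_lag = 1000, 5
# slowly varying "sway" signal: smoothed white noise
sway_a = np.convolve(rng.normal(size=n + 50), np.ones(20) / 20, mode="valid")[:n]
sway_b = np.roll(sway_a, true_lag) + 0.05 * rng.normal(size=n)   # B follows A

a = sway_a - sway_a.mean()
b = sway_b - sway_b.mean()
xcorr = np.correlate(b, a, mode="full")      # index n-1 corresponds to zero lag
lag = int(np.argmax(xcorr)) - (n - 1)
print(lag)  # → 5: B lags A by five samples
```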

  9. Unintentional Interpersonal Synchronization Represented as a Reciprocal Visuo-Postural Feedback System: A Multivariate Autoregressive Modeling Approach.

    PubMed

    Okazaki, Shuntaro; Hirotani, Masako; Koike, Takahiko; Bosch-Bayard, Jorge; Takahashi, Haruka K; Hashiguchi, Maho; Sadato, Norihiro

    2015-01-01

    People's behaviors synchronize. It is difficult, however, to determine whether synchronized behaviors occur in a mutual direction--two individuals influencing one another--or in one direction--one individual leading the other, and what the underlying mechanism for synchronization is. To answer these questions, we hypothesized a non-leader-follower postural sway synchronization, caused by a reciprocal visuo-postural feedback system operating on pairs of individuals, and tested that hypothesis both experimentally and via simulation. In the behavioral experiment, 22 participant pairs stood face to face either 20 or 70 cm away from each other wearing glasses with or without vision blocking lenses. The existence and direction of visual information exchanged between pairs of participants were systematically manipulated. The time series data for the postural sway of these pairs were recorded and analyzed with cross correlation and causality. Results of cross correlation showed that postural sway of paired participants was synchronized, with a shorter time lag when participant pairs could see one another's head motion than when one of the participants was blindfolded. In addition, there was less of a time lag in the observed synchronization when the distance between participant pairs was smaller. As for the causality analysis, noise contribution ratio (NCR), the measure of influence using a multivariate autoregressive model, was also computed to identify the degree to which one's postural sway is explained by that of the other's and how visual information (sighted vs. blindfolded) interacts with paired participants' postural sway. It was found that for synchronization to take place, it is crucial that paired participants be sighted and exert equal influence on one another by simultaneously exchanging visual information. Furthermore, a simulation for the proposed system with a wider range of visual input showed a pattern of results similar to the behavioral results.
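    The core of the cross-correlation analysis described above can be sketched in a few lines: standardize the two sway series, correlate them over a range of lags, and take the lag of the peak. Everything below (the smoothed-noise "sway", the 15-sample delay, the noise level) is an invented illustration, not the paper's data or code.

```python
import numpy as np

def peak_lag(x, y, max_lag=200):
    """Lag (in samples) at which the cross correlation between two
    standardized series peaks; positive lag means y trails x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.mean(x[max(0, -l):len(x) - max(0, l)] *
                  y[max(0, l):len(y) - max(0, -l)])
          for l in lags]
    return lags[int(np.argmax(cc))]

# Synthetic "sway": smoothed noise, with y a noisy copy of x delayed
# by 15 samples (as if one partner reacted slightly later).
rng = np.random.default_rng(0)
base = np.convolve(rng.standard_normal(2015), np.ones(5) / 5, mode="same")
x, y = base[15:], base[:-15] + 0.1 * rng.standard_normal(2000)
print(peak_lag(x, y))   # expected: 15
```

    A near-zero peak lag for mutually sighted pairs, versus a consistent nonzero lag when one partner is blindfolded, is what distinguishes the non-leader-follower regime from a leader-follower one.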

  10. Multisite multivariate modeling of daily precipitation and temperature in the Canadian Prairie Provinces using generalized linear models

    NASA Astrophysics Data System (ADS)

    Asong, Zilefac E.; Khaliq, M. N.; Wheater, H. S.

    2016-11-01

    Based on the Generalized Linear Model (GLM) framework, a multisite stochastic modelling approach is developed using daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. Temperature is modeled using a two-stage normal-heteroscedastic model by fitting mean and variance components separately. Likewise, precipitation occurrence and conditional precipitation intensity processes are modeled separately. The relationship between precipitation and temperature is accounted for by using transformations of precipitation as covariates to predict temperature fields. Large scale atmospheric covariates from the National Center for Environmental Prediction Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate these models for the 1971-2000 period. Validation of the developed models is performed on both pre- and post-calibration period data. Results of the study indicate that the developed models are able to capture spatiotemporal characteristics of observed precipitation and temperature fields, such as inter-site and inter-variable correlation structure, and systematic regional variations present in observed sequences. A number of simulated weather statistics ranging from seasonal means to characteristics of temperature and precipitation extremes and some of the commonly used climate indices are also found to be in close agreement with those derived from observed data. This GLM-based modelling approach will be developed further for multisite statistical downscaling of Global Climate Model outputs to explore climate variability and change in this region of Canada.
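    As a sketch of the GLM framework used here, the precipitation-occurrence part is a Bernoulli GLM with a logit link; below it is fitted by iteratively reweighted least squares on synthetic data. The covariate, sample size, and coefficient values are invented for illustration and are not from the study.

```python
import numpy as np

def fit_logistic_irls(X, y, n_iter=25):
    """Fit a Bernoulli GLM (logit link) by iteratively reweighted
    least squares -- the occurrence part of a precipitation GLM."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)
        z = eta + (y - mu) / np.maximum(w, 1e-10)   # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

# Toy calibration: wet-day occurrence driven by one large-scale covariate.
rng = np.random.default_rng(1)
n = 5000
covariate = rng.standard_normal(n)            # e.g. a reanalysis predictor
X = np.column_stack([np.ones(n), covariate])
true_beta = np.array([-1.0, 2.0])
p = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
wet = (rng.random(n) < p).astype(float)       # wet-day indicator

beta_hat = fit_logistic_irls(X, wet)
print(beta_hat)   # should be close to [-1, 2]
```

    In the full model the intensity process would be fitted analogously (e.g., a Gamma GLM with a log link on wet days), with the reanalysis fields, teleconnection indices, and site attributes entering the linear predictor.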

  11. A model of the western Laurentide Ice Sheet, using observations of glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    Gowan, Evan J.; Tregoning, Paul; Purcell, Anthony; Montillet, Jean-Philippe; McClusky, Simon

    2016-05-01

    We present the results of a new numerical model of the late glacial western Laurentide Ice Sheet, constrained by observations of glacial isostatic adjustment (GIA), including relative sea level indicators, uplift rates from permanent GPS stations, contemporary differential lake level change, and postglacial tilt of glacial lake level indicators. The latter two datasets have been underutilized in previous GIA-based ice sheet reconstructions. The ice sheet model, called NAICE, is constructed using simple ice physics on the basis of changing margin location and basal shear stress conditions in order to produce ice volumes required to match GIA. The model matches the majority of the observations, while maintaining a relatively realistic ice sheet geometry. Our model has a peak volume at 18,000 yr BP, with a dome located just east of Great Slave Lake with peak thickness of 4000 m, and surface elevation of 3500 m. The modelled ice volume loss between 16,000 and 14,000 yr BP amounts to about 7.5 m of sea level equivalent, which is consistent with the hypothesis that a large portion of Meltwater Pulse 1A was sourced from this part of the ice sheet. The southern part of the ice sheet was thin and had a low elevation profile. This model provides an accurate representation of ice thickness and paleo-topography, and can be used to assess present day uplift and infer past climate.

  12. Procedures for adjusting regional regression models of urban-runoff quality using local data

    USGS Publications Warehouse

    Hoos, A.B.; Sisolak, J.K.

    1993-01-01

    Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for
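    The simplest of the four procedures, single-factor regression against the regional prediction, is just a local regression of observed storm loads on the regional model's predictions. A minimal sketch on invented calibration data (the 30% regional bias and the noise level are hypothetical):

```python
import numpy as np

def map_single_factor(p_regional, y_local):
    """MAP single-factor sketch: regress observed local loads on the
    regional model's predictions; returns (intercept, slope) of the
    adjusted model."""
    A = np.column_stack([np.ones_like(p_regional), p_regional])
    coef, *_ = np.linalg.lstsq(A, y_local, rcond=None)
    return coef

# Hypothetical calibration storms: regional model biased high by ~30%.
rng = np.random.default_rng(2)
p = rng.uniform(1.0, 10.0, 40)            # regional predictions, P
y = 0.7 * p + rng.normal(0, 0.2, 40)      # observed local loads
intercept, slope = map_single_factor(p, y)
adjusted = intercept + slope * 5.0        # adjusted prediction for P = 5
```

    The other MAPs extend this regression with additional local explanatory variables or blend it with a purely local model, which is why split-sample validation is needed to pick among them.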

  13. Race and Gender Influences on Adjustment in Early Adolescence: Investigation of an Integrative Model.

    ERIC Educational Resources Information Center

    DuBois, David L.; Burk-Braxton, Carol; Swenson, Lance P.; Tevendale, Heather D.; Hardesty, Jennifer L.

    2002-01-01

    Investigated the influence of racial and gender discrimination and difficulties on adolescent adjustment. Found that discrimination and hassles contribute to a general stress context which in turn influences emotional and behavioral problems in adjustment, while racial and gender identity positively affect self-esteem and thus adjustment. Revealed…

  14. Social Support and Psychological Adjustment Among Latinas With Arthritis: A Test of a Theoretical Model

    PubMed Central

    Abraído-Lanza, Ana F.

    2013-01-01

    Background: Among people coping with chronic illness, tangible social support sometimes has unintended negative consequences on the recipient's psychological health. Identity processes may help explain these effects. Individuals derive self-worth and a sense of competence by enacting social roles that are central to the self-concept. Purpose: This study tested a model drawing from some of these theoretical propositions. The central hypothesis was that tangible support in fulfilling a highly valued role undermines self-esteem and a sense of self-efficacy, which, in turn, affect psychological adjustment. Methods: Structured interviews were conducted with 98 Latina women with arthritis who rated the homemaker identity as being of central importance to the self-concept. Results: A path analysis indicated that, contrary to predictions, tangible housework support was related to less psychological distress. Emotional support predicted greater psychological well-being. These relationships were not mediated by self-esteem or self-efficacy. Qualitative data revealed that half of the sample expressed either ambivalent or negative feelings about receiving housework support. Conclusions: Results may reflect social and cultural norms concerning the types of support that are helpful and appropriate from specific support providers. Future research should consider the cultural meaning and normative context of the support transaction. This study contributes to scarce literatures on the mechanisms that mediate the relationship between social support and adjustment, as well as illness and psychosocial adaptation among Latina women with chronic illness. PMID:15184092

  15. Adjusting Satellite Rainfall Error in Mountainous Areas for Flood Modeling Applications

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Anagnostou, E. N.; Astitha, M.; Vergara, H. J.; Gourley, J. J.; Hong, Y.

    2014-12-01

    This study aims to investigate the use of high-resolution Numerical Weather Prediction (NWP) for evaluating biases of satellite rainfall estimates of flood-inducing storms in mountainous areas and the associated improvements in flood modeling. Satellite-retrieved precipitation has been considered a feasible data source for global-scale flood modeling, given that satellites have a spatial coverage advantage over in situ (rain gauge and radar) observations, particularly over mountainous areas. However, orographically induced heavy precipitation events tend to be underestimated and spatially smoothed by satellite products, and this error propagates non-linearly in flood simulations. We apply a recently developed retrieval error and resolution effect correction method (Zhang et al. 2013*) to the NOAA Climate Prediction Center morphing technique (CMORPH) product based on NWP analysis (or forecasting in the case of real-time satellite products). The NWP rainfall is derived from the Weather Research and Forecasting Model (WRF) set up with high spatial resolution (1-2 km) and explicit treatment of precipitation microphysics. In this study we will show results on NWP-adjusted CMORPH rain rates based on tropical cyclones and a convective precipitation event measured during NASA's IPHEX experiment in the southern Appalachian region. We will use hydrologic simulations over different basins in the region to evaluate the propagation of bias correction in flood simulations. We show that the adjustment reduced the underestimation of high rain rates, thus moderating the strong rainfall-magnitude dependence of CMORPH rainfall bias, which results in significant improvement in flood peak simulations. A further study over the Blue Nile Basin (western Ethiopia) will be included in the presentation. *Zhang, X. et al. 2013: Using NWP Simulations in Satellite Rainfall Estimation of Heavy Precipitation Events over Mountainous Areas. J. Hydrometeor, 14, 1844-1858.
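    One hedged way to sketch a rain-rate-dependent bias adjustment of this general kind (an illustrative formulation, not the Zhang et al. 2013 method) is to regress the log multiplicative bias against an NWP reference on log satellite rain rate, then apply the fitted curve:

```python
import numpy as np

def fit_rate_dependent_bias(sat, ref):
    """Sketch of a rain-rate-dependent bias adjustment: regress
    log(reference/satellite) on log(satellite rate) over matched wet
    samples, then correct new satellite rates with the fitted curve.
    'ref' stands in for an NWP (e.g. WRF) rainfall analysis."""
    mask = (sat > 0.1) & (ref > 0.1)            # wet-wet pairs only
    x = np.log(sat[mask])
    z = np.log(ref[mask] / sat[mask])           # log multiplicative bias
    A = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(A, z, rcond=None)
    return lambda r: r * np.exp(a + b * np.log(np.maximum(r, 0.1)))

# Synthetic rates: the "satellite" underestimates heavy rain (power < 1).
rng = np.random.default_rng(4)
true = rng.gamma(2.0, 3.0, 2000)                      # reference rain rates
sat = true ** 0.8 * np.exp(rng.normal(0, 0.1, 2000))  # smoothed retrieval
adjust = fit_rate_dependent_bias(sat, true)
# adjusted heavy rates sit much closer to the reference than raw ones
```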

  16. A data-driven model of present-day glacial isostatic adjustment in North America

    NASA Astrophysics Data System (ADS)

    Simon, Karen; Riva, Riccardo

    2016-04-01

    Geodetic measurements of gravity change and vertical land motion are incorporated into an a priori model of present-day glacial isostatic adjustment (GIA) via least-squares inversion. The result is an updated model of present-day GIA wherein the final predicted signal is informed by both observational data with realistic errors, and prior knowledge of GIA inferred from forward models. This method and other similar techniques have been implemented within a limited but growing number of GIA studies (e.g., Hill et al. 2010). The combination method allows calculation of the uncertainties of predicted GIA fields, and thus offers a significant advantage over predictions from purely forward GIA models. Here, we show the results of using the combination approach to predict present-day rates of GIA in North America through the incorporation of both GPS-measured vertical land motion rates and GRACE-measured gravity observations into the prior model. In order to assess the influence of each dataset on the final GIA prediction, the vertical motion and gravimetry datasets are incorporated into the model first independently (i.e., one dataset only), then simultaneously. Because the a priori GIA model and its associated covariance are developed by averaging predictions from a suite of forward models that varies aspects of the Earth rheology and ice sheet history, the final GIA model is not independent of forward model predictions. However, we determine the sensitivity of the final model result to the prior GIA model information by using different representations of the input model covariance. We show that when both datasets are incorporated into the inversion, the final model adequately predicts available observational constraints, minimizes the uncertainty associated with the forward modelled GIA inputs, and includes a realistic estimation of the formal error associated with the GIA process. 
Along parts of the North American coastline, improved predictions of the long-term (kyr
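    The least-squares combination step has a compact closed form: a prior field x0 with covariance P0 is updated by observations y = Hx + noise with covariance R. A toy two-point sketch (all numbers invented):

```python
import numpy as np

def combine_prior_with_data(x0, P0, H, y, R):
    """Least-squares combination of a prior model (x0, covariance P0)
    with observations y = H x + noise (covariance R), as used to update
    a forward-modelled GIA field with GPS/GRACE data."""
    S = H @ P0 @ H.T + R
    K = P0 @ H.T @ np.linalg.inv(S)       # gain
    x_hat = x0 + K @ (y - H @ x0)         # posterior field
    P_hat = P0 - K @ H @ P0               # posterior covariance
    return x_hat, P_hat

# Two-point toy field: prior uplift rates with large, correlated errors,
# and one direct GPS observation of the first point.
x0 = np.array([3.0, 1.0])                 # mm/yr, prior GIA uplift
P0 = np.array([[4.0, 2.0], [2.0, 4.0]])   # correlated prior errors
H = np.array([[1.0, 0.0]])                # observe point 1 only
y = np.array([5.0])                       # GPS rate at point 1
R = np.array([[1.0]])
x_hat, P_hat = combine_prior_with_data(x0, P0, H, y, R)
# The observed point moves toward 5; the correlated neighbour moves too,
# and both posterior variances shrink below the prior's.
```

    The GPS observation at one point also pulls the unobserved neighbour through the prior covariance, and the posterior covariance quantifies the remaining uncertainty, which is the advantage the abstract cites over purely forward models.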

  17. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples.

    PubMed

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-05

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical use.
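    The idea of keeping the slave coefficients close in profile to the master's while fitting a few slave spectra can be illustrated with a ridge-style constrained least squares. This is a hedged sketch of the concept, not the paper's exact optimizer or data:

```python
import numpy as np

def transfer_coefficients(b_master, X_slave, y_slave, lam=1.0):
    """Illustrative LMC-style transfer: find slave-model coefficients
    that fit a few slave-instrument spectra while staying close in
    profile to the master model's coefficients (penalty weight lam)."""
    n_vars = X_slave.shape[1]
    A = X_slave.T @ X_slave + lam * np.eye(n_vars)
    return np.linalg.solve(A, X_slave.T @ y_slave + lam * b_master)

# Toy spectra: the slave relationship is a slightly rescaled master model,
# and only 10 transfer spectra are available for 20 wavelengths.
rng = np.random.default_rng(3)
b_master = rng.normal(0, 1, 20)
X_slave = rng.normal(0, 1, (10, 20))
y_slave = X_slave @ (1.1 * b_master) + rng.normal(0, 0.01, 10)
b_slave = transfer_coefficients(b_master, X_slave, y_slave, lam=5.0)
```

    With fewer samples than wavelengths the data alone cannot determine the coefficients; the penalty fills the unconstrained directions with the master profile, which is the role the profile-similarity assumption plays in the method.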

  18. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical use.

  19. The use of satellites in gravity field determination and model adjustment

    NASA Astrophysics Data System (ADS)

    Visser, Petrus Nicolaas Anna Maria

    1992-06-01

    Methods to improve gravity field models of the Earth using available data from satellite observations are proposed and discussed. In principle, all types of satellite observations mentioned give information on satellite orbit perturbations and, by extension, on the Earth's gravity field, because satellite orbits are affected most strongly by the Earth's gravity field. Therefore, two subjects are addressed: representation forms of the gravity field of the Earth and the theory of satellite orbit perturbations. An analytical orbit perturbation theory is presented and shown to be sufficiently accurate for describing satellite orbit perturbations if certain conditions are fulfilled. Gravity field adjustment experiments using the analytical orbit perturbation theory are discussed using real satellite observations. These observations consisted of Seasat laser range measurements and crossover differences, and of Geosat altimeter measurements and crossover differences. A look into the future, particularly relating to the ARISTOTELES (Applications and Research Involving Space Techniques for the Observation of the Earth's field from Low Earth Orbit Spacecraft) mission, is given.

  20. Positive Adjustment Among American Repatriated Prisoners of the Vietnam War: Modeling the Long-Term Effects of Captivity.

    PubMed

    King, Daniel W; King, Lynda A; Park, Crystal L; Lee, Lewina O; Kaiser, Anica Pless; Spiro, Avron; Moore, Jeffrey L; Kaloupek, Danny G; Keane, Terence M

    2015-11-01

    A longitudinal lifespan model of factors contributing to later-life positive adjustment was tested on 567 American repatriated prisoners from the Vietnam War. This model encompassed demographics at time of capture and attributes assessed after return to the U.S. (reports of torture and mental distress) and approximately 3 decades later (later-life stressors, perceived social support, positive appraisal of military experiences, and positive adjustment). Age and education at time of capture and physical torture were associated with repatriation mental distress, which directly predicted poorer adjustment 30 years later. Physical torture also had a salutary effect, enhancing later-life positive appraisals of military experiences. Later-life events were directly and indirectly (through concerns about retirement) associated with positive adjustment. Results suggest that the personal resources of older age and more education and early-life adverse experiences can have cascading effects over the lifespan to impact well-being in both positive and negative ways.

  1. Positive Adjustment Among American Repatriated Prisoners of the Vietnam War: Modeling the Long-Term Effects of Captivity

    PubMed Central

    King, Daniel W.; King, Lynda A.; Park, Crystal L.; Lee, Lewina O.; Kaiser, Anica Pless; Spiro, Avron; Moore, Jeffrey L.; Kaloupek, Danny G.; Keane, Terence M.

    2015-01-01

    A longitudinal lifespan model of factors contributing to later-life positive adjustment was tested on 567 American repatriated prisoners from the Vietnam War. This model encompassed demographics at time of capture and attributes assessed after return to the U.S. (reports of torture and mental distress) and approximately 3 decades later (later-life stressors, perceived social support, positive appraisal of military experiences, and positive adjustment). Age and education at time of capture and physical torture were associated with repatriation mental distress, which directly predicted poorer adjustment 30 years later. Physical torture also had a salutary effect, enhancing later-life positive appraisals of military experiences. Later-life events were directly and indirectly (through concerns about retirement) associated with positive adjustment. Results suggest that the personal resources of older age and more education and early-life adverse experiences can have cascading effects over the lifespan to impact well-being in both positive and negative ways. PMID:26693100

  2. Adjustment of regional climate model output for modeling the climatic mass balance of all glaciers on Svalbard.

    PubMed

    Möller, Marco; Obleitner, Friedrich; Reijmer, Carleen H; Pohjola, Veijo A; Głowacki, Piotr; Kohler, Jack

    2016-05-27

    Large-scale modeling of glacier mass balance relies often on the output from regional climate models (RCMs). However, the limited accuracy and spatial resolution of RCM output pose limitations on mass balance simulations at subregional or local scales. Moreover, RCM output is still rarely available over larger regions or for longer time periods. This study evaluates the extent to which it is possible to derive reliable region-wide glacier mass balance estimates, using coarse resolution (10 km) RCM output for model forcing. Our data cover the entire Svalbard archipelago over one decade. To calculate mass balance, we use an index-based model. Model parameters are not calibrated, but the RCM air temperature and precipitation fields are adjusted using in situ mass balance measurements as reference. We compare two different calibration methods: root mean square error minimization and regression optimization. The obtained air temperature shifts (+1.43°C versus +2.22°C) and precipitation scaling factors (1.23 versus 1.86) differ considerably between the two methods, which we attribute to inhomogeneities in the spatiotemporal distribution of the reference data. Our modeling suggests a mean annual climatic mass balance of -0.05 ± 0.40 m w.e. a⁻¹ for Svalbard over 2000-2011 and a mean equilibrium line altitude of 452 ± 200 m above sea level. We find that the limited spatial resolution of the RCM forcing with respect to real surface topography and the usage of spatially homogeneous RCM output adjustments and mass balance model parameters are responsible for much of the modeling uncertainty. Sensitivity of the results to model parameter uncertainty is comparably small and of minor importance.
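    A minimal index-model sketch shows how the two RCM adjustments enter: an additive temperature shift and a multiplicative precipitation factor. The +1.43°C and 1.23 defaults below are the paper's RMSE-minimization estimates; the degree-day factor, snow threshold, and toy forcing are invented for illustration:

```python
import numpy as np

def climatic_mass_balance(t_rcm, p_rcm, dt=1.43, pf=1.23,
                          ddf=0.004, t_snow=1.0):
    """Index-model sketch: adjust RCM air temperature by an additive
    shift dt (degC) and precipitation by a scaling factor pf, then
    compute daily balance as snow accumulation minus degree-day melt.
    ddf is a degree-day factor in m w.e. per degC per day."""
    t = t_rcm + dt
    p = p_rcm * pf
    accumulation = np.where(t < t_snow, p, 0.0)   # solid precipitation only
    melt = ddf * np.maximum(t, 0.0)               # degree-day melt
    return float(np.sum(accumulation - melt))     # m w.e. over the period

# One hypothetical summer month at a low-elevation site.
t_daily = np.full(30, 2.0)           # degC before adjustment
p_daily = np.full(30, 0.002)         # m w.e. per day
balance = climatic_mass_balance(t_daily, p_daily)
print(balance)   # about -0.41 m w.e.: melt dominates at this site
```

    Summing such daily balances over a mass-balance year and over the model grid, with elevation-dependent forcing, gives the region-wide climatic mass balance reported in the abstract.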

  3. Introduction to multivariate discrimination

    NASA Astrophysics Data System (ADS)

    Kégl, Balázs

    2013-07-01

    Multivariate discrimination or classification is one of the best-studied problems in machine learning, with a plethora of well-tested and well-performing algorithms. There are also several good general textbooks [1-9] on the subject, written for an average engineering, computer science, or statistics graduate student; most of them are also accessible to an average physics student with some background in computer science and statistics. Hence, instead of writing a generic introduction, we concentrate here on relating the subject to the practice of experimental physics. After a short introduction on the basic setup (Section 1) we delve into the practical issues of complexity regularization, model selection, and hyperparameter optimization (Section 2), since it is this step that makes high-complexity non-parametric fitting so different from low-dimensional parametric fitting. To emphasize that this issue is not restricted to classification, we illustrate the concept on a low-dimensional but non-parametric regression example (Section 2.1). Section 3 describes the common algorithmic-statistical formal framework that unifies the main families of multivariate classification algorithms. We explain here the large-margin principle that partly explains why these algorithms work. Section 4 is devoted to the description of the three main (families of) classification algorithms: neural networks, the support vector machine, and AdaBoost. We do not go into the algorithmic details; the goal is to give an overview of the form of the functions these methods learn and of the objective functions they optimize. Besides their technical description, we also make an attempt to put these algorithms into a socio-historical context. We then briefly describe some rather heterogeneous applications to illustrate the pattern recognition pipeline and to show how widespread the use of these methods is (Section 5). 
    We conclude the chapter with three essentially open research problems that are either

  4. Reciprocal Benefits of Mass-Univariate and Multivariate Modeling in Brain Mapping: Applications to Event-Related Functional MRI, H2 15O-, and FDG-PET

    PubMed Central

    Habeck, Christian G.

    2006-01-01

    In brain mapping studies of sensory, cognitive, and motor operations, specific waveforms of dynamic neural activity are predicted based on theoretical models of human information processing. For example, in event-related functional MRI (fMRI), the general linear model (GLM) is employed in mass-univariate analyses to identify the regions whose dynamic activity closely matches the expected waveforms. By comparison, multivariate analyses based on PCA or ICA provide greater flexibility in detecting spatiotemporal properties of experimental data that may strongly support alternative neuroscientific explanations. We investigated conjoint multivariate and mass-univariate analyses that combine the capabilities to (1) verify activation of neural machinery we already understand and (2) discover reliable signatures of new neural machinery. We examined combinations of GLM and PCA that recover latent neural signals (waveforms and footprints) with greater accuracy than either method alone. Comparative results are illustrated with analyses of real fMRI data, supported by Monte Carlo simulations. PMID:23165047

  5. Multivariate analysis in thoracic research.

    PubMed

    Mengual-Macenlle, Noemí; Marcos, Pedro J; Golpe, Rafael; González-Rivas, Diego

    2015-03-01

    Multivariate analysis is based on the observation and analysis of more than one statistical outcome variable at a time. In design and analysis, the technique is used to perform trade studies across multiple dimensions while taking into account the effects of all variables on the responses of interest. Multivariate methods emerged to analyze large databases and increasingly complex data. Since the best way to represent knowledge of reality is modeling, we should use multivariate statistical methods. Multivariate methods are designed to analyze data sets simultaneously, i.e., to analyze different variables for each person or object studied. Keep in mind at all times that all variables must be treated in a way that accurately reflects the reality of the problem addressed. There are different types of multivariate analysis, and each one should be employed according to the type of variables to analyze: dependence, interdependence, and structural methods. In conclusion, multivariate methods are ideal for the analysis of large data sets and for finding cause-and-effect relationships between variables; there is a wide range of analysis types that we can use.

  6. Multivariate analysis in thoracic research

    PubMed Central

    Mengual-Macenlle, Noemí; Marcos, Pedro J.; Golpe, Rafael

    2015-01-01

    Multivariate analysis is based on the observation and analysis of more than one statistical outcome variable at a time. In design and analysis, the technique is used to perform trade studies across multiple dimensions while taking into account the effects of all variables on the responses of interest. Multivariate methods emerged to analyze large databases and increasingly complex data. Since the best way to represent knowledge of reality is modeling, we should use multivariate statistical methods. Multivariate methods are designed to analyze data sets simultaneously, i.e., to analyze different variables for each person or object studied. Keep in mind at all times that all variables must be treated in a way that accurately reflects the reality of the problem addressed. There are different types of multivariate analysis, and each one should be employed according to the type of variables to analyze: dependence, interdependence, and structural methods. In conclusion, multivariate methods are ideal for the analysis of large data sets and for finding cause-and-effect relationships between variables; there is a wide range of analysis types that we can use. PMID:25922743

  7. Multivariate Longitudinal Analysis with Bivariate Correlation Test.

    PubMed

    Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory

    2016-01-01

    In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model's parameters estimators. These estimators can be used in the framework of the multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated.
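    The likelihood ratio test itself is standard once the joint ("full") and independent ("reduced") models have been fitted: twice the log-likelihood difference is compared to a chi-square distribution with one degree of freedom per constrained correlation. A sketch for the one-correlation case, with hypothetical fitted log-likelihood values:

```python
import math

def lrt_correlation(loglik_full, loglik_reduced, df=1):
    """Likelihood ratio test for the correlation between the random
    effects of two outcomes: 'reduced' fits them independently (rho = 0),
    'full' fits them jointly. Returns the LR statistic and, for df = 1,
    its chi-square p-value (survival function via erfc)."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    if df != 1:
        raise NotImplementedError("sketch handles df = 1 only")
    p_value = math.erfc(math.sqrt(max(stat, 0.0) / 2.0))
    return stat, p_value

# Hypothetical fitted log-likelihoods from the joint and separate models.
stat, p = lrt_correlation(loglik_full=-1520.3, loglik_reduced=-1523.8)
print(round(stat, 2), round(p, 4))   # stat = 7.0 -> p around 0.008
```

    A small p-value argues for modeling the two dependent variables jointly; a large one suggests separate univariate mixed models suffice.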

  8. Nonlinear relative-proportion-based route adjustment process for day-to-day traffic dynamics: modeling, equilibrium and stability analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng

    2016-11-01

    Travelers' route adjustment behaviors in a congested road traffic network can be viewed as a dynamic game among travelers. Proportional-Switch Adjustment Process (PSAP) models have been extensively investigated to characterize travelers' route choice behaviors, since PSAP has a concise structure and an intuitive behavior rule. Unfortunately, most of them have limitations, e.g., the flow over-adjustment problem of the discrete PSAP model and the reliance on absolute cost differences for route adjustment. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that retains the advantages of PSAP while overcoming these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to lower-cost alternatives at a rate that depends solely on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among user equilibrium (UE), the stationary path flow pattern and the stationary link flow pattern is established, which can be used to judge whether a given network traffic flow has reached UE by detecting whether the link flow pattern is stationary. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
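
    The relative-proportion rule can be illustrated with a toy two-route network (the link costs, demand and adjustment rate below are invented, not taken from the paper): flow leaves the higher-cost route at a rate proportional to the relative, rather than absolute, cost difference, and the day-to-day dynamics settle at user equilibrium where the two route costs are equal.

```python
# Hypothetical two-route network with linear costs c_i(f) = a_i + b_i * f.
a = [10.0, 15.0]
b = [0.02, 0.01]
demand = 1000.0
rate = 0.3                     # behavioral adjustment rate (hypothetical)

flows = [demand, 0.0]          # day 0: everyone on route 0
for day in range(500):
    costs = [a[i] + b[i] * flows[i] for i in range(2)]
    hi, lo = (0, 1) if costs[0] > costs[1] else (1, 0)
    # rePRAP-style switching: proportional to the RELATIVE cost gap,
    # so the shifted flow never exceeds the flow on the costlier route.
    shift = rate * flows[hi] * (costs[hi] - costs[lo]) / costs[hi]
    flows[hi] -= shift
    flows[lo] += shift

costs = [a[i] + b[i] * flows[i] for i in range(2)]
print(flows, costs)            # both routes end with equal cost (UE)
```

    Because the shift is a fraction of the flow on the costlier route, the over-adjustment that plagues discrete absolute-difference schemes cannot occur: the shifted flow is bounded by what is actually there to shift.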

  9. Determining the Number of Component Clusters in the Standard Multivariate Normal Mixture Model Using Model-Selection Criteria.

    DTIC Science & Technology

    1983-06-16

    Determining the number of component clusters in the standard multivariate normal mixture model using model-selection criteria. Hamparsum Bozdogan, Department of Quantitative Methods, University of Illinois at Chicago Circle; supported by the Army Research Office under contract DAAG29-82-K-0155, "Statistical Models and Methods for Cluster Analysis."

  10. A new glacial isostatic adjustment model of the Innuitian Ice Sheet, Arctic Canada

    NASA Astrophysics Data System (ADS)

    Simon, K. M.; James, T. S.; Dyke, A. S.

    2015-07-01

    A reconstruction of the Innuitian Ice Sheet (IIS) is developed that incorporates first-order constraints on its spatial extent and history as suggested by regional glacial geology studies. Glacial isostatic adjustment modelling of this ice sheet provides relative sea-level predictions that are in good agreement with measurements of post-glacial sea-level change at 18 locations. The results indicate peak thicknesses of the Innuitian Ice Sheet of approximately 1600 m, up to 400 m thicker than the minimum peak thicknesses estimated from glacial geology studies, but approximately 1000 to 1500 m thinner than the peak thicknesses present in previous GIA models. The thickness history of the best-fit Innuitian Ice Sheet model developed here, termed SJD15, differs from the ICE-5G reconstruction and provides an improved fit to sea-level measurements from the lowland sector of the ice sheet. Both models provide a similar fit to relative sea-level measurements from the alpine sector. The vertical crustal motion predictions of the best-fit IIS model are in general agreement with limited GPS observations, after correction for a significant elastic crustal response to present-day ice mass change. The new model provides approximately 2.7 m equivalent contribution to global sea-level rise, an increase of +0.6 m compared to the Innuitian portion of ICE-5G. SJD15 is qualitatively more similar to the recent ICE-6G ice sheet reconstruction, which appears to also include more spatially extensive ice cover in the Innuitian region than ICE-5G.

  11. Parameter Sensitivity in Multivariate Methods

    ERIC Educational Resources Information Center

    Green, Bert F., Jr.

    1977-01-01

    Interpretation of multivariate models requires knowing how much the fit of the model is impaired by changes in the parameters. The relation of parameter change to loss of goodness of fit can be called parameter sensitivity. Formulas are presented for assessing the sensitivity of multiple regression and principal component weights. (Author/JKS)
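
    The notion of parameter sensitivity can be sketched numerically for multiple regression (the data and the perturbation size below are made up): since least squares maximizes goodness of fit, any change in a weight lowers R-squared, and the size of the drop measures that parameter's sensitivity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=200)

w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares weights

def r2(w):
    """Coefficient of determination for weight vector w."""
    resid = y - X @ w
    tss = ((y - y.mean()) ** 2).sum()
    return 1.0 - (resid ** 2).sum() / tss

# Sensitivity: loss of fit when each weight is perturbed by +0.1.
for j in range(3):
    w = w_ls.copy()
    w[j] += 0.1
    print(f"w[{j}]: R^2 {r2(w_ls):.4f} -> {r2(w):.4f}")
```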

  12. Development of a GIA (Glacial Isostatic Adjustment) - Fault Model of Greenland

    NASA Astrophysics Data System (ADS)

    Steffen, R.; Lund, B.

    2015-12-01

    The increase in sea level due to climate change is an intensely discussed phenomenon, while less attention is being paid to the change in earthquake activity that may accompany disappearing ice masses. The melting of the Greenland Ice Sheet, for example, induces changes in the crustal stress field, which could result in the activation of existing faults and the generation of destructive earthquakes. Such glacially induced earthquakes are known to have occurred in Fennoscandia 10,000 years ago. Within a new project ("Glacially induced earthquakes in Greenland", starting in October 2015), we will analyse the potential for glacially induced earthquakes in Greenland due to the ongoing melting. The objectives include the development of a three-dimensional (3D) subsurface model of Greenland, which is based on geologic, geophysical and geodetic datasets, and which also fulfils the boundary conditions of glacial isostatic adjustment (GIA) modelling. Here we will present an overview of the project, including the most recently available datasets and the methodologies needed for model construction and the simulation of GIA induced earthquakes.

  13. Firefly algorithm versus genetic algorithm as powerful variable selection tools and their effect on different multivariate calibration models in spectroscopy: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2017-01-01

    For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models, namely concentration residual augmented classical least squares, artificial neural networks and support vector regression, applied to UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was carried out. The results revealed the superiority of the new algorithm over the genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found among the models regarding their predictive ability. This ensures that simpler and faster models can be obtained without any deterioration in calibration quality.

  14. Case-mix adjustment of the National CAHPS benchmarking data 1.0: a violation of model assumptions?

    PubMed Central

    Elliott, M N; Swartz, R; Adams, J; Spritzer, K L; Hays, R D

    2001-01-01

    OBJECTIVE: To compare models for the case-mix adjustment of consumer reports and ratings of health care. DATA SOURCES: The study used the Consumer Assessment of Health Plans (CAHPS) survey 1.0 National CAHPS Benchmarking Database data from 54 commercial and 31 Medicaid health plans from across the United States: 19,541 adults (age ≥ 18 years) in commercial plans and 8,813 adults in Medicaid plans responded regarding their own health care, and 9,871 Medicaid adults responded regarding the health care of their minor children. STUDY DESIGN: Four case-mix models (no adjustment; self-rated health and age; health, age, and education; and health, age, education, and plan interactions) were compared on 21 ratings and reports regarding health care for three populations (adults in commercial plans, adults in Medicaid plans, and children in Medicaid plans). The magnitude of case-mix adjustments, the effects of adjustments on plan rankings, and the homogeneity of these effects across plans were examined. DATA EXTRACTION: All ratings and reports were linearly transformed to a possible range of 0 to 100 for comparability. PRINCIPAL FINDINGS: Case-mix adjusters, especially self-rated health, have substantial effects, but these effects vary substantially from plan to plan, a violation of standard case-mix assumptions. CONCLUSION: Case-mix adjustment of CAHPS data needs to be re-examined, perhaps by using demographically stratified reporting or by developing better measures of response bias. PMID:11482589
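
    A toy version of such a case-mix adjustment (all plan effects, adjuster effects and sample sizes below are fabricated, and only the health-and-age adjuster set is sketched) regresses the rating on the adjusters and compares raw with adjusted plan means; the plan whose members are healthier loses part of its raw advantage after adjustment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3000
plan = rng.integers(0, 3, size=n)        # three hypothetical health plans
health = rng.normal(size=n)              # self-rated health (standardized)
age = rng.normal(size=n)                 # age (standardized)
health += 0.5 * (plan == 2)              # plan 2 enrolls healthier members
rating = 70 + 3 * plan + 4 * health + rng.normal(scale=5, size=n)

# Case-mix model: regress the rating on the adjusters (health, age) and
# subtract their estimated effects before comparing plans.
Z = np.column_stack([np.ones(n), health, age])
beta, *_ = np.linalg.lstsq(Z, rating, rcond=None)
adjusted = rating - Z[:, 1:] @ beta[1:]

for p in range(3):
    print(f"plan {p}: raw {rating[plan == p].mean():.1f}, "
          f"adjusted {adjusted[plan == p].mean():.1f}")
```

    Note that this pools one adjustment model across plans; the paper's finding is precisely that adjuster effects differ from plan to plan, which such a pooled model cannot capture.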

  15. ESTIMATION OF EMISSION ADJUSTMENTS FROM THE APPLICATION OF FOUR-DIMENSIONAL DATA ASSIMILATION TO PHOTOCHEMICAL AIR QUALITY MODELING. (R826372)

    EPA Science Inventory

    Four-dimensional data assimilation applied to photochemical air quality modeling is used to suggest adjustments to the emissions inventory of the Atlanta, Georgia metropolitan area. In this approach, a three-dimensional air quality model, coupled with direct sensitivity analys...

  16. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    USGS Publications Warehouse

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

    Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
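
    The direction of the bias is easy to see in a stripped-down version of the problem (the detection probability, its estimator from two within-season sightings, and all numbers below are illustrative only; the paper's multistate model is considerably more general): if a true breeder is recorded as a breeder only with probability delta (her first-year calf is seen) and nonbreeders are never recorded as breeders, the observed breeding proportion is delta times the true one.

```python
def delta_from_double_sightings(frac_both):
    """With two independent classification occasions, a breeder is recorded
    as a breeder both times w.p. delta**2 and at least once w.p.
    delta * (2 - delta); their ratio delta / (2 - delta) is observable,
    so delta = 2 * frac_both / (1 + frac_both)."""
    return 2.0 * frac_both / (1.0 + frac_both)

def adjust_breeding_prop(p_obs, delta):
    """Correct the naive proportion: p_obs = delta * p_true."""
    return p_obs / delta

delta = delta_from_double_sightings(frac_both=0.36)   # hypothetical input
p_true = adjust_breeding_prop(p_obs=0.31, delta=delta)
print(f"delta = {delta:.2f}, adjusted proportion = {p_true:.2f}")
```

    The one-way structure of the error (breeders mistaken for nonbreeders, never the reverse) is what guarantees that the naive estimate is biased low, matching the direction of the correction reported in the abstract.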

  17. MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES

    PubMed Central

    Dunson, David B.

    2013-01-01

    Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563

  18. Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Sahimi, Muhammad

    2016-03-01

    In recent years, higher-order geostatistical methods have been used for modeling of a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endow them with the capability for generating realistic realizations of porous formations with very complex channels, as well as features that are mainly a barrier to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly-connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion cell models in a matter of a few CPU seconds. The method is, however, sensitive to the patterns' specifications, such as boundaries and the number of replicates. In this paper the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneities in porous media. First, an effective boundary-correction method based on graph theory is presented by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and is tested using various complex two- and three-dimensional examples. It should, however, be emphasized that the methods that we propose in this paper are applicable to other pattern-based geostatistical simulation methods.

  19. Adjustments of the TaD electron density reconstruction model with GNSS-TEC parameters for operational application purposes

    NASA Astrophysics Data System (ADS)

    Kutiev, Ivan; Marinov, Pencho; Fidanova, Stefka; Belehaki, Anna; Tsagouri, Ioanna

    2012-12-01

    Validation results for the latest version of the TaD model (TaDv2) show realistic reconstruction of the electron density profiles (EDPs), with an average error of 3 TECU, similar to the error obtained for GNSS-TEC calculated parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated from the TaD model with the TEC parameter calculated from GNSS RINEX files provided by receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal; the model-estimated topside scale height is then wrong, leading to unrealistic modeled EDPs. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height despite inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.

  20. An assessment of the ICE6G_C(VM5a) glacial isostatic adjustment model

    NASA Astrophysics Data System (ADS)

    Purcell, A.; Tregoning, P.; Dehecq, A.

    2016-05-01

    The recent release of the next-generation global ice history model, ICE6G_C(VM5a), is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology, and, of course, geodynamics (Earth rheology studies). In this paper we make an assessment of some aspects of the ICE6G_C(VM5a) model and show that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ˜8.6 mm/yr) and beneath the Ross Ice Shelf (by ˜5 mm/yr). Furthermore, the published spherical harmonic coefficients—which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA)—contain excessive power for degree ≥90, do not agree with physical expectations and do not represent accurately the ICE6G_C(VM5a) model. We show that the excessive power in the high-degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. (2011) is applied, but when correct Stokes coefficients are used, the empirical relationship produces excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. (2011). Using the Australian National University (ANU) group's CALSEA software package, we recompute the present-day GIA signal for the ice thickness history and Earth rheology used by Peltier et al. (2015) and provide dimensionless Stokes coefficients that can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals. We denote the new data sets as ICE6G_ANU.

  1. Comparison of Two Foreign Body Retrieval Devices with Adjustable Loops in a Swine Model

    SciTech Connect

    Konya, Andras

    2006-12-15

    The purpose of the study was to compare two similar foreign body retrieval devices, the Texan™ (TX) and the Texan LONGhorn™ (TX-LG), in a swine model. Both devices feature a ≤30-mm adjustable loop. Capture times and total procedure times for retrieving foreign bodies from the infrarenal aorta, inferior vena cava, and stomach were compared. All attempts with both devices (TX, n = 15; TX-LG, n = 14) were successful. Foreign bodies in the vasculature were captured quickly using both devices (mean ± SD, 88 ± 106 sec for TX vs 67 ± 42 sec for TX-LG) with no significant difference between them. The TX-LG, however, allowed significantly better capture times than the TX in the stomach (p = 0.022). Overall, capture times for the TX-LG were significantly better than for the TX (p = 0.029). There was no significant difference between the total procedure times in any anatomic region. The TX-LG performed significantly better than the TX in the stomach and therefore overall; its better torque control and maneuverability resulted in better performance in large anatomic spaces.

  2. DasPy 1.0 - the Open Source Multivariate Land Data Assimilation Framework in combination with the Community Land Model 4.5

    NASA Astrophysics Data System (ADS)

    Han, X.; Li, X.; He, G.; Kumbhar, P.; Montzka, C.; Kollet, S.; Miyoshi, T.; Rosolem, R.; Zhang, Y.; Vereecken, H.; Franssen, H.-J. H.

    2015-08-01

    Data assimilation has become a popular method to integrate observations from multiple sources with land surface models to improve predictions of the water and energy cycles of the soil-vegetation-atmosphere continuum. Multivariate data assimilation refers to the simultaneous assimilation of observation data from multiple model state variables into a simulation model. In recent years, several land data assimilation systems have been developed in different research agencies. Because of the software availability or adaptability, these systems are not easy to apply for the purpose of multivariate land data assimilation research. We developed an open source multivariate land data assimilation framework (DasPy) which is implemented using the Python script language mixed with the C++ and Fortran programming languages. LETKF (Local Ensemble Transform Kalman Filter) is implemented as the main data assimilation algorithm, and uncertainties in the data assimilation can be introduced by perturbed atmospheric forcing data, and represented by perturbed soil and vegetation parameters and model initial conditions. The Community Land Model (CLM) was integrated as the model operator. The implementation allows also parameter estimation (soil properties and/or leaf area index) on the basis of the joint state and parameter estimation approach. The Community Microwave Emission Modelling platform (CMEM), COsmic-ray Soil Moisture Interaction Code (COSMIC) and the Two-Source Formulation (TSF) were integrated as observation operators for the assimilation of L-band passive microwave, cosmic-ray soil moisture probe and land surface temperature measurements, respectively. DasPy has been evaluated in several assimilation studies of neutron count intensity (soil moisture), L-band brightness temperature and land surface temperature. DasPy is parallelized using the hybrid Message Passing Interface and Open Multi-Processing techniques. All the input and output data flows are organized efficiently.
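
    The analysis step at the heart of LETKF can be sketched with a plain ensemble transform Kalman filter update (no localization, toy dimensions; this is a generic ETKF sketch, not DasPy's actual implementation):

```python
import numpy as np

def etkf_update(X, y, H, R):
    """One ensemble transform Kalman filter analysis step.
    X: (n, m) background ensemble of m state vectors,
    y: (p,) observation vector, H: (p, n) observation operator,
    R: (p, p) observation-error covariance."""
    n, m = X.shape
    xb = X.mean(axis=1, keepdims=True)
    Xp = X - xb                              # background perturbations
    Yp = H @ Xp                              # perturbations in obs space
    d = y - (H @ xb).ravel()                 # innovation
    Rinv = np.linalg.inv(R)
    C = Yp.T @ Rinv @ Yp
    vals, vecs = np.linalg.eigh((m - 1) * np.eye(m) + C)
    Pa = vecs @ np.diag(1.0 / vals) @ vecs.T          # weight covariance
    wbar = Pa @ Yp.T @ Rinv @ d                       # mean update weights
    Wa = vecs @ np.diag(np.sqrt((m - 1) / vals)) @ vecs.T
    return xb + Xp @ (wbar[:, None] + Wa)             # analysis ensemble

# Toy usage: 2-variable state, 20 members, both variables observed.
rng = np.random.default_rng(3)
Xb = 1.0 + 0.5 * rng.normal(size=(2, 20))
y = np.array([2.0, 2.5])
Xa = etkf_update(Xb, y, H=np.eye(2), R=0.1 * np.eye(2))
```

    The update pulls the ensemble mean toward the observations and shrinks the ensemble spread; LETKF applies this same transform independently in local regions so that distant observations do not contaminate the analysis.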

  3. Sensitivity assessment, adjustment, and comparison of mathematical models describing the migration of pesticides in soil using lysimetric data

    NASA Astrophysics Data System (ADS)

    Shein, E. V.; Kokoreva, A. A.; Gorbatov, V. S.; Umarova, A. B.; Kolupaeva, V. N.; Perevertin, K. A.

    2009-07-01

    The water blocks of physically founded models of different levels (the chromatographic PEARL model and the dual-porosity MACRO model) were parameterized using laboratory experimental data and tested against the results of studying the water regime of loamy soddy-podzolic soil in large lysimeters of the Experimental Soil Station of Moscow State University. The models were adapted using a stepwise approach, which involved the sequential assessment and adjustment of each submodel. Without adjustment of the water block, the models underestimated the lysimeter flow and overestimated the soil water content. The need for model adjustment is explained by the different scales of the experimental objects (soil samples) and the simulated phenomenon (soil profile). Adjusting the models by selecting the most sensitive hydrophysical parameters of the soils (the approximation parameters of the soil water retention curve (SWRC)) gave good agreement between the predicted moisture profiles and their actual values. Unlike the PEARL model, the MACRO model reliably described the migration of a pesticide through the soil profile, which confirms that physically founded models must account for preferential flow in the pore space for the prediction, analysis, optimization, and management of modern agricultural technologies.

  4. An assessment of the ICE6G_C (VM5A) glacial isostatic adjustment model

    NASA Astrophysics Data System (ADS)

    Purcell, Anthony; Tregoning, Paul; Dehecq, Amaury

    2016-04-01

    The recent release of the next-generation global ice history model, ICE6G_C(VM5a) [Peltier et al., 2015, Argus et al. 2014] is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology and, of course, geodynamics (Earth rheology studies). In this presentation I will assess some aspects of the ICE6G_C(VM5a) model and the accompanying published data sets. I will demonstrate that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ˜8.6 mm/yr) and beneath the Ross Ice Shelf (by ˜5 mm/yr). Further, the published spherical harmonic coefficients - which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA) - will be shown to contain excessive power for degree ≥ 90, to be physically implausible and to not represent accurately the ICE6G_C(VM5a) model. The excessive power in the high degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. [2011] is applied but, when correct Stokes coefficients are used, the empirical relationship will be shown to produce excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. [2011]. Finally, a global radial velocity field for the present-day GIA signal, and corresponding Stokes coefficients, will be presented for the ICE6G_C ice history using the VM5a rheology model. These results have been obtained using the ANU group's CALSEA software package and can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals without any of the shortcomings of the previously published data sets. We denote the new data sets ICE6G_ANU.

  5. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    NASA Astrophysics Data System (ADS)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.

  6. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    SciTech Connect

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V. Tkachenko, N. P.

    2015-12-15

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.

  7. Transfer of multivariate classification models between laboratory and process near-infrared spectrometers for the discrimination of green Arabica and Robusta coffee beans.

    PubMed

    Myles, Anthony J; Zimmerman, Tyler A; Brown, Steven D

    2006-10-01

    Analogous to the situation found in calibration, a classification model constructed from spectra measured on one instrument may not be valid for prediction of class from spectra measured on a second instrument. In this paper, the transfer of multivariate classification models between laboratory and process near-infrared spectrometers is investigated for the discrimination of whole, green Coffea arabica (Arabica) and Coffea canephora (Robusta) coffee beans. A modified version of slope/bias correction, orthogonal signal correction trained on a vector of discrete class identities, and model updating were found to perform well in the preprocessing of data to permit the transfer of a classification model developed on data from one instrument to be used on another instrument. These techniques permitted development of robust models for the discrimination of green coffee beans on both spectrometers and resulted in misclassification errors for the transfer process in the range of 5-10%.
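
    Slope/bias correction, one of the transfer techniques mentioned, can be sketched as follows (the paired scores below are invented, and the paper uses a modified version applied to classification outputs): fit an affine map between scores of the same samples measured on the two instruments, then invert it so slave-instrument scores can be fed to the master-instrument model.

```python
import numpy as np

# Hypothetical paired scores: the same discriminant score computed from
# spectra of identical samples on the master and slave instruments.
master = np.array([0.9, 1.1, 0.2, -0.8, -1.2, 0.1, 1.4, -0.5])
slave = np.array([1.2, 1.5, 0.5, -0.6, -1.0, 0.4, 1.8, -0.2])

# Fit slave = slope * master + bias, then invert the map.
slope, bias = np.polyfit(master, slave, 1)
corrected = (slave - bias) / slope

print(f"slope = {slope:.3f}, bias = {bias:.3f}")
```

    After the correction, the slave-side scores line up with the master scale, so the master instrument's class boundary can be reused without refitting the whole model.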

  8. Rejection, Feeling Bad, and Being Hurt: Using Multilevel Modeling to Clarify the Link between Peer Group Aggression and Adjustment

    ERIC Educational Resources Information Center

    Rulison, Kelly L.; Gest, Scott D.; Loken, Eric; Welsh, Janet A.

    2010-01-01

    The association between affiliating with aggressive peers and behavioral, social and psychological adjustment was examined. Students initially in 3rd, 4th, and 5th grade (N = 427) were followed biannually through 7th grade. Students' peer-nominated groups were identified. Multilevel modeling was used to examine the independent contributions of…

  9. Patterns of Children's Adrenocortical Reactivity to Interparental Conflict and Associations with Child Adjustment: A Growth Mixture Modeling Approach

    ERIC Educational Resources Information Center

    Koss, Kalsea J.; George, Melissa R. W.; Davies, Patrick T.; Cicchetti, Dante; Cummings, E. Mark; Sturge-Apple, Melissa L.

    2013-01-01

    Examining children's physiological functioning is an important direction for understanding the links between interparental conflict and child adjustment. Utilizing growth mixture modeling, the present study examined children's cortisol reactivity patterns in response to a marital dispute. Analyses revealed three different patterns of cortisol…

  10. Internal Working Models and Adjustment of Physically Abused Children: The Mediating Role of Self-Regulatory Abilities

    ERIC Educational Resources Information Center

    Hawkins, Amy L.; Haskett, Mary E.

    2014-01-01

    Background: Abused children's internal working models (IWM) of relationships are known to relate to their socioemotional adjustment, but mechanisms through which negative representations increase vulnerability to maladjustment have not been explored. We sought to expand the understanding of individual differences in IWM of abused children and…

  11. Adolescent Sibling Relationship Quality and Adjustment: Sibling Trustworthiness and Modeling, as Factors Directly and Indirectly Influencing These Associations

    ERIC Educational Resources Information Center

    Gamble, Wendy C.; Yu, Jeong Jin; Kuehn, Emily D.

    2011-01-01

    The main goal of this study was to examine the direct and moderating effects of trustworthiness and modeling on adolescent siblings' adjustment. Data were collected from 438 families including a mother, a younger sibling in fifth, sixth, or seventh grade (M = 11.6 years), and an older sibling (M = 14.3 years). Respondents completed Web-based…

  12. The Effectiveness of the Strength-Centered Career Adjustment Model for Dual-Career Women in Taiwan

    ERIC Educational Resources Information Center

    Wang, Yu-Chen; Tien, Hsiu-Lan Shelley

    2011-01-01

    The authors investigated the effectiveness of a Strength-Centered Career Adjustment Model for dual-career women (N = 28). Fourteen women in the experimental group received strength-centered career counseling for 6 to 8 sessions; the 14 women in the control group received test services in 1 to 2 sessions. All participants completed the Personal…

  13. A multivariate analysis of observed and modeled biophysical variability on the Bering Sea shelf: Multidecadal hindcasts (1970-2009) and forecasts (2010-2040)

    NASA Astrophysics Data System (ADS)

    Hermann, Albert J.; Gibson, Georgina A.; Bond, Nicholas A.; Curchitser, Enrique N.; Hedstrom, Kate; Cheng, Wei; Wang, Muyin; Stabeno, Phyllis J.; Eisner, Lisa; Cieciel, Kristin D.

    2013-10-01

    Coupled physical/biological models can be used to downscale global climate change to the ecology of subarctic regions, and to explore the bottom-up and top-down effects of that change on the spatial structure of subarctic ecosystems—for example, the relative dominance of large vs. small zooplankton in relation to ice cover. Here we utilize a multivariate statistical approach to extract the emergent properties of a coupled physical/biological hindcast of the Bering Sea for years 1970-2009, which includes multiple episodes of warming and cooling (e.g. the recent cooling of 2005-2009), and a multidecadal regional forecast of the coupled models, driven by an IPCC global model forecast of 2010-2040. Specifically, we employ multivariate empirical orthogonal function (EOF) analysis to derive the spatial covariance among physical and biological timeseries from our simulations. These are compared with EOFs derived from spatially gridded measurements of the region, collected during multiyear field programs. The model replicates observed relationships among temperature and salinity, as well as the observed inverse correlation between temperature and large crustacean zooplankton on the southeastern Bering Sea shelf. Predicted future warming of the shelf is accompanied by a northward shift in both pelagic and benthic biomass.
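
    The multivariate EOF computation used in the study can be sketched with synthetic data (the grid sizes, noise levels and the single shared mode below are fabricated): each variable is standardized, the anomaly matrices are concatenated along the spatial dimension, and the leading singular vectors of the joint matrix give the covarying spatial patterns.

```python
import numpy as np

rng = np.random.default_rng(2)
years, nloc = 40, 25
# Hypothetical gridded anomalies: temperature and large zooplankton share
# one mode of variability with opposite sign (warm -> fewer zooplankton).
pc = rng.normal(size=years)
temp = np.outer(pc, rng.normal(size=nloc)) + 0.3 * rng.normal(size=(years, nloc))
zoop = np.outer(-pc, rng.normal(size=nloc)) + 0.3 * rng.normal(size=(years, nloc))

# Multivariate EOF: standardize each variable, concatenate along space,
# and take the SVD of the joint anomaly matrix.
joint = np.hstack([(temp - temp.mean(0)) / temp.std(0),
                   (zoop - zoop.mean(0)) / zoop.std(0)])
U, s, Vt = np.linalg.svd(joint, full_matrices=False)
var_frac = s ** 2 / np.sum(s ** 2)
print(f"EOF1 explains {100 * var_frac[0]:.1f}% of joint variance")
```

    The columns of U (scaled by s) are the principal component time series and the rows of Vt the joint spatial patterns; the sign relationship between the temperature and zooplankton halves of the leading pattern recovers the inverse correlation described in the abstract.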

  14. Multivariable disturbance observer-based H2 analytical decoupling control design for multivariable systems

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Wang, Yagang; Liu, Yurong; Zhang, Weidong

    2016-01-01

In this paper, an H2 analytical decoupling control scheme with a multivariable disturbance observer is proposed for both stable and unstable multi-input/multi-output (MIMO) systems with multiple time delays. Compared with conventional control strategies, its main merit is that it can effectively improve system performance when the MIMO process suffers from severe model mismatch and strong external disturbances. The design method has three additional advantages. First, the derived controller and observer are given in analytical form, so the design procedure is simple. Second, the orders of the designed controller and observer are low, so they can be implemented easily in practice. Finally, performance and robustness can be adjusted easily by tuning the parameters of the designed controller and observer, which is useful for practical applications. Simulations are provided to illustrate the effectiveness of the proposed control scheme.
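The disturbance-observer mechanism at the heart of such schemes can be illustrated with a toy scalar example (the paper treats the full MIMO, time-delay case analytically; the plant, gains, and disturbance below are hypothetical): the observer estimates the lumped input disturbance from the mismatch between the measured state and a nominal-model prediction, and the controller subtracts that estimate from the input.

```python
import numpy as np

# Nominal first-order plant: x[k+1] = a*x[k] + b*(u[k] + d[k]),
# where d is an unknown step disturbance to be estimated and rejected.
a, b = 0.9, 0.5
d_true = 2.0
x, d_hat = 0.0, 0.0
gain_obs, gain_ctrl = 0.2, 0.5          # observer and feedback gains (tuning knobs)

for k in range(200):
    u = -gain_ctrl * x - d_hat          # feedback plus disturbance compensation
    x_pred = a * x + b * (u + d_hat)    # prediction using the current estimate
    x = a * x + b * (u + d_true)        # true plant with the real disturbance
    d_hat += gain_obs * (x - x_pred) / b  # update estimate from the mismatch

print(f"estimated disturbance: {d_hat:.2f} (true: {d_true}), state: {x:.4f}")
```

Each iteration moves `d_hat` a fraction `gain_obs` of the way toward `d_true`, so the estimate converges geometrically and the state is driven to zero despite the disturbance.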

  15. Multivariate meta-analysis: potential and promise.

    PubMed

    Jackson, Dan; Riley, Richard; White, Ian R

    2011-09-10

The multivariate random-effects model is a generalization of the standard univariate model. Multivariate meta-analysis is becoming more commonly used, and the techniques and related computer software, although continually under development, are now in place. In order to raise awareness of the multivariate methods, and discuss their advantages and disadvantages, we organized a one-day 'Multivariate meta-analysis' event at the Royal Statistical Society. In addition to disseminating the most recent developments, we also received an abundance of comments, concerns, insights, critiques and encouragement. This article provides a balanced account of the day's discourse. By giving others the opportunity to respond to our assessment, we hope to ensure that the various viewpoints and opinions are aired before multivariate meta-analysis simply becomes another widely used de facto method without proper consideration by the medical statistics community. We describe the areas of application that multivariate meta-analysis has found, the methods available, the difficulties typically encountered and the arguments for and against the multivariate methods, using four representative but contrasting examples. We conclude that the multivariate methods can be useful, and in particular can provide estimates with better statistical properties, but also that these benefits come at the price of making more assumptions, which do not result in better inference in every case. Although there is evidence that multivariate meta-analysis has considerable potential, it must be applied even more carefully than its univariate counterpart in practice.

  16. The unusual 2013-2015 drought in South Korea in the context of a multicentury precipitation record: Inferences from a nonstationary, multivariate, Bayesian copula model

    NASA Astrophysics Data System (ADS)

    Kwon, Hyun-Han; Lall, Upmanu; Kim, Seong-Joon

    2016-08-01

    Recently, the Korean peninsula faced severe drought for more than 3 years (2013-2015). Drought in this region is characterized by multidecadal variability, as seen from one of the longest systematic records available in Asia from 1770 to 2015. This paper explores how the return period of the 2013-2015 drought varies over this historical period to provide a context for the changing climate and drought severity in the region. A nonstationary, multivariate, Bayesian copula model for drought severity and duration is developed and applied. Given the wetting trend over the last 50 years, the recent drought appears quite extreme, while such droughts were common in the eighteenth and nineteenth centuries.
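The stationary core of the copula step can be sketched as follows (the paper's model is nonstationary and Bayesian; here only the basic Gaussian-copula fit to severity and duration is shown, with synthetic data and illustrative parameter values): transform each margin to pseudo-observations, map to normal scores, and estimate the copula correlation, from which joint probabilities and bivariate return periods follow.

```python
import numpy as np
from scipy import stats

# Synthetic drought durations and severities with a built-in dependence.
rng = np.random.default_rng(1)
n = 300
duration = stats.gamma.rvs(a=2.0, scale=3.0, size=n, random_state=rng)
severity = 0.8 * duration + stats.norm.rvs(scale=2.0, size=n, random_state=rng)

def pseudo_obs(x):
    """Ranks scaled to (0, 1): nonparametric margins."""
    return stats.rankdata(x) / (len(x) + 1)

# Normal scores; their correlation estimates the Gaussian copula parameter.
z1 = stats.norm.ppf(pseudo_obs(severity))
z2 = stats.norm.ppf(pseudo_obs(duration))
rho = np.corrcoef(z1, z2)[0, 1]

# Joint probability that both variables stay below their 90th percentiles
# under the fitted copula: an ingredient for a bivariate return period.
mvn = stats.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
z90 = stats.norm.ppf(0.9)
p_joint = mvn.cdf([z90, z90])
print(f"copula correlation: {rho:.2f}; joint non-exceedance: {p_joint:.3f}")
```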

  17. Multivariate meta-analysis using individual participant data

    PubMed Central

    Riley, R. D.; Price, M. J.; Jackson, D.; Wardle, M.; Gueyffier, F.; Wang, J.; Staessen, J. A.; White, I. R.

    2016-01-01

    When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment–covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. PMID:26099484

  18. Multivariate meta-analysis using individual participant data.

    PubMed

    Riley, R D; Price, M J; Jackson, D; Wardle, M; Gueyffier, F; Wang, J; Staessen, J A; White, I R

    2015-06-01

    When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment-covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models.
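One ingredient mentioned above, bootstrapping IPD from a single trial to estimate the within-study correlation between effect estimates on mixed outcome types, can be sketched as follows. The trial data, outcome definitions, and effect measures below are simulated and illustrative; they do not come from the hypertension meta-analysis.

```python
import numpy as np

# Simulated IPD from one trial: a continuous outcome (blood-pressure change)
# and a binary outcome (an event), with a randomized treatment indicator.
rng = np.random.default_rng(2)
n = 400
treat = rng.integers(0, 2, size=n)
bp_change = -5.0 * treat + rng.normal(0, 10, size=n)   # continuous outcome
event = rng.random(n) < (0.3 - 0.1 * treat)            # binary outcome

def effects(idx):
    """Treatment effects in one (re)sample: mean difference and log odds ratio."""
    t, c = idx[treat[idx] == 1], idx[treat[idx] == 0]
    mean_diff = bp_change[t].mean() - bp_change[c].mean()
    p1, p0 = event[t].mean(), event[c].mean()
    log_or = np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))
    return mean_diff, log_or

# Nonparametric bootstrap: the correlation of the two effect estimates across
# resamples estimates the within-study correlation needed for the
# multivariate meta-analysis.
boot = np.array([effects(rng.integers(0, n, size=n)) for _ in range(500)])
within_corr = np.corrcoef(boot[:, 0], boot[:, 1])[0, 1]
print(f"estimated within-study correlation: {within_corr:.2f}")
```

Repeating this per trial yields the within-study correlation matrix that published reports usually omit.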

  19. A Proportional Hazards Regression Model for the Sub-distribution with Covariates Adjusted Censoring Weight for Competing Risks Data

    PubMed Central

    HE, PENG; ERIKSSON, FRANK; SCHEIKE, THOMAS H.; ZHANG, MEI-JIE

    2015-01-01

    With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight approach works well for the variance estimator as well. We illustrate our methods with bone marrow transplant data from the Center for International Blood and Marrow Transplant Research (CIBMTR). Here cancer relapse and death in complete remission are two competing risks. PMID:27034534
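The weighting idea the paper builds on can be sketched in its simplest form: inverse-probability-of-censoring weights (IPCW) with the censoring survival G(t) estimated by Kaplan-Meier. Note this is the plain Fine-Gray-style weight; the paper's contribution replaces the Kaplan-Meier step with a Cox model for the censoring distribution, giving each subject a covariate-dependent G(t | Z). The data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
event_time = rng.exponential(5.0, size=n)
cens_time = rng.exponential(8.0, size=n)
time = np.minimum(event_time, cens_time)
observed = event_time <= cens_time              # True = event, False = censored

def km_censoring_survival(time, observed):
    """Kaplan-Meier estimate of the censoring survival function G(t)."""
    order = np.argsort(time)
    is_censoring = ~observed[order]             # censorings are the 'events' here
    at_risk = np.arange(len(time), 0, -1)
    factors = np.where(is_censoring, 1.0 - 1.0 / at_risk, 1.0)
    return time[order], np.cumprod(factors)

t_grid, g_hat = km_censoring_survival(time, observed)
g_at = np.interp(time, t_grid, g_hat)           # G evaluated at each subject's time
# Each observed event is weighted by 1/G: larger weights where censoring has
# removed more comparable subjects from the risk set.
weights = np.where(observed, 1.0 / np.clip(g_at, 1e-8, None), 0.0)
print(f"mean IPCW weight among events: {weights[observed].mean():.2f}")
```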

  20. A nonlinearized multivariate dominant factor-based partial least squares (PLS) model for coal analysis by using laser-induced breakdown spectroscopy.

    PubMed

    Feng, Jie; Wang, Zhe; Li, Lizhi; Li, Zheng; Ni, Weidou

    2013-03-01

A nonlinearized multivariate dominant-factor-based partial least-squares (PLS) model was applied to coal elemental concentration measurement. For determining the C concentration in bituminous coal, the intensities of multiple characteristic lines of the main elements in coal were used to construct a comprehensive dominant factor that provides the main concentration results. A secondary PLS, applied thereafter, further corrects the model results using the entire spectral information. In the dominant-factor extraction, nonlinear transformations of line intensities (based on physical mechanisms) were embedded in the linear PLS to describe nonlinear self-absorption and inter-element interference more effectively and accurately. Following the empirical expression of self-absorption and a Taylor expansion, nonlinear transformations of the atomic and ionic line intensities of C were used to model self-absorption. The line intensities of other elements, O and N, were then taken into account for inter-element interference, considering the possible recombination of C with O and N particles. The specific characteristics of coal analysis by laser-induced breakdown spectroscopy (LIBS) were also discussed and considered in constructing the multivariate dominant factor. The proposed model achieved much better prediction performance than conventional PLS. Compared with our previous, already improved dominant-factor-based PLS model, the present model obtained the same calibration quality while decreasing the root mean square error of prediction (RMSEP) from 4.47% to 3.77%. Furthermore, when the number of principal components was chosen by leave-one-out cross-validation and the L-curve method, which avoid the overfitting associated with the minimum-RMSEP criterion, the present model also performed better for different splits of calibration and prediction samples, demonstrating its robustness.

  1. A new proposal for multivariable modelling of time-varying effects in survival data based on fractional polynomial time-transformation.

    PubMed

    Sauerbrei, Willi; Royston, Patrick; Look, Maxime

    2007-06-01

    The Cox proportional hazards model has become the standard for the analysis of survival time data in cancer and other chronic diseases. In most studies, proportional hazards (PH) are assumed for covariate effects. With long-term follow-up, the PH assumption may be violated, leading to poor model fit. To accommodate non-PH effects, we introduce a new procedure, MFPT, an extension of the multivariable fractional polynomial (MFP) approach, to do the following: (1) select influential variables; (2) determine a sensible dose-response function for continuous variables; (3) investigate time-varying effects; (4) model such time-varying effects on a continuous scale. Assuming PH initially, we start with a detailed model-building step, including a search for possible non-linear functions for continuous covariates. Sometimes a variable with a strong short-term effect may appear weak or non-influential if 'averaged' over time under the PH assumption. To protect against omitting such variables, we repeat the analysis over a restricted time-interval. Any additional prognostic variables identified by this second analysis are added to create our final time-fixed multivariable model. Using a forward-selection algorithm we search for possible improvements in fit by adding time-varying covariates. The first part to create a final time-fixed model does not require the use of MFP. A model may be given from 'outside' or a different strategy may be preferred for this part. This broadens the scope of the time-varying part. To motivate and illustrate the methodology, we create prognostic models from a large database of patients with primary breast cancer. Non-linear time-fixed effects are found for progesterone receptor status and number of positive lymph nodes. Highly statistically significant time-varying effects are present for progesterone receptor status and tumour size.
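The fractional polynomial building block behind MFPT can be sketched directly: the MFP family draws powers from a standard set, with the power 0 read as log(t), and an FP2 time-varying effect is a linear combination of two such transforms of time. The coefficients below are illustrative, not estimates from the breast cancer analysis.

```python
import numpy as np

# Standard fractional polynomial (FP) powers used by the MFP family;
# by convention, t**0 denotes log(t).
FP_POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)

def fp_transform(t, power):
    """Fractional polynomial transform of time; t must be positive."""
    t = np.asarray(t, dtype=float)
    return np.log(t) if power == 0 else t ** power

# Example FP2 shape for a time-varying log hazard ratio:
# beta(t) = b0 + b1*log(t) + b2*t**(-0.5)   (coefficients are illustrative).
t = np.linspace(0.5, 10, 5)
beta = 0.2 + 0.5 * fp_transform(t, 0) - 1.0 * fp_transform(t, -0.5)
print(np.round(beta, 2))
```

In the MFPT procedure, the interaction of such transforms with a covariate enters the Cox model, letting the covariate's effect vary smoothly over follow-up time rather than forcing proportional hazards.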

  2. Data Assimilation and Adjusted Spherical Harmonic Model of VTEC Map over Thailand

    NASA Astrophysics Data System (ADS)

    Klinngam, Somjai; Maruyama, Takashi; Tsugawa, Takuya; Ishii, Mamoru; Supnithi, Pornchai; Chiablaem, Athiwat

    2016-07-01

The global navigation satellite system (GNSS) and high frequency (HF) communication are vulnerable to ionospheric irregularities, especially when the signal travels through the low-latitude region and around the magnetic equator, known as the equatorial ionization anomaly (EIA) region. In order to study the ionospheric effects on communications performance in this region, a regional map of observed total electron content (TEC) can show the characteristics and irregularities of the ionosphere. In this work, we develop a two-dimensional (2D) map of vertical TEC (VTEC) over Thailand using the adjusted spherical harmonic model (ASHM) and a data assimilation technique. We calculate the VTEC from receiver independent exchange (RINEX) files recorded by dual-frequency global positioning system (GPS) receivers on July 8th, 2012 (a quiet day) at 12 stations around Thailand (0° to 25°N, 95°E to 110°E). These stations are managed by the Department of Public Works and Town & Country Planning (DPT), Thailand, the South East Asia Low-latitude Ionospheric Network (SEALION) project operated by the National Institute of Information and Communications Technology (NICT), Japan, and King Mongkut's Institute of Technology Ladkrabang (KMITL). We compute the median observed VTEC (OBS-VTEC) in grids with a spatial resolution of 2.5° x 5° in latitude and longitude and a time resolution of 2 hours. We assimilate the OBS-VTEC with the VTEC estimated from the International Reference Ionosphere model (IRI-VTEC) as well as from the ionosphere map exchange (IONEX) files provided by the International GNSS Service (IGS-VTEC). The results show that the estimation of the 15-degree ASHM can be improved when both IRI-VTEC and IGS-VTEC are weighted by latitude-dependent factors before being assimilated with the OBS-VTEC. However, the IRI-VTEC assimilation improves the ASHM estimation more than the IGS-VTEC assimilation. Acknowledgment: This work is partially funded by the
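The assimilation step described above, a background model field corrected toward sparse observations after a latitude-dependent weighting, can be sketched in one dimension. All values, the weighting factor, and the Gaussian-influence update below are illustrative simplifications, not the paper's ASHM formulation.

```python
import numpy as np

# Background model VTEC on a latitude grid (stand-in for IRI-VTEC, in TECU),
# scaled by a hypothetical latitude-dependent factor before assimilation.
lat = np.linspace(0, 25, 11)                 # grid latitudes, degrees N
background = 40 + 0.8 * lat
factor = 1.0 + 0.01 * (lat - lat.mean())     # hypothetical latitude weighting
background = factor * background

obs_lat = np.array([5.0, 12.5, 20.0])        # station latitudes
obs_vtec = np.array([46.0, 55.0, 60.0])      # observed VTEC (TECU)

# Simple optimal-interpolation-style update: nudge each grid point toward
# nearby observations with Gaussian distance weighting.
length_scale, obs_weight = 5.0, 0.8
innovation = obs_vtec - np.interp(obs_lat, lat, background)
w = np.exp(-((lat[:, None] - obs_lat[None, :]) / length_scale) ** 2)
analysis = background + obs_weight * (w * innovation).sum(axis=1) / w.sum(axis=1)
print(np.round(analysis, 1))
```

The analysis field matches the observations near the stations and relaxes back to the (weighted) background far from them.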

  3. Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Parmar, Kulwinder Singh

    2016-03-01

This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and the M5 model tree (M5Tree) in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin on the Yamuna River in Delhi, India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy and performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). Relative to the LSSVM and M5Tree models, the MARS model decreased the average root mean square error (RMSE) by 1.47% and 19.1%, respectively. Adding the TC input did not increase the models' accuracy in modeling COD, while adding the FC and pH inputs generally decreased their accuracy. The overall results indicated that the MARS and LSSVM models can be used successfully to estimate monthly river water pollution levels using the AMM, TKN and WT parameters as inputs.
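A comparison of this kind can be sketched with scikit-learn, with two caveats: LSSVM and MARS are not in scikit-learn, so kernel ridge regression (closely related to LSSVM) and a plain regression tree (a crude stand-in for M5Tree) are used instead, and the data below are synthetic stand-ins for the scaled AMM, TKN and WT inputs and the COD target.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 200
X = rng.uniform(0, 1, size=(n, 3))          # stand-ins for scaled AMM, TKN, WT
cod = (20 + 15 * X[:, 0] + 10 * X[:, 1] ** 2 + 5 * np.sin(3 * X[:, 2])
       + rng.normal(0, 1.0, size=n))        # synthetic COD target

models = {
    "kernel ridge (LSSVM-like)": KernelRidge(kernel="rbf", alpha=0.1, gamma=1.0),
    "regression tree (M5Tree-like)": DecisionTreeRegressor(max_depth=4, random_state=0),
}
results = {}
for name, model in models.items():
    # 5-fold cross-validated RMSE for each candidate model.
    scores = cross_val_score(model, X, cod, cv=5, scoring="neg_mean_squared_error")
    results[name] = np.sqrt(-scores.mean())
    print(f"{name}: CV RMSE = {results[name]:.2f}")
```

As in the study, the kernel method tends to track the smooth response more closely than a shallow tree.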

  4. Moment Reconstruction and Moment-Adjusted Imputation When Exposure is Generated by a Complex, Nonlinear Random Effects Modeling Process

    PubMed Central

    Potgieter, Cornelis J.; Wei, Rubin; Kipnis, Victor; Freedman, Laurence S.; Carroll, Raymond J.

    2016-01-01

For the classical, homoscedastic measurement error model, moment reconstruction (Freedman et al., 2004, 2008) and moment-adjusted imputation (Thomas et al., 2011) are appealing, computationally simple imputation-like methods for general model fitting. Like classical regression calibration, the idea is to replace the unobserved variable subject to measurement error with a proxy that can be used in a variety of analyses. Moment reconstruction and moment-adjusted imputation differ from regression calibration in that they attempt to match multiple features of the latent variable, and also to match some of the latent variable’s relationships with the response and additional covariates. In this note, we consider a problem where true exposure is generated by a complex, nonlinear random effects modeling process, and develop analogues of moment reconstruction and moment-adjusted imputation for this case. This general model includes classical measurement errors, Berkson measurement errors, mixtures of Berkson and classical errors and problems that are not measurement error problems, but also cases where the data generating process for true exposure is a complex, nonlinear random effects modeling process. The methods are illustrated using the National Institutes of Health-AARP Diet and Health Study where the latent variable is a dietary pattern score called the Healthy Eating Index - 2005. We also show how our general model includes methods used in radiation epidemiology as a special case. Simulations are used to illustrate the methods. PMID:27061196
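The moment-matching idea behind these methods can be shown in its simplest classical-error form (the paper's contribution is the extension to complex nonlinear random-effects exposures, not shown here): given W = X + U with known error variance, rescale W so the proxy matches the first two moments of the true exposure X. All values below are simulated.

```python
import numpy as np

rng = np.random.default_rng(6)
n, var_u = 5000, 0.5
x = rng.normal(2.0, 1.0, size=n)                   # true exposure (unobserved)
w = x + rng.normal(0.0, np.sqrt(var_u), size=n)    # error-prone measurement

# Under classical error: mean(X) = mean(W) and var(X) = var(W) - var(U),
# so shrink W about its mean to match the moments of X.
lam = np.sqrt((w.var() - var_u) / w.var())
x_proxy = w.mean() + lam * (w - w.mean())

print(f"var(W)={w.var():.2f}  var(proxy)={x_proxy.var():.2f}  var(X)={x.var():.2f}")
```

Unlike regression calibration, which matches only conditional means, moment reconstruction extends this matching to the joint moments with the response; the sketch shows only the marginal version.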

  5. Remote sensing estimation of the total phosphorus concentration in a large lake using band combinations and regional multivariate statistical modeling techniques.

    PubMed

    Gao, Yongnian; Gao, Junfeng; Yin, Hongbin; Liu, Chuansheng; Xia, Ting; Wang, Jing; Huang, Qi

    2015-03-15

Remote sensing has been widely used for water quality monitoring, but most such studies have focused on a few water quality variables, such as chlorophyll-a, turbidity, and total suspended solids, which are typically considered optically active. Estimating the phosphorus concentration in water from remote sensing remains a challenge. The total phosphorus (TP) in lakes has been estimated from remotely sensed observations, primarily using a simple individual band ratio, or its natural logarithm, and statistical regression against field TP data and spectral reflectance. In this study, we investigated the possibility of establishing a spatial modeling scheme to estimate the TP concentration of a large lake from multi-spectral satellite imagery using band combinations and regional multivariate statistical modeling techniques, and we tested the applicability of the scheme. The results showed that HJ-1A CCD multi-spectral satellite imagery can be used to estimate the TP concentration in a lake. Correlation and regression analysis showed a highly significant positive relationship between the TP concentration and certain remotely sensed combination variables. The proposed modeling scheme estimated the TP concentration in the large lake more accurately than the traditional individual band ratio method and the whole-lake-scale regression-modeling scheme. The TP concentration values showed clear spatial variability: high in western Lake Chaohu and relatively low in eastern Lake Chaohu. The northernmost portion, the northeastern coastal zone and the southeastern portion of western Lake Chaohu had the highest TP concentrations, while the other regions had the lowest values, except for the coastal zone of eastern Lake Chaohu. These results strongly suggested that the proposed modeling scheme, i.e., the band combinations and the regional multivariate
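The band-combination step can be sketched as follows: build candidate predictors from band reflectances (ratios, differences, log-ratios) and regress field-measured TP on them. The band values, TP relationship, and choice of combinations below are simulated and illustrative; the HJ-1A CCD sensor itself has four visible/near-infrared bands.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Simulated reflectances for four bands and a TP field driven by one ratio.
rng = np.random.default_rng(7)
n = 120
b1, b2, b3, b4 = rng.uniform(0.02, 0.2, size=(4, n))
tp = 0.05 + 0.4 * (b3 / b2) + rng.normal(0, 0.02, size=n)   # mg/L, synthetic

# Candidate band combinations assembled into a design matrix.
X = np.column_stack([b3 / b2, b4 / b1, b3 - b2, np.log(b3 / b2)])
model = LinearRegression().fit(X, tp)
r2 = model.score(X, tp)
print(f"R^2 of the band-combination regression: {r2:.2f}")
```

The "regional" part of the paper's scheme amounts to fitting such regressions separately per lake region rather than once for the whole lake.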

  6. Development of a multivariate calibration model for the determination of dry extract content in Brazilian commercial bee propolis extracts through UV-Vis spectroscopy

    NASA Astrophysics Data System (ADS)

    Barbeira, Paulo J. S.; Paganotti, Rosilene S. N.; Ássimos, Ariane A.

    2013-10-01

This study aimed to determine the dry extract content of commercial alcoholic extracts of bee propolis through Partial Least Squares (PLS) multivariate calibration and electronic spectroscopy. The PLS model provided a good prediction of dry extract content in commercial alcoholic extracts of bee propolis in the range of 2.7 to 16.8% (m/v), with the advantage of being less laborious and faster than the traditional gravimetric methodology. The PLS model was optimized with outlier detection tests according to ASTM E 1655-05. This study showed that a centrifugation stage is extremely important to avoid the presence of waxes, resulting in a more accurate model. Around 50% of the analyzed samples presented a dry extract content lower than the value established by Brazilian legislation; in most cases, the values found were differe