Sample records for dynamic regression models

  1. Hierarchical cluster-based partial least squares regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models.

    PubMed

    Tøndel, Kristin; Indahl, Ulf G; Gjuvsland, Arne B; Vik, Jon Olav; Hunter, Peter; Omholt, Stig W; Martens, Harald

    2011-06-01

    Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems.
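A minimal sketch of the local-regression idea in the record above, using scikit-learn's KMeans as a crisp stand-in for the paper's fuzzy C-means step and PLSRegression for the local metamodels; the data, cluster count, and component count are illustrative assumptions, not taken from the paper.

```python
# Cluster-based local PLS regression: a simplified HC-PLSR-style sketch.
# KMeans is used as a crisp stand-in for the fuzzy C-means step in the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 4))                        # model parameters (inputs)
y = np.sin(X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=500)   # nonlinear "trajectory feature"

clusterer = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
local_models = {}
for k in range(3):
    mask = clusterer.labels_ == k
    local_models[k] = PLSRegression(n_components=2).fit(X[mask], y[mask])

def predict(X_new):
    """Route each new point to its cluster's local PLSR model."""
    labels = clusterer.predict(X_new)
    out = np.empty(len(X_new))
    for k, model in local_models.items():
        sel = labels == k
        if sel.any():
            out[sel] = model.predict(X_new[sel]).ravel()
    return out

print(predict(X[:5]), y[:5])
```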

  2. Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models

    PubMed Central

    2011-01-01

    Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. Conclusions HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems. PMID:21627852

  3. Prediction of dynamical systems by symbolic regression

    NASA Astrophysics Data System (ADS)

    Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.

    2016-07-01

    We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.
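The "fast function extraction" style of symbolic regression mentioned above (a generalized linear regression over a library of candidate functions) can be sketched as below; the damped-oscillator data, the candidate library, and the use of scikit-learn's Lasso for sparse selection are illustrative assumptions rather than the authors' implementation.

```python
# FFX-style generalized linear regression: build a library of candidate
# functions of the state and select a sparse combination that predicts the
# acceleration of a (lightly damped) harmonic oscillator.
import numpy as np
from sklearn.linear_model import Lasso

t = np.linspace(0, 20, 2000)
x = np.exp(-0.05 * t) * np.cos(t)     # position measurements
v = np.gradient(x, t)                 # numerical velocity
a = np.gradient(v, t)                 # numerical acceleration = regression target

# Candidate function library evaluated on the data.
library = {
    "x": x, "v": v, "x*v": x * v,
    "x**2": x**2, "v**2": v**2,
    "sin(x)": np.sin(x), "cos(x)": np.cos(x),
}
names = list(library)
Theta = np.column_stack([library[n] for n in names])

model = Lasso(alpha=1e-3, max_iter=50_000).fit(Theta, a)
for name, coef in zip(names, model.coef_):
    if abs(coef) > 1e-2:
        print(f"a ~ {coef:+.3f} * {name}")
# For this oscillator the true relation is approximately a = -x - 0.1*v, so the
# selected terms should be dominated by x and v with roughly those weights.
```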

  4. Dynamic prediction in functional concurrent regression with an application to child growth.

    PubMed

    Leroux, Andrew; Xiao, Luo; Crainiceanu, Ciprian; Checkley, William

    2018-04-15

    In many studies, it is of interest to predict the future trajectory of subjects based on their historical data, referred to as dynamic prediction. Mixed effects models have traditionally been used for dynamic prediction. However, the commonly used random intercept and slope model is often not sufficiently flexible for modeling subject-specific trajectories. In addition, there may be useful exposures/predictors of interest that are measured concurrently with the outcome, complicating dynamic prediction. To address these problems, we propose a dynamic functional concurrent regression model to handle the case where both the functional response and the functional predictors are irregularly measured. Currently, such a model cannot be fit by existing software. We apply the model to dynamically predict children's length conditional on prior length, weight, and baseline covariates. Inference on model parameters and subject-specific trajectories is conducted using the mixed effects representation of the proposed model. An extensive simulation study shows that the dynamic functional regression model provides more accurate estimation and inference than existing methods. Methods are supported by fast, flexible, open source software that uses heavily tested smoothing techniques. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  5. System dynamic modeling: an alternative method for budgeting.

    PubMed

    Srijariya, Witsanuchai; Riewpaiboon, Arthorn; Chaikledkaew, Usa

    2008-03-01

    The aim was to construct, validate, and simulate a system dynamic financial model and compare it against the conventional method. The study was a cross-sectional analysis of secondary data retrieved from the National Health Security Office (NHSO) in the fiscal year 2004. The sample consisted of all emergency patients who received emergency services outside their registered hospital catchment area. The dependent variable used was the amount of reimbursed money. Two types of model were constructed, namely, the system dynamic model using the STELLA software and the multiple linear regression model. The outputs of both methods were compared. The study covered 284,716 patients from various levels of providers. The system dynamic model had the capability of producing various types of outputs, for example, financial and graphical analyses. For the regression analysis, statistically significant predictors comprised service type (outpatient or inpatient), operating procedures, length of stay, illness type (accident or not), hospital characteristics, age, and hospital location (adjusted R(2) = 0.74). The total budget estimated using the system dynamic model and the regression model was US$12,159,614.38 and US$7,301,217.18, respectively, whereas the actual NHSO reimbursement cost was US$12,840,805.69. The study illustrated that the system dynamic model is a useful financial management tool, although it is not easy to construct. The model is not only more accurate in prediction but also more capable of analyzing large and complex real-world situations than the conventional method.
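For readers unfamiliar with the term, a system dynamic model of the STELLA kind is essentially a set of stocks updated by flows over simulated time; the toy reimbursement-budget sketch below illustrates the pattern, with every number and the single-stock structure invented for illustration.

```python
# Toy stock-and-flow simulation of a reimbursement budget (system dynamics style).
# All parameters are invented; the NHSO model in the paper was built in STELLA.
monthly_claims = 23_000            # claims per month (hypothetical)
cost_per_claim = 45.0              # US$ per claim (hypothetical)
budget_stock = 0.0                 # accumulated reimbursement (the "stock")

history = []
for month in range(12):
    inflow = monthly_claims * cost_per_claim      # flow into the stock this month
    budget_stock += inflow
    history.append(budget_stock)

print(f"Simulated annual reimbursement: US${budget_stock:,.2f}")
```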

  6. Quantum regression theorem and non-Markovianity of quantum dynamics

    NASA Astrophysics Data System (ADS)

    Guarnieri, Giacomo; Smirne, Andrea; Vacchini, Bassano

    2014-08-01

    We explore the connection between two recently introduced notions of non-Markovian quantum dynamics and the validity of the so-called quantum regression theorem. While non-Markovianity of a quantum dynamics has been defined by looking at the behavior in time of the statistical operator, which determines the evolution of mean values, the quantum regression theorem makes statements about the behavior of system correlation functions of order two and higher. The comparison relies on an estimate of the validity of the quantum regression hypothesis, which can be obtained by exactly evaluating two-point correlation functions. To this aim we consider a qubit undergoing dephasing due to interaction with a bosonic bath, comparing the exact evaluation of the non-Markovianity measures with the violation of the quantum regression theorem for a class of spectral densities. We further study a photonic dephasing model, recently exploited for the experimental measurement of non-Markovianity. It appears that while a non-Markovian dynamics according to either definition brings with it a violation of the regression hypothesis, even Markovian dynamics can lead to a failure of the regression relation.

  7. Restoration of Monotonicity Respecting in Dynamic Regression

    PubMed Central

    Huang, Yijian

    2017-01-01

    Dynamic regression models, including the quantile regression model and Aalen’s additive hazards model, are widely adopted to investigate evolving covariate effects. Yet lack of monotonicity respecting with standard estimation procedures remains an outstanding issue. Advances have recently been made, but none provides a complete resolution. In this article, we propose a novel adaptive interpolation method to restore monotonicity respecting, by successively identifying and then interpolating nearest monotonicity-respecting points of an original estimator. Under mild regularity conditions, the resulting regression coefficient estimator is shown to be asymptotically equivalent to the original. Our numerical studies have demonstrated that the proposed estimator is much smoother and may have better finite-sample efficiency than the original as well as, where available (only in special cases), other competing monotonicity-respecting estimators. Illustration with a clinical study is provided. PMID:29430068

  8. The dynamic correlation between policy uncertainty and stock market returns in China

    NASA Astrophysics Data System (ADS)

    Yang, Miao; Jiang, Zhi-Qiang

    2016-11-01

    We examine the dynamic correlation between government policy uncertainty and Chinese stock market returns in the period from January 1995 to December 2014. We find that the stock market is significantly correlated with policy uncertainty based on the results of the Vector Auto Regression (VAR) and Structural Vector Auto Regression (SVAR) models. In contrast, the results of the Dynamic Conditional Correlation Generalized Multivariate Autoregressive Conditional Heteroscedasticity (DCC-MGARCH) model surprisingly show a low dynamic correlation coefficient between policy uncertainty and market returns, suggesting that the fluctuations of each variable are greatly influenced by their values in the preceding period. Our analysis deepens the understanding of the dynamic relationship between the stock market and fiscal and monetary policy.
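Fitting a VAR of the kind used above is routine with statsmodels; the sketch below uses synthetic monthly series as placeholders for the policy-uncertainty and return data, and the lag selection by AIC is an assumed choice.

```python
# Fit a small VAR to two synthetic monthly series, mimicking the
# policy-uncertainty vs. stock-return setup. Data are random placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n = 240                                     # 20 years of monthly observations
uncertainty = np.zeros(n)
for t in range(1, n):                       # AR(1) "policy uncertainty" index
    uncertainty[t] = 0.7 * uncertainty[t - 1] + rng.normal()
returns = rng.normal(size=n)
returns[1:] += -0.2 * uncertainty[:-1]      # returns respond to lagged uncertainty

df = pd.DataFrame({"policy_uncertainty": uncertainty, "stock_return": returns})
results = VAR(df).fit(maxlags=6, ic="aic")  # lag order chosen by AIC
print(results.summary())
print(results.test_causality("stock_return", ["policy_uncertainty"]).summary())
```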

  9. Dynamic Web Pages: Performance Impact on Web Servers.

    ERIC Educational Resources Information Center

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)

  10. LOGISTIC NETWORK REGRESSION FOR SCALABLE ANALYSIS OF NETWORKS WITH JOINT EDGE/VERTEX DYNAMICS

    PubMed Central

    Almquist, Zack W.; Butts, Carter T.

    2015-01-01

    Change in group size and composition has long been an important area of research in the social sciences. Similarly, interest in interaction dynamics has a long history in sociology and social psychology. However, the effects of endogenous group change on interaction dynamics are a surprisingly understudied area. One way to explore these relationships is through social network models. Network dynamics may be viewed as a process of change in the edge structure of a network, in the vertex set on which edges are defined, or in both simultaneously. Although early studies of such processes were primarily descriptive, recent work on this topic has increasingly turned to formal statistical models. Although showing great promise, many of these modern dynamic models are computationally intensive and scale very poorly in the size of the network under study and/or the number of time points considered. Likewise, currently used models focus on edge dynamics, with little support for endogenously changing vertex sets. Here, the authors show how an existing approach based on logistic network regression can be extended to serve as a highly scalable framework for modeling large networks with dynamic vertex sets. The authors place this approach within a general dynamic exponential family (exponential-family random graph modeling) context, clarifying the assumptions underlying the framework (and providing a clear path for extensions), and they show how model assessment methods for cross-sectional networks can be extended to the dynamic case. Finally, the authors illustrate this approach on a classic data set involving interactions among windsurfers on a California beach. PMID:26120218
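At its core, logistic network regression treats each dyad at each time step as a binary outcome modeled by logistic regression on covariates such as the lagged edge state; the toy sketch below (synthetic directed network, invented persistence and homophily covariates) illustrates that pooling with scikit-learn.

```python
# Toy dynamic logistic network regression: P(edge_ij at t) is modeled from the
# lagged edge state and a dyadic covariate, pooling ordered dyads over time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_nodes, n_steps = 30, 20
groups = rng.integers(0, 2, n_nodes)
same_group = (groups[:, None] == groups[None, :]).astype(float)   # homophily covariate

adj = [rng.random((n_nodes, n_nodes)) < 0.1]                      # initial adjacency
for _ in range(n_steps):
    logit = -2.5 + 2.0 * adj[-1] + 1.0 * same_group               # persistence + homophily
    prob = 1.0 / (1.0 + np.exp(-logit))
    adj.append(rng.random((n_nodes, n_nodes)) < prob)

mask = ~np.eye(n_nodes, dtype=bool)                               # ordered dyads, no loops
X, y = [], []
for t in range(1, len(adj)):
    X.append(np.column_stack([adj[t - 1][mask], same_group[mask]]))
    y.append(adj[t][mask])
X, y = np.vstack(X).astype(float), np.concatenate(y).astype(int)

fit = LogisticRegression().fit(X, y)
print("coefficients (lagged edge, same group):", fit.coef_.ravel())
print("intercept:", fit.intercept_[0])
```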

  11. LOGISTIC NETWORK REGRESSION FOR SCALABLE ANALYSIS OF NETWORKS WITH JOINT EDGE/VERTEX DYNAMICS.

    PubMed

    Almquist, Zack W; Butts, Carter T

    2014-08-01

    Change in group size and composition has long been an important area of research in the social sciences. Similarly, interest in interaction dynamics has a long history in sociology and social psychology. However, the effects of endogenous group change on interaction dynamics are a surprisingly understudied area. One way to explore these relationships is through social network models. Network dynamics may be viewed as a process of change in the edge structure of a network, in the vertex set on which edges are defined, or in both simultaneously. Although early studies of such processes were primarily descriptive, recent work on this topic has increasingly turned to formal statistical models. Although showing great promise, many of these modern dynamic models are computationally intensive and scale very poorly in the size of the network under study and/or the number of time points considered. Likewise, currently used models focus on edge dynamics, with little support for endogenously changing vertex sets. Here, the authors show how an existing approach based on logistic network regression can be extended to serve as a highly scalable framework for modeling large networks with dynamic vertex sets. The authors place this approach within a general dynamic exponential family (exponential-family random graph modeling) context, clarifying the assumptions underlying the framework (and providing a clear path for extensions), and they show how model assessment methods for cross-sectional networks can be extended to the dynamic case. Finally, the authors illustrate this approach on a classic data set involving interactions among windsurfers on a California beach.

  12. A Linear Dynamical Systems Approach to Streamflow Reconstruction Reveals History of Regime Shifts in Northern Thailand

    NASA Astrophysics Data System (ADS)

    Nguyen, Hung T. T.; Galelli, Stefano

    2018-03-01

    Catchment dynamics is not often modeled in streamflow reconstruction studies; yet, the streamflow generation process depends on both catchment state and climatic inputs. To explicitly account for this interaction, we contribute a linear dynamic model, in which streamflow is a function of both catchment state (i.e., wet/dry) and paleoclimatic proxies. The model is learned using a novel variant of the Expectation-Maximization algorithm, and it is used with a paleo drought record—the Monsoon Asia Drought Atlas—to reconstruct 406 years of streamflow for the Ping River (northern Thailand). Results for the instrumental period show that the dynamic model has higher accuracy than conventional linear regression; all performance scores improve by 45-497%. Furthermore, the reconstructed trajectory of the state variable provides valuable insights about the catchment history—e.g., regime-like behavior—thereby complementing the information contained in the reconstructed streamflow time series. The proposed technique can replace linear regression, since it only requires information on streamflow and climatic proxies (e.g., tree-rings, drought indices); furthermore, it is capable of readily generating stochastic streamflow replicates. With a marginal increase in computational requirements, the dynamic model brings more desirable features and value to streamflow reconstructions.
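A hand-rolled sketch of the linear dynamical system idea: a scalar hidden catchment state evolves over time and (log) streamflow is observed as a noisy linear function of that state plus a climate proxy. The Kalman filter below is the generic ingredient; the paper's Expectation-Maximization parameter learning is omitted and all numbers are illustrative.

```python
# Minimal linear dynamical system: hidden catchment "wetness" state x_t,
# observed (log) streamflow y_t = c * x_t + d * proxy_t + noise.
# A standard Kalman filter recovers the state trajectory; the EM variant used
# in the paper to learn the parameters is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
T = 200
a, c, d = 0.9, 1.0, 0.5                  # state transition, emission, proxy weight
q, r = 0.3, 0.2                          # process and observation noise variances
proxy = rng.normal(size=T)               # stand-in for a drought-index proxy

x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + rng.normal(scale=np.sqrt(q))
    y[t] = c * x_true[t] + d * proxy[t] + rng.normal(scale=np.sqrt(r))

# Kalman filter for the scalar model.
x_hat, P = np.zeros(T), np.ones(T)
for t in range(1, T):
    x_pred, P_pred = a * x_hat[t - 1], a**2 * P[t - 1] + q          # predict
    K = P_pred * c / (c**2 * P_pred + r)                            # Kalman gain
    x_hat[t] = x_pred + K * (y[t] - c * x_pred - d * proxy[t])      # update
    P[t] = (1 - K * c) * P_pred

print("correlation(filtered state, true state):", np.corrcoef(x_hat, x_true)[0, 1])
```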

  13. Dynamic Network Logistic Regression: A Logistic Choice Analysis of Inter- and Intra-Group Blog Citation Dynamics in the 2004 US Presidential Election

    PubMed Central

    2013-01-01

    Methods for analysis of network dynamics have seen great progress in the past decade. This article shows how Dynamic Network Logistic Regression techniques (a special case of the Temporal Exponential Random Graph Models) can be used to implement decision theoretic models for network dynamics in a panel data context. We also provide practical heuristics for model building and assessment. We illustrate the power of these techniques by applying them to a dynamic blog network sampled during the 2004 US presidential election cycle. This is a particularly interesting case because it marks the debut of Internet-based media such as blogs and social networking web sites as institutionally recognized features of the American political landscape. Using a longitudinal sample of all Democratic National Convention/Republican National Convention–designated blog citation networks, we are able to test the influence of various strategic, institutional, and balance-theoretic mechanisms as well as exogenous factors such as seasonality and political events on the propensity of blogs to cite one another over time. Using a combination of deviance-based model selection criteria and simulation-based model adequacy tests, we identify the combination of processes that best characterizes the choice behavior of the contending blogs. PMID:24143060

  14. Climate Prediction Center - Seasonal Outlook

    Science.gov Websites

    Fragmentary website excerpt: the outlook discusses seasonal climate variability, including ENSO and soil moisture, and state-of-the-art dynamical models across parts of the east-central CONUS centered on the Mississippi River, citing negative soil moisture anomalies, lagged ENSO regressions, and dynamical model guidance.

  15. Reconstruction of missing daily streamflow data using dynamic regression models

    NASA Astrophysics Data System (ADS)

    Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault

    2015-12-01

    River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring. Even very short gaps in these records can lead to markedly different analysis outputs. Reconstructing missing values in incomplete data sets is therefore an important, and challenging, step for environmental modelling, engineering, and research applications. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access only to daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models, called a dynamic regression model. This model uses the linear relationship between neighboring, correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series from the Durance River watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
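A dynamic regression in this sense is a linear regression on a neighbouring (donor) station whose residuals follow an ARIMA process; the sketch below uses statsmodels' SARIMAX on synthetic series, with an illustrative ARIMA order.

```python
# Regression with ARIMA errors ("dynamic regression"): the target streamflow is
# regressed on a correlated neighbour station, and the residual term is given
# an ARIMA(1,0,1) structure. Data and the ARIMA order are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 730
neighbour = 50 + 10 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.normal(scale=2, size=n)
noise = np.zeros(n)
for t in range(1, n):                            # autocorrelated residual structure
    noise[t] = 0.8 * noise[t - 1] + rng.normal()
target = 5 + 0.9 * neighbour + noise

model = sm.tsa.SARIMAX(target, exog=neighbour, order=(1, 0, 1), trend="c")
result = model.fit(disp=False)
print(result.params)

# Reconstruct a hypothetical gap: in-sample prediction over the gap window uses
# the stored neighbour-station values as the exogenous driver.
filled = result.predict(start=400, end=429)
```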

  16. Bayesian dynamic regression models for interval censored survival data with application to children dental health.

    PubMed

    Wang, Xiaojing; Chen, Ming-Hui; Yan, Jun

    2013-07-01

    Cox models with time-varying coefficients offer great flexibility in capturing the temporal dynamics of covariate effects on event times, which could be hidden from a Cox proportional hazards model. Methodology development for varying coefficient Cox models, however, has been largely limited to right censored data; only limited work on interval censored data has been done. In most existing methods for varying coefficient models, analysts need to specify which covariate coefficients are time-varying and which are not at the time of fitting. We propose a dynamic Cox regression model for interval censored data in a Bayesian framework, where the coefficient curves are piecewise constant but the number of pieces and the jump points are covariate specific and estimated from the data. The model automatically determines the extent to which temporal dynamics is needed for each covariate, resulting in smoother and more stable curve estimates. The posterior computation is carried out via an efficient reversible jump Markov chain Monte Carlo algorithm. Inference for each coefficient is based on an average over models with different numbers of pieces and jump points. A simulation study with three covariates, each with a coefficient of a different degree of temporal dynamics, confirmed that the dynamic model is preferred to the existing time-varying model in terms of model comparison criteria through the conditional predictive ordinate. When applied to dental health data of children aged between 7 and 12 years, the dynamic model reveals that the relative risk of emergence of permanent tooth 24 between children with and without an infected primary predecessor is highest at around age 7.5, and that it gradually reduces to one after age 11. These findings were not seen in the existing studies with Cox proportional hazards models.

  17. Dynamic modelling of n-of-1 data: powerful and flexible data analytics applied to individualised studies.

    PubMed

    Vieira, Rute; McDonald, Suzanne; Araújo-Soares, Vera; Sniehotta, Falko F; Henderson, Robin

    2017-09-01

    N-of-1 studies are based on repeated observations within an individual or unit over time and are acknowledged as an important research method for generating scientific evidence about the health or behaviour of an individual. Statistical analyses of n-of-1 data require accurate modelling of the outcome while accounting for its distribution, time-related trend and error structures (e.g., autocorrelation) as well as reporting readily usable contextualised effect sizes for decision-making. A number of statistical approaches have been documented but no consensus exists on which method is most appropriate for which type of n-of-1 design. We discuss the statistical considerations for analysing n-of-1 studies and briefly review some currently used methodologies. We describe dynamic regression modelling as a flexible and powerful approach, adaptable to different types of outcomes and capable of dealing with the different challenges inherent to n-of-1 statistical modelling. Dynamic modelling borrows ideas from longitudinal and event history methodologies which explicitly incorporate the role of time and the influence of past on future. We also present an illustrative example of the use of dynamic regression on monitoring physical activity during the retirement transition. Dynamic modelling has the potential to expand researchers' access to robust and user-friendly statistical methods for individualised studies.

  18. Forest dynamics to precipitation and temperature in the Gulf of Mexico coastal region.

    PubMed

    Li, Tianyu; Meng, Qingmin

    2017-05-01

    The forest is one of the most significant components of the Gulf of Mexico (GOM) coast. It provides livelihoods to inhabitants and is known to be sensitive to climatic fluctuations. This study focuses on examining the impacts of temperature and precipitation variations on coastal forest. Two different regression methods, ordinary least squares (OLS) and geographically weighted regression (GWR), were employed to reveal the relationship between meteorological variables and forest dynamics. OLS regression analysis shows that changes in precipitation and temperature, over a span of 12 months, are responsible for 56% of NDVI variation. The forest, which is not particularly affected by the average monthly precipitation in most months, is observed to be affected explicitly by cumulative seasonal and annual precipitation. Temperature and precipitation impact NDVI changes almost equally; about 50% of the NDVI variation is explained in OLS modeling, and about 74% of the NDVI variation is explained in GWR modeling. GWR analysis indicated that both precipitation and temperature characterize the spatial heterogeneity patterns of forest dynamics.
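Geographically weighted regression fits a separate weighted least-squares model at each location, with weights that decay with distance; the compact hand-rolled sketch below uses synthetic coordinates, a Gaussian kernel, and an arbitrary bandwidth, so it is an illustration of the idea rather than a reproduction of the study.

```python
# Minimal geographically weighted regression (GWR): at each site, fit weighted
# least squares with Gaussian distance weights, so coefficients vary in space.
import numpy as np

rng = np.random.default_rng(5)
n = 300
coords = rng.uniform(0, 100, size=(n, 2))                 # site locations (km)
precip = rng.normal(1000, 150, size=n)
temp = rng.normal(20, 3, size=n)
beta_p = 1e-4 * coords[:, 0]                              # spatially varying precip effect
ndvi = 0.2 + beta_p * precip + 0.005 * temp + rng.normal(scale=0.05, size=n)

X = np.column_stack([np.ones(n), precip, temp])
bandwidth = 20.0                                          # Gaussian kernel bandwidth (km)

local_coefs = np.empty((n, X.shape[1]))
for i in range(n):
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    W = np.diag(w)
    local_coefs[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ ndvi)

print("precipitation coefficient range:",
      local_coefs[:, 1].min(), local_coefs[:, 1].max())
```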

  19. Distributed collaborative probabilistic design for turbine blade-tip radial running clearance using support vector machine of regression

    NASA Astrophysics Data System (ADS)

    Fei, Cheng-Wei; Bai, Guang-Chen

    2014-12-01

    To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies such as the blade-tip radial running clearance (BTRRC) of a gas turbine, a distributed collaborative probabilistic design method based on support vector regression (SR), called DCSRM, is proposed by integrating the distributed collaborative response surface method with a support vector regression model. The mathematical model of DCSRM is established and its probabilistic design idea is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is carried out to verify the proposed DCSRM. The analysis results reveal that an optimal static blade-tip clearance of the HPT is obtained for the BTRRC design, improving the performance and reliability of the aeroengine. The comparison of methods shows that the DCSRM has high computational accuracy and efficiency in BTRRC probabilistic analysis. The present research offers an effective way for the reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.
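The core computational pattern here is a support vector regression surrogate of an expensive assembly response, followed by Monte Carlo sampling on the cheap surrogate for the probabilistic analysis; the sketch below uses scikit-learn, and the clearance function, input distributions, and limit value are all invented.

```python
# Surrogate-assisted probabilistic analysis: fit an SVR to a few expensive
# "simulations" of a clearance response, then run Monte Carlo on the surrogate.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)

def expensive_clearance(x):
    # Stand-in for a coupled thermal/structural simulation of tip clearance (mm).
    return 1.5 - 0.3 * x[:, 0] + 0.1 * x[:, 1] ** 2 + 0.05 * np.sin(3 * x[:, 2])

X_train = rng.normal(size=(200, 3))                    # sampled design variables
y_train = expensive_clearance(X_train)

surrogate = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
surrogate.fit(X_train, y_train)

X_mc = rng.normal(size=(100_000, 3))                   # cheap Monte Carlo on the surrogate
clearance = surrogate.predict(X_mc)
print("P(clearance < 0.8 mm) =", np.mean(clearance < 0.8))
```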

  20. Forest dynamics to precipitation and temperature in the Gulf of Mexico coastal region

    NASA Astrophysics Data System (ADS)

    Li, Tianyu; Meng, Qingmin

    2017-05-01

    The forest is one of the most significant components of the Gulf of Mexico (GOM) coast. It provides livelihood to inhabitant and is known to be sensitive to climatic fluctuations. This study focuses on examining the impacts of temperature and precipitation variations on coastal forest. Two different regression methods, ordinary least squares (OLS) and geographically weighted regression (GWR), were employed to reveal the relationship between meteorological variables and forest dynamics. OLS regression analysis shows that changes in precipitation and temperature, over a span of 12 months, are responsible for 56% of NDVI variation. The forest, which is not particularly affected by the average monthly precipitation in most months, is observed to be affected by cumulative seasonal and annual precipitation explicitly. Temperature and precipitation almost equally impact on NDVI changes; about 50% of the NDVI variations is explained in OLS modeling, and about 74% of the NDVI variations is explained in GWR modeling. GWR analysis indicated that both precipitation and temperature characterize the spatial heterogeneity patterns of forest dynamics.

  1. Physiologic noise regression, motion regression, and TOAST dynamic field correction in complex-valued fMRI time series.

    PubMed

    Hahn, Andrew D; Rowe, Daniel B

    2012-02-01

    As more evidence is presented suggesting that the phase, as well as the magnitude, of functional MRI (fMRI) time series may contain important information and that there are theoretical drawbacks to modeling functional response in the magnitude alone, removing noise in the phase is becoming more important. Previous studies have shown that retrospective correction of noise from physiologic sources can remove significant phase variance and that dynamic main magnetic field correction and regression of estimated motion parameters also remove significant phase fluctuations. In this work, we investigate the performance of physiologic noise regression in a framework along with correction for dynamic main field fluctuations and motion regression. Our findings suggest that including physiologic regressors provides some benefit in terms of reduction in phase noise power, but it is small compared to the benefit of dynamic field corrections and use of estimated motion parameters as nuisance regressors. Additionally, we show that the use of all three techniques reduces phase variance substantially, removes undesirable spatial phase correlations and improves detection of the functional response in magnitude and phase. Copyright © 2011 Elsevier Inc. All rights reserved.
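Nuisance regression of this kind amounts to projecting each voxel's time series onto the complement of the regressor space (motion estimates, physiologic signals, field-fluctuation terms) via least squares; a minimal NumPy sketch with entirely synthetic regressors and data:

```python
# Remove nuisance signal (motion + physiologic regressors) from a voxel phase
# time series by ordinary least squares and keep the residual. Synthetic data.
import numpy as np

rng = np.random.default_rng(7)
T = 300
motion = rng.normal(size=(T, 6))                        # 6 rigid-body motion estimates
cardiac = np.sin(2 * np.pi * np.arange(T) / 10)         # toy physiologic regressors
respiration = np.sin(2 * np.pi * np.arange(T) / 40)

phase = (0.5 * motion[:, 0] + 0.3 * cardiac + 0.2 * respiration
         + rng.normal(scale=0.1, size=T))               # voxel phase with nuisance signal

design = np.column_stack([np.ones(T), motion, cardiac, respiration])
beta, *_ = np.linalg.lstsq(design, phase, rcond=None)
cleaned = phase - design @ beta                         # residual = denoised phase

print("variance before/after:", phase.var(), cleaned.var())
```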

  2. The dynamic model of enterprise revenue management

    NASA Astrophysics Data System (ADS)

    Mitsel, A. A.; Kataev, M. Yu; Kozlov, S. V.; Korepanov, K. V.

    2017-01-01

    The article presents a dynamic model of enterprise revenue management. The model is based on a quadratic criterion and a linear control law, and is founded on a multiple regression that links revenue with the financial performance of the enterprise. As a result, an optimal management policy is obtained that delivers the given enterprise revenue, namely, the values of the financial indicators that ensure the planned profit of the organization.

  3. Penalized nonparametric scalar-on-function regression via principal coordinates

    PubMed Central

    Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu

    2016-01-01

    A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
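The method regresses a scalar response on the leading principal coordinates of a distance matrix among the functional predictors, with a ridge penalty; in the small sketch below, a Euclidean distance on densely sampled curves stands in for the dynamic time warping distance used in the paper, and classical multidimensional scaling provides the principal coordinates.

```python
# Principal coordinate ridge regression, sketched: (1) distance matrix among
# functional predictors, (2) classical MDS for principal coordinates,
# (3) ridge regression of the scalar response on the leading coordinates.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.linear_model import Ridge

rng = np.random.default_rng(8)
n, grid = 150, np.linspace(0, 1, 100)
shift = rng.normal(size=n)
curves = np.sin(2 * np.pi * (grid[None, :] + 0.1 * shift[:, None]))  # functional predictors
y = 2.0 * shift + rng.normal(scale=0.3, size=n)                      # scalar response

D = squareform(pdist(curves))                    # Euclidean stand-in for a DTW distance
# Classical MDS: double-centre the squared distance matrix and eigendecompose.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1][:10]           # keep 10 leading coordinates
coords = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))

model = Ridge(alpha=1.0).fit(coords, y)
print("R^2 on training data:", model.score(coords, y))
```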

  4. Model-free inference of direct network interactions from nonlinear collective dynamics.

    PubMed

    Casadiego, Jose; Nitzan, Mor; Hallerberg, Sarah; Timme, Marc

    2017-12-19

    The topology of interactions in network dynamical systems fundamentally underlies their function. Accelerating technological progress creates massively available data about collective nonlinear dynamics in physical, biological, and technological systems. Detecting direct interaction patterns from those dynamics still constitutes a major open problem. In particular, current nonlinear dynamics approaches mostly require a priori knowledge of a model of the (often high-dimensional) system dynamics. Here we develop a model-independent framework for inferring direct interactions solely from recording the nonlinear collective dynamics generated. Introducing an explicit dependency matrix in combination with a block-orthogonal regression algorithm, the approach works reliably across many dynamical regimes, including transient dynamics toward steady states, periodic and non-periodic dynamics, and chaos. Together with its capabilities to reveal network (two point) as well as hypernetwork (e.g., three point) interactions, this framework may thus open up nonlinear dynamics options of inferring direct interaction patterns across systems where no model is known.

  5. Development of LACIE CCEA-1 weather/wheat yield models. [regression analysis

    NASA Technical Reports Server (NTRS)

    Strommen, N. D.; Sakamoto, C. M.; Leduc, S. K.; Umberger, D. E. (Principal Investigator)

    1979-01-01

    The advantages and disadvantages of the causal (phenological, dynamic, physiological), statistical regression, and analog approaches to modeling grain yield are examined. Given LACIE's primary goal of estimating wheat production for the large areas of eight major wheat-growing regions, the statistical regression approach of correlating historical yield and climate data offered the Center for Climatic and Environmental Assessment the greatest potential return within the constraints of time and data sources. The basic equation for the first generation wheat-yield model is given. Topics discussed include truncation, trend variable, selection of weather variables, episodic events, strata selection, operational data flow, weighting, and model results.

  6. History of research on modelling gypsy moth population ecology

    Treesearch

    J. J. Colbert

    1991-01-01

    History of research to develop models of gypsy moth population dynamics and some related studies are described. Empirical regression-based models are reviewed, and then the more comprehensive process models are discussed. Current model- related research efforts are introduced.

  7. Median regression spline modeling of longitudinal FEV1 measurements in cystic fibrosis (CF) and chronic obstructive pulmonary disease (COPD) patients.

    PubMed

    Conrad, Douglas J; Bailey, Barbara A; Hardie, Jon A; Bakke, Per S; Eagan, Tomas M L; Aarli, Bernt B

    2017-01-01

    Clinical phenotyping and therapeutic investigations, as well as genomic, airway secretion metabolomic, and metagenomic investigations, can benefit from robust, nonlinear modeling of FEV1 in individual subjects. We demonstrate the utility of measuring FEV1 dynamics in representative cystic fibrosis (CF) and chronic obstructive pulmonary disease (COPD) populations. Individual FEV1 data from CF and COPD subjects were modeled by estimating median regression splines and their predicted first and second derivatives. Classes were created from variables that capture the dynamics of these curves in both cohorts. Nine FEV1 dynamic variables were identified from the splines and their predicted derivatives in individuals with CF (n = 177) and COPD (n = 374). Three FEV1 dynamic classes (i.e. stable, intermediate and hypervariable) were generated and described using these variables from both cohorts. In the CF cohort, the FEV1 hypervariable class (HV) was associated with a clinically unstable, female-dominated phenotype, while stable FEV1 class (S) individuals were highly associated with the male-dominated, milder clinical phenotype. In the COPD cohort, associations were found between the FEV1 dynamic classes and the COPD GOLD grades, exacerbation frequency, and symptoms. Nonlinear modeling of FEV1 with splines provides new insights and is useful in characterizing CF and COPD clinical phenotypes.
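Median regression on a spline basis can be sketched with statsmodels' quantile regression and a patsy B-spline design matrix; the FEV1-like series below is simulated, and the paper's class-construction step is not reproduced.

```python
# Median (quantile 0.5) regression of a longitudinal FEV1-like series on a
# B-spline basis in time, giving a smooth nonlinear individual trajectory.
import numpy as np
import statsmodels.api as sm
from patsy import dmatrix

rng = np.random.default_rng(9)
t = np.sort(rng.uniform(0, 10, 120))                       # years of follow-up
fev1 = 3.0 - 0.05 * t - 0.02 * (t - 5) ** 2 * (t > 5) + rng.normal(scale=0.15, size=t.size)

basis = dmatrix("bs(t, df=6, degree=3)", {"t": t}, return_type="dataframe")
fit = sm.QuantReg(fev1, basis).fit(q=0.5)                  # median regression spline
trajectory = fit.predict(basis)                            # smooth fitted trajectory

# First derivative of the fitted curve by finite differences (cf. the paper's
# use of predicted derivatives to build the dynamic variables).
slope = np.gradient(trajectory, t)
print("mean fitted FEV1 slope over follow-up:", slope.mean())
```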

  8. Regression of Copper-Deficient Heart Hypertrophy: Reduction in the Size of Hypertrophic Cardiomyocytes

    USDA-ARS?s Scientific Manuscript database

    Dietary copper deficiency causes cardiac hypertrophy and its transition to heart failure in a mouse model. Copper repletion results in a rapid regression of cardiac hypertrophy and prevention of heart failure. The present study was undertaken to understand dynamic changes of cardiomyocytes in the hy...

  9. Trophic dilution of cyclic volatile methylsiloxanes (cVMS) in the pelagic marine food web of Tokyo Bay, Japan.

    PubMed

    Powell, David E; Suganuma, Noriyuki; Kobayashi, Keiji; Nakamura, Tsutomu; Ninomiya, Kouzo; Matsumura, Kozaburo; Omura, Naoki; Ushioka, Satoshi

    2017-02-01

    Bioaccumulation and trophic transfer of cyclic volatile methylsiloxanes (cVMS), specifically octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5), and dodecamethylcyclohexasiloxane (D6), were evaluated in the pelagic marine food web of Tokyo Bay, Japan. Polychlorinated biphenyl (PCB) congeners that are "legacy" chemicals known to bioaccumulate in aquatic organisms and biomagnify across aquatic food webs were used as a benchmark chemical (CB-180) to calibrate the sampled food web and as a reference chemical (CB-153) to validate the results. Trophic magnification factors (TMFs) were calculated from slopes of ordinary least-squares (OLS) regression models and slopes of bootstrap regression models, which were used as robust alternatives to the OLS models. Various regression models were developed that incorporated benchmarking to control bias associated with experimental design, food web dynamics, and trophic level structure. There was no evidence from any of the regression models to suggest biomagnification of cVMS in Tokyo Bay. Rather, the regression models indicated that trophic dilution of cVMS, not trophic magnification, occurred across the sampled food web. Comparison of results for Tokyo Bay with results from other studies indicated that bioaccumulation of cVMS was not related to type of food web (pelagic vs demersal), environment (marine vs freshwater), species composition, or location. Rather, results suggested that differences between study areas were likely related to food web dynamics and variable conditions of exposure resulting from non-uniform patterns of organism movement across spatial concentration gradients. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
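A trophic magnification factor is conventionally obtained as 10 raised to the slope of an OLS regression of log10 concentration on trophic level, with TMF > 1 indicating magnification and TMF < 1 indicating dilution; a tiny SciPy illustration on made-up data:

```python
# Trophic magnification factor (TMF) from an OLS regression of log10
# concentration on trophic level: TMF = 10**slope. Data are made up.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(10)
trophic_level = rng.uniform(2.0, 4.5, size=60)
# Simulate trophic dilution: concentration decreases with trophic level.
log_conc = 1.5 - 0.3 * trophic_level + rng.normal(scale=0.2, size=60)

fit = linregress(trophic_level, log_conc)
tmf = 10 ** fit.slope
print(f"slope = {fit.slope:.3f}, TMF = {tmf:.2f}  (TMF < 1 indicates trophic dilution)")
```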

  10. Forecasting daily meteorological time series using ARIMA and regression models

    NASA Astrophysics Data System (ADS)

    Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir

    2018-04-01

    The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 at four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecast. In our forecasting we used the Box-Jenkins and Holt-Winters seasonal autoregressive integrated moving-average methods, the autoregressive integrated moving-average with external regressors in the form of Fourier terms, and time series regression including trend and seasonality components, implemented with R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.
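The "ARIMA with Fourier-term regressors" variant mentioned above can be sketched by building sine/cosine harmonics of the annual cycle and passing them as exogenous regressors; the Python analogue below uses synthetic daily temperatures, an arbitrary harmonic count, and an arbitrary ARIMA order in place of the paper's R workflow.

```python
# ARIMA with external Fourier regressors for a daily temperature series: the
# Fourier terms capture the annual cycle, the ARIMA part the short-term dynamics.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n, period = 3 * 365, 365.25
day = np.arange(n)
temperature = 10 + 8 * np.sin(2 * np.pi * day / period) + rng.normal(scale=2, size=n)

K = 2                                              # number of Fourier harmonics
def fourier_terms(time_index):
    return np.column_stack(
        [f(2 * np.pi * k * time_index / period)
         for k in range(1, K + 1) for f in (np.sin, np.cos)]
    )

model = sm.tsa.SARIMAX(temperature, exog=fourier_terms(day), order=(2, 0, 1), trend="c")
result = model.fit(disp=False)
forecast = result.forecast(steps=30, exog=fourier_terms(n + np.arange(30)))
print(forecast[:5])
```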

  11. Emergency Department Visit Forecasting and Dynamic Nursing Staff Allocation Using Machine Learning Techniques With Readily Available Open-Source Software.

    PubMed

    Zlotnik, Alexander; Gallardo-Antolín, Ascensión; Cuchí Alfaro, Miguel; Pérez Pérez, María Carmen; Montero Martínez, Juan Manuel

    2015-08-01

    Although emergency department visit forecasting can be of use for nurse staff planning, previous research has focused on models that lacked sufficient resolution and realistic error metrics for these predictions to be applied in practice. Using data from a 1100-bed specialized care hospital with 553,000 patients assigned to its healthcare area, forecasts with different prediction horizons, from 2 to 24 weeks ahead and with an 8-hour granularity, were generated with an open-source software package using support vector regression, M5P, and stratified average time-series models. As overstaffing and understaffing errors have different implications, error metrics and potential personnel monetary savings were calculated with a custom validation scheme, which simulated subsequent generation of predictions during a 4-year period. Results were then compared with a generalized estimating equation regression. Support vector regression and M5P models were found to be superior to the stratified average model with a 95% confidence interval. Our findings suggest that medium and severe understaffing situations could be reduced by more than an order of magnitude and average yearly savings of up to €683,500 could be achieved if dynamic nursing staff allocation were performed with support vector regression instead of the static staffing levels currently in use.

  12. Spatio-Temporal Regression Based Clustering of Precipitation Extremes in a Presence of Systematically Missing Covariates

    NASA Astrophysics Data System (ADS)

    Kaiser, Olga; Martius, Olivia; Horenko, Illia

    2017-04-01

    Regression-based Generalized Pareto Distribution (GPD) models are often used to describe the dynamics of hydrological threshold excesses, relying on the explicit availability of all of the relevant covariates. However, in real applications the complete set of relevant covariates might not be available. In this context, it has been shown that, under weak assumptions, the influence of systematically missing covariates can be reflected by nonstationary and nonhomogeneous dynamics. We present a data-driven, semiparametric, adaptive approach for spatio-temporal regression-based clustering of threshold excesses in the presence of systematically missing covariates. The nonstationary and nonhomogeneous behavior of threshold excesses is described by a set of locally stationary GPD models, whose parameters are expressed as regression models, and a nonparametric spatio-temporal hidden switching process. Exploiting the nonparametric Finite Element time-series analysis Methodology (FEM) with Bounded Variation of the model parameters (BV) to resolve the spatio-temporal switching process, the approach goes beyond the strong a priori assumptions made in standard latent class models such as mixture models and hidden Markov models. Additionally, the presented FEM-BV-GPD provides a pragmatic description of the corresponding spatial dependence structure by grouping together all locations that exhibit similar behavior of the switching process. The performance of the framework is demonstrated on daily accumulated precipitation series at 17 different locations in Switzerland from 1981 to 2013, showing that the introduced approach allows for a better description of the historical data.

  13. Reconstructing latent dynamical noise for better forecasting observables

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito

    2018-03-01

    I propose a method for reconstructing multi-dimensional dynamical noise inspired by the embedding theorem of Muldoon et al. [Dyn. Stab. Syst. 13, 175 (1998)] by regarding multiple predictions as different observables. Then, applying the embedding theorem by Stark et al. [J. Nonlinear Sci. 13, 519 (2003)] for a forced system, I produce time series forecasts by supplying the reconstructed past dynamical noise as auxiliary information. I demonstrate the proposed method on toy models driven by auto-regressive models or independent Gaussian noise.

  14. Analysis and prediction of flow from local source in a river basin using a Neuro-fuzzy modeling tool.

    PubMed

    Aqil, Muhammad; Kita, Ichiro; Yano, Akira; Nishiyama, Soichi

    2007-10-01

    Traditionally, the multiple linear regression technique has been one of the most widely used models for simulating hydrological time series. However, when the nonlinear phenomenon is significant, multiple linear regression will fail to develop an appropriate predictive model. Recently, neuro-fuzzy systems have gained much popularity for calibrating nonlinear relationships. This study evaluated the potential of a neuro-fuzzy system as an alternative to the traditional statistical regression technique for the purpose of predicting flow from a local source in a river basin. The effectiveness of the proposed identification technique was demonstrated through a simulation study of the river flow time series of the Citarum River in Indonesia. Furthermore, in order to quantify the uncertainty associated with the estimation of river flow, a Monte Carlo simulation was performed. As a comparison, a multiple linear regression analysis that was being used by the Citarum River Authority was also examined using various statistical indices. The simulation results using 95% confidence intervals indicated that the neuro-fuzzy model consistently underestimated the magnitude of high flow while the low and medium flow magnitudes were estimated closer to the observed data. The comparison of the prediction accuracy of the neuro-fuzzy and linear regression methods indicated that the neuro-fuzzy approach was more accurate in predicting river flow dynamics. The neuro-fuzzy model was able to improve the root mean square error (RMSE) and mean absolute percentage error (MAPE) values of the multiple linear regression forecasts by about 13.52% and 10.73%, respectively. Considering its simplicity and efficiency, the neuro-fuzzy model is recommended as an alternative tool for modeling flow dynamics in the study area.

  15. Dynamics of Markets

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2009-09-01

    Preface; 1. Econophysics: why and what; 2. Neo-classical economic theory; 3. Probability and stochastic processes; 4. Introduction to financial economics; 5. Introduction to portfolio selection theory; 6. Scaling, pair correlations, and conditional densities; 7. Statistical ensembles: deducing dynamics from time series; 8. Martingale option pricing; 9. FX market globalization: evolution of the dollar to worldwide reserve currency; 10. Macroeconomics and econometrics: regression models vs. empirically based modeling; 11. Complexity; Index.

  16. Variational dynamic background model for keyword spotting in handwritten documents

    NASA Astrophysics Data System (ADS)

    Kumar, Gaurav; Wshah, Safwan; Govindaraju, Venu

    2013-12-01

    We propose a Bayesian framework for keyword spotting in handwritten documents. This work extends our previous work, in which we proposed the dynamic background model (DBM) for keyword spotting, which takes into account local character-level scores and global word-level scores to learn a logistic regression classifier that separates keywords from non-keywords. In this work, we add a Bayesian layer on top of the DBM, called the variational dynamic background model (VDBM). The logistic regression classifier uses the sigmoid function to separate keywords from non-keywords. Because the sigmoid function is neither convex nor concave, exact inference in the VDBM becomes intractable, so an expectation-maximization step is proposed for approximate inference. The advantages of the VDBM over the DBM are manifold. First, being Bayesian, it prevents overfitting. Second, it provides better modeling of the data and improved prediction of unseen data. The VDBM is evaluated on the IAM dataset and the results show that it outperforms our prior work and other state-of-the-art line-based word spotting systems.

  17. The Use of Multiple Regression and Trend Analysis to Understand Enrollment Fluctuations. AIR Forum 1979 Paper.

    ERIC Educational Resources Information Center

    Campbell, S. Duke; Greenberg, Barry

    The development of a predictive equation capable of explaining a significant percentage of enrollment variability at Florida International University is described. A model utilizing trend analysis and a multiple regression approach to enrollment forecasting was adapted to investigate enrollment dynamics at the university. Four independent…

  18. A Bayesian methodological framework for accommodating interannual variability of nutrient loading with the SPARROW model

    NASA Astrophysics Data System (ADS)

    Wellen, Christopher; Arhonditsis, George B.; Labencki, Tanya; Boyd, Duncan

    2012-10-01

    Regression-type, hybrid empirical/process-based models (e.g., SPARROW, PolFlow) have assumed a prominent role in efforts to estimate the sources and transport of nutrient pollution at river basin scales. However, almost no attempts have been made to explicitly accommodate interannual nutrient loading variability in their structure, despite empirical and theoretical evidence indicating that the associated source/sink processes are quite variable at annual timescales. In this study, we present two methodological approaches to accommodate interannual variability with the Spatially Referenced Regressions on Watershed attributes (SPARROW) nonlinear regression model. The first strategy uses the SPARROW model to estimate a static baseline load and climatic variables (e.g., precipitation) to drive the interannual variability. The second approach allows the source/sink processes within the SPARROW model to vary at annual timescales using dynamic parameter estimation techniques akin to those used in dynamic linear models. Model parameterization is founded upon Bayesian inference techniques that explicitly consider calibration data and model uncertainty. Our case study is the Hamilton Harbor watershed, a mixed agricultural and urban residential area located at the western end of Lake Ontario, Canada. Our analysis suggests that dynamic parameter estimation is the more parsimonious of the two strategies tested and can offer insights into the temporal structural changes associated with watershed functioning. Consistent with empirical and theoretical work, model estimated annual in-stream attenuation rates varied inversely with annual discharge. Estimated phosphorus source areas were concentrated near the receiving water body during years of high in-stream attenuation and dispersed along the main stems of the streams during years of low attenuation, suggesting that nutrient source areas are subject to interannual variability.

  19. Accurate Descriptions of Hot Flow Behaviors Across β Transus of Ti-6Al-4V Alloy by Intelligence Algorithm GA-SVR

    NASA Astrophysics Data System (ADS)

    Wang, Li-yong; Li, Le; Zhang, Zhi-hua

    2016-09-01

    Hot compression tests of Ti-6Al-4V alloy over a wide temperature range of 1023-1323 K and strain rate range of 0.01-10 s⁻¹ were conducted on a servo-hydraulic, computer-controlled Gleeble-3500 machine. In order to accurately and effectively characterize the highly nonlinear flow behaviors, support vector regression (SVR), a machine learning method, was combined with a genetic algorithm (GA), giving the GA-SVR. A prominent feature of the GA-SVR is that, with identical training parameters, it keeps training accuracy and prediction accuracy at a stable level across different attempts on a given dataset. The learning abilities, generalization abilities, and modeling efficiencies of the mathematical regression model, an ANN, and the GA-SVR for Ti-6Al-4V alloy were compared in detail. The comparison shows that the learning ability of the GA-SVR is stronger than that of the mathematical regression model. The generalization abilities and modeling efficiencies of these models rank in ascending order as: mathematical regression model < ANN < GA-SVR. Stress-strain data outside the experimental conditions were predicted by the well-trained GA-SVR, which improved the simulation accuracy of the load-stroke curve and can further benefit related research where stress-strain data play important roles, such as estimating work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.
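The GA-SVR combination amounts to using a genetic algorithm to search SVR hyperparameters (C, gamma, epsilon) against a cross-validation objective; the compact sketch below is generic, with the flow-stress data replaced by a synthetic function and the GA settings chosen arbitrarily.

```python
# Genetic-algorithm tuning of SVR hyperparameters (log10 C, log10 gamma,
# log10 epsilon) against cross-validated MSE. Synthetic stand-in data.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(12)
X = rng.uniform(-1, 1, size=(300, 3))          # e.g. scaled strain, strain rate, temperature
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=300)

def fitness(genome):
    C, gamma, eps = 10.0 ** genome
    svr = SVR(C=C, gamma=gamma, epsilon=eps)
    return cross_val_score(svr, X, y, cv=3, scoring="neg_mean_squared_error").mean()

bounds = np.array([[-1, 3], [-3, 1], [-4, 0]])            # log10 ranges for C, gamma, epsilon
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(20, 3))
for generation in range(15):
    scores = np.array([fitness(g) for g in pop])
    order = np.argsort(scores)[::-1]                      # higher (less negative) is better
    parents = pop[order[:10]]                             # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(3) < 0.5, a, b)       # uniform crossover
        child += rng.normal(scale=0.1, size=3)            # Gaussian mutation
        children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(g) for g in pop])]
print("best log10(C, gamma, epsilon):", best)
```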

  20. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  1. Dynamic regression modeling of daily nitrate-nitrogen concentrations in a large agricultural watershed.

    PubMed

    Feng, Zhujing; Schilling, Keith E; Chan, Kung-Sik

    2013-06-01

    Nitrate-nitrogen concentrations in rivers represent challenges for water supplies that use surface water sources. Nitrate concentrations are often modeled using time-series approaches, but previous efforts have typically relied on monthly time steps. In this study, we developed a dynamic regression model of daily nitrate concentrations in the Raccoon River, Iowa, that incorporated contemporaneous values and lags of precipitation and discharge occurring at several locations around the basin. Results suggested that 95% of the variation in daily nitrate concentrations measured at the outlet of a large agricultural watershed can be explained by time-series patterns of precipitation and discharge occurring in the basin. Discharge was found to be a more important regression variable than precipitation in our model, but both regression parameters were strongly correlated with nitrate concentrations. The time-series model was consistent with known patterns of nitrate behavior in the watershed, successfully identifying contemporaneous dilution mechanisms from higher-relief and urban areas of the basin while incorporating the delayed contribution of nitrate from tile-drained regions in a lagged response. The first difference of the model errors was modeled as an AR(16) process, suggesting that daily nitrate concentration changes remain temporally correlated for more than 2 weeks, although the temporal correlation was stronger in the first few days before tapering off. Consequently, daily nitrate concentrations are non-stationary, i.e., they exhibit strong memory. Using time-series models to reliably forecast daily nitrate concentrations in a river based on patterns of precipitation and discharge occurring in its basin may be of great interest to water suppliers.
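A dynamic regression with contemporaneous and lagged drivers plus autocorrelated errors can be sketched with pandas lag features and statsmodels' GLSAR; the series below are synthetic, and the lag choices and AR(2) error order are illustrative, not the paper's AR(16) specification.

```python
# Dynamic regression sketch: nitrate concentration regressed on contemporaneous
# and lagged discharge/precipitation, with autocorrelated errors via GLSAR.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(13)
n = 1000
precip = rng.gamma(shape=0.5, scale=4.0, size=n)
discharge = pd.Series(precip).rolling(5, min_periods=1).mean().to_numpy() \
            + rng.normal(scale=0.2, size=n)
err = np.zeros(n)
for t in range(1, n):                                # autocorrelated error process
    err[t] = 0.7 * err[t - 1] + rng.normal(scale=0.3)
nitrate = 8 - 0.5 * discharge + 0.3 * np.roll(discharge, 3) + err   # delayed tile-drain signal

df = pd.DataFrame({"nitrate": nitrate, "discharge": discharge, "precip": precip})
for lag in (1, 3, 7):
    df[f"discharge_lag{lag}"] = df["discharge"].shift(lag)
    df[f"precip_lag{lag}"] = df["precip"].shift(lag)
df = df.dropna()

X = sm.add_constant(df.drop(columns="nitrate"))
model = sm.GLSAR(df["nitrate"], X, rho=2)            # AR(2) error structure for brevity
result = model.iterative_fit(maxiter=10)
print(result.params)
```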

  2. A hybrid regional approach to model discharge at multiple sub-basins within the Calapooia Watershed, Oregon, USA

    EPA Science Inventory

    Modeling is a useful tool for quantifying ecosystem services and understanding their temporal dynamics. Here we describe a hybrid regional modeling approach for sub-basins of the Calapooia watershed that incorporates both a precipitation-runoff model and an indexed regression mo...

  3. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    DOE PAGES

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-07-14

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
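
    A minimal sketch of the locality idea, under the assumption that synthetic "error indicator" features and surrogate errors stand in for quantities that would come from high-fidelity and reduced-order simulations: feature space is clustered, and one random-forest error model is fit per cluster.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)
    indicators = rng.normal(size=(2000, 12))     # error indicators produced by the surrogate
    qoi_error = (np.sin(indicators[:, 0]) + 0.3 * indicators[:, 1] ** 2
                 + rng.normal(0, 0.05, 2000))    # surrogate-model error in the QoI

    # Step 1: determine regression-model locality by clustering feature space.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit(indicators)

    # Step 2: fit a "local" regression within each identified region.
    local_models = {}
    for label in np.unique(clusters.labels_):
        mask = clusters.labels_ == label
        local_models[label] = RandomForestRegressor(n_estimators=200, random_state=0).fit(
            indicators[mask], qoi_error[mask])

    def predict_error(new_indicators):
        """Route each sample to its region's model and predict the surrogate error there."""
        labels = clusters.predict(new_indicators)
        out = np.empty(len(new_indicators))
        for label, model in local_models.items():
            sel = labels == label
            if sel.any():
                out[sel] = model.predict(new_indicators[sel])
        return out

    # Use (1): add the predicted error to the surrogate QoI prediction as a correction.
    print(predict_error(indicators[:5]))
    ```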

  5. Effects of intracerebroventricular administration of beta-amyloid on the dynamics of learning in purebred and mongrel rats.

    PubMed

    Stepanov, I I; Kuznetsova, N N; Klement'ev, B I; Sapronov, N S

    2007-07-01

    The effects of intracerebroventricular administration of the beta-amyloid peptide fragment Abeta(25-35) on the dynamics of the acquisition of a conditioned reflex in a Y maze were studied in Wistar and mongrel rats. The dynamics of decreases in the number of errors were assessed using an exponential mathematical model describing the transfer function of a first-order system in response to stepped inputs, fitted by non-linear regression analysis. This mathematical model provided a good approximation to the learning dynamics in both purebred and mongrel rats. In Wistar rats, beta-amyloid impaired learning, with reduced memory between the first and second training sessions, but without complete blockade of learning. As a result, learning dynamics were no longer approximated by the mathematical model. At the same time, comparison of the number of errors in each training session between the control group of Wistar rats and the group given beta-amyloid showed no significant differences (Student's t test). This result demonstrates the advantage of regression analysis based on a mathematical model over traditionally used statistical methods. In mongrel rats, the effect of beta-amyloid was limited to a slowing of the learning process compared with control mongrel rats, with retention of the approximation by the mathematical model. It is suggested that mongrel animals have some kind of innate, genetically determined protective mechanism against the harmful effects of beta-amyloid.

  6. Multiscale regression modeling in mouse supraspinatus tendons reveals that dynamic processes act as mediators in structure-function relationships.

    PubMed

    Connizzo, Brianne K; Adams, Sheila M; Adams, Thomas H; Jawad, Abbas F; Birk, David E; Soslowsky, Louis J

    2016-06-14

    Recent advances in technology have allowed for the measurement of dynamic processes (re-alignment, crimp, deformation, sliding), but only a limited number of studies have investigated their relationship with mechanical properties. The overall objective of this study was to investigate the role of composition, structure, and the dynamic response to load in predicting tendon mechanical properties in a multi-level fashion mimicking native hierarchical collagen structure. Multiple linear regression models were investigated to determine the relationships between composition/structure, dynamic processes, and mechanical properties. Mediation was then used to determine if dynamic processes mediated structure-function relationships. Dynamic processes were strong predictors of mechanical properties. These predictions were location-dependent, with the insertion site utilizing all four dynamic responses and the midsubstance responding primarily with fibril deformation and sliding. In addition, dynamic processes were moderately predicted by composition and structure in a regionally-dependent manner. Finally, dynamic processes were partial mediators of the relationship between composition/structure and mechanical function, and results suggested that mediation is likely shared between multiple dynamic processes. In conclusion, the mechanical properties at the midsubstance of the tendon are controlled primarily by fibril structure and this region responds to load via fibril deformation and sliding. Conversely, the mechanical function at the insertion site is controlled by many other important parameters and the region responds to load via all four dynamic mechanisms. Overall, this study presents a strong foundation on which to design future experimental and modeling efforts in order to fully understand the complex structure-function relationships present in tendon. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Rapid performance modeling and parameter regression of geodynamic models

    NASA Astrophysics Data System (ADS)

    Brown, J.; Duplyakin, D.

    2016-12-01

    Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and scientifically relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian process regression to automatically select experiments that map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
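
    The active-learning loop can be sketched as follows, assuming a hypothetical run_experiment cost function in place of an actual geodynamics run: a Gaussian process surrogate of runtime is refit after each experiment, and the next configuration chosen is the one with the largest predictive uncertainty.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(3)

    def run_experiment(x):
        # Hypothetical runtime as a function of (normalized mesh size, core count); illustrative only.
        return 10 * x[0] ** 2 / (1 + x[1]) + rng.normal(0, 0.1)

    candidates = rng.uniform(0, 1, size=(500, 2))        # machine/model parameter grid
    X = candidates[rng.choice(500, 5, replace=False)]    # small initial design
    y = np.array([run_experiment(x) for x in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(20):                                  # active-learning iterations
        gp.fit(X, y)
        mean, std = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(std)]              # most uncertain configuration
        X = np.vstack([X, x_next])
        y = np.append(y, run_experiment(x_next))

    print("final mean predictive std:", gp.predict(candidates, return_std=True)[1].mean())
    ```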

  8. Approaches to stream solute load estimation for solutes with varying dynamics from five diverse small watersheds

    USGS Publications Warehouse

    Aulenbach, Brent T.; Burns, Douglas A.; Shanley, James B.; Yanai, Ruth D.; Bae, Kikang; Wild, Adam; Yang, Yang; Yi, Dong

    2016-01-01

    Estimating streamwater solute loads is a central objective of many water-quality monitoring and research studies, as loads are used to compare with atmospheric inputs, to infer biogeochemical processes, and to assess whether water quality is improving or degrading. In this study, we evaluate loads and associated errors to determine the best load estimation technique among three methods (a period-weighted approach, the regression-model method, and the composite method) based on a solute's concentration dynamics and sampling frequency. We evaluated a broad range of concentration dynamics varying with stream flow and season using four dissolved solutes (sulfate, silica, nitrate, and dissolved organic carbon) at five diverse small watersheds (Sleepers River Research Watershed, VT; Hubbard Brook Experimental Forest, NH; Biscuit Brook Watershed, NY; Panola Mountain Research Watershed, GA; and Río Mameyes Watershed, PR) with fairly high-frequency sampling during a 10- to 11-yr period. Data sets with three different sampling frequencies were derived from the full data set at each site (weekly plus storm/snowmelt events, weekly, and monthly) and errors in loads were assessed for the study period, annually, and monthly. For solutes that had a moderate to strong concentration–discharge relation, the composite method performed best, unless the autocorrelation of the model residuals was <0.2, in which case the regression-model method was most appropriate. For solutes that had a nonexistent or weak concentration–discharge relation (model R2 < about 0.3), the period-weighted approach was most appropriate. The lowest errors in loads were achieved for solutes with the strongest concentration–discharge relations. Sample and regression model diagnostics could be used to approximate overall accuracies and annual precisions. For the period-weighted approach, errors were lower when the variance in concentrations was lower, the degree of autocorrelation in the concentrations was higher, and sampling frequency was higher. The period-weighted approach was most sensitive to sampling frequency. For the regression-model and composite methods, errors were lower when the variance in model residuals was lower. For the composite method, errors were lower when the autocorrelation in the residuals was higher. Guidelines to determine the best load estimation method based on solute concentration–discharge dynamics and diagnostics are presented, and should be applicable to other studies.

  9. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
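
    A hedged sketch of the model-selection step described above: fit a simple turbidity-only regression, then a turbidity-plus-streamflow regression, and keep the larger model only if streamflow is statistically significant and the uncertainty improves. The synthetic data and the simplified way MSPE is computed here are assumptions for illustration, not the published USGS procedure.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    turbidity = rng.lognormal(2.0, 0.8, 200)
    streamflow = rng.lognormal(3.0, 0.6, 200)
    ssc = 1.8 * turbidity + 0.05 * streamflow + rng.normal(0, 5, 200)  # suspended-sediment concentration

    simple = sm.OLS(ssc, sm.add_constant(pd.DataFrame({"turbidity": turbidity}))).fit()
    multiple = sm.OLS(ssc, sm.add_constant(pd.DataFrame({"turbidity": turbidity,
                                                         "streamflow": streamflow}))).fit()

    def mspe(fit):
        # Model standard error expressed as a percentage of the mean response (simplified).
        return 100 * np.sqrt(fit.mse_resid) / fit.model.endog.mean()

    use_multiple = (multiple.pvalues["streamflow"] < 0.05) and (mspe(multiple) < mspe(simple))
    chosen = multiple if use_multiple else simple
    print("streamflow retained:", use_multiple, " chosen model MSPE (%):", round(mspe(chosen), 1))
    ```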

  10. Penalized spline estimation for functional coefficient regression models.

    PubMed

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan

    2010-04-01

    Functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate estimation, inference, and forecasting for functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge-regression shrinkage-type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for a fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection in a mixed-model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different degrees of smoothness for different functional coefficients, achieved by assigning different penalties λ accordingly. We demonstrate the proposed approach with both simulation examples and a real data application.
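
    A rough sketch of a penalized-spline functional-coefficient fit, with each coefficient function expanded in a B-spline basis and a second-order difference penalty applied to its coefficient block; the basis size, penalty weight and data are illustrative assumptions (in practice λ would be chosen by MCV, GCV, EBBS or REML as discussed above).

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    rng = np.random.default_rng(9)
    n = 800
    z = rng.uniform(0, 1, n)                      # "threshold" variable
    x = rng.normal(size=n)
    y = np.sin(2 * np.pi * z) + (1 + 2 * z ** 2) * x + rng.normal(0, 0.2, n)

    def bspline_basis(u, n_basis=10, degree=3):
        """Clamped cubic B-spline basis evaluated at u (columns = basis functions)."""
        knots = np.r_[[0.0] * degree, np.linspace(0, 1, n_basis - degree + 1), [1.0] * degree]
        return np.column_stack([BSpline(knots, np.eye(n_basis)[j], degree)(u)
                                for j in range(n_basis)])

    B = bspline_basis(z)
    nb = B.shape[1]
    design = np.hstack([B, B * x[:, None]])       # blocks for beta0(z) and beta1(z)*x

    # P-spline: second-order difference penalty on each coefficient block.
    D = np.diff(np.eye(nb), n=2, axis=0)
    penalty = np.kron(np.eye(2), D.T @ D)
    lam = 1.0                                     # smoothing parameter (fixed here for simplicity)
    coef = np.linalg.solve(design.T @ design + lam * penalty, design.T @ y)

    beta1_09 = bspline_basis(np.array([0.9])) @ coef[nb:]
    print("estimated beta1(0.9):", round(beta1_09[0], 2), " (true value 2.62)")
    ```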

  11. Q-learning residual analysis: application to the effectiveness of sequences of antipsychotic medications for patients with schizophrenia.

    PubMed

    Ertefaie, Ashkan; Shortreed, Susan; Chakraborty, Bibhas

    2016-06-15

    Q-learning is a regression-based approach that uses longitudinal data to construct dynamic treatment regimes, which are sequences of decision rules that use patient information to inform future treatment decisions. An optimal dynamic treatment regime is composed of a sequence of decision rules that indicate how to optimally individualize treatment using the patients' baseline and time-varying characteristics to optimize the final outcome. Constructing optimal dynamic regimes using Q-learning depends heavily on the assumption that regression models at each decision point are correctly specified; yet model checking in the context of Q-learning has been largely overlooked in the current literature. In this article, we show that residual plots obtained from standard Q-learning models may fail to adequately check the quality of the model fit. We present a modified Q-learning procedure that accommodates residual analyses using standard tools. We present simulation studies showing the advantage of the proposed modification over standard Q-learning. We illustrate this new Q-learning approach using data collected from a sequential multiple assignment randomized trial of patients with schizophrenia. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Improving the long-lead predictability of El Niño using a novel forecasting scheme based on a dynamic components model

    NASA Astrophysics Data System (ADS)

    Petrova, Desislava; Koopman, Siem Jan; Ballester, Joan; Rodó, Xavier

    2017-02-01

    El Niño (EN) is a dominant feature of climate variability on inter-annual time scales, driving changes in the climate throughout the globe and having widespread natural and socio-economic consequences. Its forecast is therefore an important task, and predictions are issued on a regular basis by a wide array of prediction schemes and climate centres around the world. This study explores a novel method for EN forecasting. The advantageous statistical technique of unobserved components time series modeling, also known as structural time series modeling, has not previously been applied to this problem. We have therefore developed such a model in which the statistical analysis, including parameter estimation and forecasting, is based on state space methods and includes the celebrated Kalman filter. The distinguishing feature of this dynamic model is the decomposition of a time series into a range of stochastically time-varying components such as level (or trend), seasonal, cycles of different frequencies, irregular, and regression effects incorporated as explanatory covariates. These components are modeled separately and ultimately combined in a single forecasting scheme. Customary statistical models for EN prediction essentially use SST and wind stress in the equatorial Pacific. In addition to these, we introduce a new set of regression variables accounting for the state of the subsurface ocean temperature in the western and central equatorial Pacific, motivated by our analysis, as well as by recent and classical research showing that subsurface processes and heat accumulation there are fundamental for the genesis of EN. An important feature of the scheme is that different regression predictors are used at different lead months, thus capturing the dynamical evolution of the system and yielding more efficient forecasts. The new model has been tested on the prediction of all warm events that occurred in the period 1996-2015. Retrospective forecasts of these events were made for long lead times of at least two and a half years. Hence, the present study demonstrates that the theoretical limit of ENSO prediction should be sought at lead times much longer than the commonly accepted "Spring Barrier". The high correspondence between the forecasts and observations indicates that the proposed model outperforms all current operational statistical models and behaves comparably to the best dynamical models used for EN prediction. Thus, the novel way in which the modeling scheme has been structured could also be used to improve other statistical and dynamical modeling systems.
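
    The flavour of such an unobserved-components model with regression effects can be sketched with statsmodels; the synthetic index, the single subsurface covariate and the component choices below are illustrative assumptions and not the authors' actual scheme.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 360  # hypothetical monthly record
    t = np.arange(n)
    warm_water_volume = np.sin(2 * np.pi * t / 48) + rng.normal(0, 0.2, n)   # subsurface covariate
    nino34 = (0.02 * t / 12                           # slow trend
              + 0.4 * np.sin(2 * np.pi * t / 12)      # seasonal cycle
              + 0.8 * np.roll(warm_water_volume, 6)   # lagged subsurface influence
              + rng.normal(0, 0.3, n))

    exog = warm_water_volume.reshape(-1, 1)
    model = sm.tsa.UnobservedComponents(
        nino34,
        level="local linear trend",         # stochastic level + slope
        seasonal=12,                        # annual cycle
        cycle=True, stochastic_cycle=True,  # lower-frequency ENSO-like cycle
        exog=exog,
    )
    result = model.fit(disp=False)          # Kalman-filter based maximum likelihood
    forecast = result.get_forecast(steps=12, exog=exog[-12:])  # future exog is a placeholder
    print(forecast.predicted_mean[:3])
    ```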

  13. Shape selection in Landsat time series: A tool for monitoring forest dynamics

    Treesearch

    Gretchen G. Moisen; Mary C. Meyer; Todd A. Schroeder; Xiyue Liao; Karen G. Schleeweis; Elizabeth A. Freeman; Chris Toney

    2016-01-01

    We present a new methodology for fitting nonparametric shape-restricted regression splines to time series of Landsat imagery for the purpose of modeling, mapping, and monitoring annual forest disturbance dynamics over nearly three decades. For each pixel and spectral band or index of choice in temporal Landsat data, our method delivers a smoothed rendition of...

  14. A Strategy for Assessing the Impact of Time-Varying Family Risk Factors on High School Dropout

    ERIC Educational Resources Information Center

    Randolph, Karen A.; Fraser, Mark W.; Orthner, Dennis K.

    2006-01-01

    Human behavior is dynamic, influenced by changing situations over time. Yet the impact of the dynamic nature of important explanatory variables on outcomes has only recently begun to be estimated in developmental models. Using a risk factor perspective, this article demonstrates the potential benefits of regressing time-varying outcome measures on…

  15. Correlation and prediction of dynamic human isolated joint strength from lean body mass

    NASA Technical Reports Server (NTRS)

    Pandya, Abhilash K.; Hasson, Scott M.; Aldridge, Ann M.; Maida, James C.; Woolford, Barbara J.

    1992-01-01

    A relationship between a person's lean body mass and the maximum torque that can be produced with each isolated joint of the upper extremity was investigated. Maximum dynamic isolated joint torque (upper extremity) was collected on 14 subjects using a dynamometer multi-joint testing unit. These data were reduced to a table of coefficients of second-degree polynomials, computed using a least-squares regression method. All the coefficients were then organized into look-up tables, a compact and convenient storage/retrieval mechanism for the data set. Data from each joint, direction, and velocity were normalized with respect to that joint's average and merged into files (one for each curve for a particular joint). Regression was performed on each of these files to derive a table of normalized population curve coefficients for each joint axis, direction, and velocity. In addition, a regression table covering all upper-extremity joints was built, relating average torque to an individual's lean body mass. These two tables form the basis of the regression model, which allows the prediction of dynamic isolated joint torques from an individual's lean body mass.
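
    The look-up-table idea can be sketched as follows; the joint names, angles, torques and the lean-body-mass regression coefficients are hypothetical placeholders, not the measured data.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    lookup = {}   # (joint, direction, velocity deg/s) -> normalized 2nd-degree polynomial coefficients

    for key in [("elbow", "flexion", 30), ("elbow", "extension", 30)]:
        angle = rng.uniform(0, 120, 40)                             # degrees, hypothetical trials
        torque = 40 + 0.5 * angle - 0.004 * angle ** 2 + rng.normal(0, 2, angle.size)
        torque /= torque.mean()                                     # normalize by joint-average torque
        lookup[key] = np.polyfit(angle, torque, deg=2)              # second-degree least-squares fit

    def predict_torque(joint, direction, velocity, angle, lean_body_mass, slope=0.9, intercept=5.0):
        """Scale the normalized population curve by a subject's lean-body-mass regression
        (slope/intercept are hypothetical per-joint regression coefficients)."""
        average_torque = slope * lean_body_mass + intercept
        return np.polyval(lookup[(joint, direction, velocity)], angle) * average_torque

    print(predict_torque("elbow", "flexion", 30, angle=60.0, lean_body_mass=55.0))
    ```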

  16. Conditional Density Estimation with HMM Based Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang

    Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry a latent assumption that the probability density is Gaussian, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but to classification as well. We applied this model to denoising ECG data. The proposed method has the potential to be applied to other time series such as stock market return predictions.

  17. Structured Kernel Subspace Learning for Autonomous Robot Navigation.

    PubMed

    Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai

    2018-02-14

    This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to navigate safely in a dynamic environment due to challenges such as the varying quality and complexity of training data with unwanted noise. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.

  18. Data-driven discovery of partial differential equations.

    PubMed

    Rudy, Samuel H; Brunton, Steven L; Proctor, Joshua L; Kutz, J Nathan

    2017-04-01

    We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg-de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable.
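
    A compact sketch of the underlying sparse-regression idea, using sequentially thresholded least squares over a library of candidate terms on data generated from the heat equation u_t = 0.5 u_xx; the library, threshold and synthetic data are illustrative, and the published PDE-FIND method is considerably more elaborate.

    ```python
    import numpy as np

    # Synthetic data from the heat equation u_t = 0.5 * u_xx (two Fourier modes).
    nx, nt = 128, 200
    x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
    t = np.linspace(0, 1, nt)
    u = (np.exp(-0.5 * t)[:, None] * np.sin(x)[None, :]
         + 0.5 * np.exp(-2.0 * t)[:, None] * np.sin(2 * x)[None, :])

    dt, dx = t[1] - t[0], x[1] - x[0]
    u_t = np.gradient(u, dt, axis=0)
    u_x = np.gradient(u, dx, axis=1)
    u_xx = np.gradient(u_x, dx, axis=1)

    trim = (slice(2, -2), slice(2, -2))          # drop edges where one-sided differences are less accurate
    terms = {"1": np.ones_like(u), "u": u, "u^2": u ** 2,
             "u_x": u_x, "u_xx": u_xx, "u*u_x": u * u_x}
    library = np.column_stack([v[trim].ravel() for v in terms.values()])
    target = u_t[trim].ravel()

    def stlsq(A, b, threshold=0.05, iters=10):
        """Sequentially thresholded least squares: prune small coefficients, refit the rest."""
        coef = np.linalg.lstsq(A, b, rcond=None)[0]
        for _ in range(iters):
            small = np.abs(coef) < threshold
            coef[small] = 0.0
            if (~small).any():
                coef[~small] = np.linalg.lstsq(A[:, ~small], b, rcond=None)[0]
        return coef

    coef = stlsq(library, target)
    print({name: round(c, 3) for name, c in zip(terms, coef) if c != 0.0})  # expect ~0.5 on u_xx
    ```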

  19. Can longitudinal generalized estimating equation models distinguish network influence and homophily? An agent-based modeling approach to measurement characteristics.

    PubMed

    Sauser Zachrison, Kori; Iwashyna, Theodore J; Gebremariam, Achamyeleh; Hutchins, Meghan; Lee, Joyce M

    2016-12-28

    Connected individuals (or nodes) in a network are more likely to be similar than two randomly selected nodes due to homophily and/or network influence. Distinguishing between these two influences is an important goal in network analysis, and generalized estimating equation (GEE) analyses of longitudinal dyadic network data are an attractive approach. It is not known to what extent such regressions can accurately extract the underlying data-generating processes. Our primary objective is therefore to determine to what extent, and under what conditions, the GEE approach recreates the actual dynamics in an agent-based model. We generated simulated cohorts with pre-specified network characteristics and attachments in both static and dynamic networks, and we varied the presence of homophily and network influence. We then used statistical regression and examined the GEE model performance in each cohort to determine whether the model was able to detect the presence of homophily and network influence. In cohorts with both static and dynamic networks, we find that the GEE models have excellent sensitivity and reasonable specificity for determining the presence or absence of network influence, but little ability to distinguish whether or not homophily is present. The GEE models are a valuable tool to examine for the presence of network influence in longitudinal data, but are quite limited with respect to homophily.

  20. Numerical simulations on unsteady operation processes of N2O/HTPB hybrid rocket motor with/without diaphragm

    NASA Astrophysics Data System (ADS)

    Zhang, Shuai; Hu, Fan; Wang, Donghui; Okolo, Patrick N.; Zhang, Weihua

    2017-07-01

    Numerical simulations of processes within hybrid rocket motors have been conducted in the past, but most of them focused mainly on steady-state analysis. Solid fuel regression rate strongly depends on complicated physicochemical processes and internal fluid dynamic behavior within the rocket motor, which change with both space and time during operation and are therefore unsteady in character. In this paper, numerical simulations of the unsteady operational processes of an N2O/HTPB hybrid rocket motor with and without a diaphragm are conducted. A numerical model is established based on two-dimensional axisymmetric unsteady Navier-Stokes equations with turbulence, combustion, and coupled gas/solid phase formulations. A discrete phase model is used to simulate injection and vaporization of the liquid oxidizer. A dynamic mesh technique is applied to the non-uniform regression of the fuel grain, and results for the unsteady flow field, the variation of the regression rate distribution with time, the regression process of the burning surface, and the internal ballistics are all obtained. Due to the presence of eddy flow, the diaphragm increases the regression rate further downstream. Peak regression rates are observed close to flow reattachment regions; these peak values decrease gradually, and the peak position shifts further downstream as time advances. Motor performance is analyzed accordingly, and the case with the diaphragm shows increases in combustion efficiency and specific impulse efficiency of roughly 10%, and a ground thrust increase of 17.8%.

  1. Nonlinear System Identification for Aeroelastic Systems with Application to Experimental Data

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2008-01-01

    Representation and identification of a nonlinear aeroelastic pitch-plunge system as a model of the Nonlinear AutoRegressive, Moving Average eXogenous (NARMAX) class is considered. A nonlinear difference equation describing this aircraft model is derived theoretically and shown to be of the NARMAX form. Identification methods for NARMAX models are applied to aeroelastic dynamics and its properties demonstrated via continuous-time simulations of experimental conditions. Simulation results show that (1) the outputs of the NARMAX model closely match those generated using continuous-time methods, and (2) NARMAX identification methods applied to aeroelastic dynamics provide accurate discrete-time parameter estimates. Application of NARMAX identification to experimental pitch-plunge dynamics data gives a high percent fit for cross-validated data.

  2. Modeling the coupled return-spread high frequency dynamics of large tick assets

    NASA Astrophysics Data System (ADS)

    Curato, Gianbiagio; Lillo, Fabrizio

    2015-01-01

    Large tick assets, i.e. assets where one tick movement is a significant fraction of the price and the bid-ask spread is almost always equal to one tick, display dynamics in which price changes and spread are strongly coupled. We present an approach based on the hidden Markov model, also known in econometrics as the Markov switching model, for the dynamics of price changes, where the latent Markov process is described by the transitions between spreads. We then use a finite Markov mixture of logit regressions on past squared price changes to describe temporal dependencies in the dynamics of price changes. The model can thus be seen as a double chain Markov model. We show that the model describes the shape of the price change distribution at different time scales, volatility clustering, and the anomalous decrease of kurtosis. We calibrate our models on Nasdaq stocks and show that the model reproduces remarkably well the statistical properties of real data.

  3. Two models for identification and predicting behaviour of an induction motor system

    NASA Astrophysics Data System (ADS)

    Kuo, Chien-Hsun

    2018-01-01

    System identification, or modelling, is the process of building mathematical models of dynamical systems based on available input and output data from the systems. This paper introduces system identification using ARX (Auto-Regressive with eXogenous input) and ARMAX (Auto-Regressive Moving Average with eXogenous input) models. Through the identified system model, the predicted output can be compared with the measured one to help prevent motor faults from developing into a catastrophic machine failure and to avoid unnecessary costs and delays caused by unscheduled repairs. An induction motor system is used as an example. Numerical and experimental results are shown for the identified induction motor system.
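
    A minimal least-squares ARX identification sketch; the simulated second-order system and the assumed model orders below stand in for the paper's induction motor data.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 1000
    u = rng.normal(size=n)                       # excitation input
    y = np.zeros(n)
    for k in range(2, n):                        # "true" system: ARX(2, 2) with noise
        y[k] = (1.5 * y[k - 1] - 0.7 * y[k - 2]
                + 1.0 * u[k - 1] + 0.5 * u[k - 2] + 0.05 * rng.normal())

    na, nb = 2, 2                                # assumed model orders
    start = max(na, nb)
    rows = [np.r_[y[k - na:k][::-1], u[k - nb:k][::-1]] for k in range(start, n)]
    Phi = np.array(rows)                         # regressors [y(k-1), y(k-2), u(k-1), u(k-2)]
    theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)
    print("estimated [a1, a2, b1, b2]:", np.round(theta, 3))   # expect ~[1.5, -0.7, 1.0, 0.5]
    ```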

  4. Bayesian quantile regression-based partially linear mixed-effects joint models for longitudinal data with multiple features.

    PubMed

    Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara

    2017-01-01

    In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates due to outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.

  5. Simulation of land use change in the Three Gorges Reservoir area based on CART-CA

    NASA Astrophysics Data System (ADS)

    Yuan, Min

    2018-05-01

    This study proposes a new method to simulate spatiotemporally complex, multiple land uses using a classification and regression tree (CART)-based cellular automata (CA) model. In this model, the CART algorithm is used to calculate land-class conversion probabilities, which are combined with neighborhood and random factors to extract the cellular transition rules. In the simulation of land-use dynamics in the Three Gorges Reservoir area from 2000 to 2010, the overall Kappa coefficient is 0.8014 and the overall accuracy is 0.8821, and the simulation results are satisfactory.

  6. Forecasting influenza-like illness dynamics for military populations using neural networks and social media

    DOE PAGES

    Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine; ...

    2017-12-15

    This work is the first to take advantage of recurrent neural networks to predict influenza-like-illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on time-series analysis of historical ILI data [1, 2] and state-of-the-art machine learning models [3, 4], we build and evaluate the predictive power of Long Short-Term Memory (LSTM) architectures capable of nowcasting (predicting in "real time") and forecasting (predicting the future) ILI dynamics in the 2011-2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, stylistic and syntactic patterns, emotions and opinions, and communication behavior. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of state-of-the-art regression models with neural networks. Finally, we combine ILI and social media signals to build joint neural network models for ILI dynamics prediction. Unlike the majority of existing work, we focus on developing models for local rather than national ILI surveillance [1], and for military rather than general populations [3], in 26 U.S. and six international locations. Our approach demonstrates several advantages: (a) Neural network models learned from social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than syntactic and stylistic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance than models learned from ILI historical data; thus, signals from social media can potentially be used to accurately forecast ILI dynamics for regions where ILI historical data are not available. (d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on ILI historical data, which adds to the great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent models, e.g., U.S. only. (f) Prediction results vary significantly across geolocations depending on the amount of social media data available and ILI activity patterns.

  8. Dynamic elementary mode modelling of non-steady state flux data.

    PubMed

    Folch-Fortuny, Abel; Teusink, Bas; Hoefsloot, Huub C J; Smilde, Age K; Ferrer, Alberto

    2018-06-18

    A novel framework is proposed to analyse metabolic fluxes in non-steady-state conditions, based on the new concept of the dynamic elementary mode (dynEM): an elementary mode that is activated partially depending on the time point of the experiment. Two methods are introduced here: dynamic elementary mode analysis (dynEMA) and dynamic elementary mode regression discriminant analysis (dynEMR-DA). The former is an extension of the recently proposed principal elementary mode analysis (PEMA) method from steady-state to non-steady-state scenarios. The latter is a discriminant model that identifies which dynEMs behave strongly differently depending on the experimental conditions. Two case studies of Saccharomyces cerevisiae, with fluxes derived from simulated and real concentration data sets, are presented to highlight the benefits of this dynamic modelling. This methodology makes it possible to analyse metabolic fluxes at early stages with the aim of i) creating reduced dynamic models of flux data, ii) combining many experiments in a single biologically meaningful model, and iii) identifying the metabolic pathways that drive the organism from one state to another when the environmental conditions change.

  9. Forest canopy growth dynamic modeling based on remote sensing products and meteorological data in Daxing'anling of Northeast China

    NASA Astrophysics Data System (ADS)

    Wu, Qiaoli; Song, Jinling; Wang, Jindi; Xiao, Zhiqiang

    2014-11-01

    Leaf Area Index (LAI) is an important biophysical variable for vegetation. Compared with vegetation indexes such as NDVI and EVI, LAI is more capable of monitoring forest canopy growth quantitatively. GLASS LAI is a spatially complete and temporally continuous product derived from AVHRR and MODIS reflectance data. In this paper, we present an approach to building dynamic LAI growth models for young and mature Larix gmelinii forest in north Daxing'anling in Inner Mongolia of China using the Dynamic Harmonic Regression (DHR) model and the Double Logistic (D-L) model, respectively, based on time series extracted from multi-temporal GLASS LAI data. We also used the dynamic threshold method to extract the key phenological phases of Larix gmelinii forest from the simulated time series. Then, through analysis of the relationship between phenological phases and meteorological factors, we found that the annual peak LAI and the annual maximum temperature are well correlated. The results indicate that this dynamic canopy growth model is effective in predicting forest canopy LAI growth and extracting forest canopy LAI growth dynamics.

  10. Climate Induced Spillover and Implications for U.S. Security.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tidwell, Vincent C.; Naugle, Asmeret Bier; Backus, George A.

    Developing nations incur a greater risk to climate change than the developed world due to poorly managed human/natural resources, unreliable infrastructure and brittle governing/economic institutions. These vulnerabilities often give rise to a climate-induced “domino effect” of reduced natural resource production, leading to economic hardship, social unrest, and humanitarian crises. Integral to this cascading set of events is increased human migration, leading to the “spillover” of impacts to adjoining areas with even broader impact on global markets and security. Given the complexity of factors influencing human migration and the resultant spillover effect, quantitative tools are needed to aid policy analysis. Toward this need, a series of migration models were developed along with a system dynamics model of the spillover effect. The migration decision models were structured according to two interacting paths, one that captured long-term “chronic” impacts related to protracted deteriorating quality of life and a second focused on short-term “acute” impacts of disaster and/or conflict. Chronic migration dynamics were modeled for two different cases: one that looked only at emigration but at a national level for the entire world, and a second that looked at both emigration and immigration but focused on a single nation. Model parameterization for each of the migration models was accomplished through regression analysis using decadal data spanning the period 1960-2010. A similar approach was taken with acute migration dynamics, except the regression analysis utilized annual data sets limited to a shorter time horizon (2001-2013). The system dynamics spillover model was organized around two broad modules, one simulating the decision dynamics of migration and a second that treats the changing environmental conditions that influence the migration decision. The environmental module informs the migration decision, endogenously simulating interactions/changes in the economy, labor, population, conflict, water, and food. A regional model focused on Mali in western Africa was used as a test case to demonstrate the efficacy of the model.

  11. Accounting for exhaust gas transport dynamics in instantaneous emission models via smooth transition regression.

    PubMed

    Kamarianakis, Yiannis; Gao, H Oliver

    2010-02-15

    Collecting and analyzing high-frequency emission measurements has become very common during the past decade, as significantly more information on formation conditions can be collected than from regulated bag measurements. A challenging issue for researchers is the accurate time-alignment between tailpipe measurements and engine operating variables. An alignment procedure should take into account both the reaction time of the analyzers and the dynamics of gas transport in the exhaust and measurement systems. This paper discusses a statistical modeling framework that compensates for variable exhaust transport delay while relating tailpipe measurements to engine operating covariates. Specifically, it is shown that some variants of the smooth transition regression model allow for transport delays that vary smoothly as functions of the exhaust flow rate. These functions are characterized by a pair of coefficients that can be estimated via a least-squares procedure. The proposed models can be adapted to encompass inherent nonlinearities that were implicit in previous instantaneous emissions modeling efforts. This article describes the methodology and presents an illustrative application using data collected from a diesel bus under real-world driving conditions.
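
    A hedged sketch of a smooth transition regression: emissions switch smoothly between two linear regimes as a logistic function of exhaust flow rate, and all coefficients, including the transition pair (gamma, c), are estimated by nonlinear least squares. The data and regime structure are illustrative assumptions, not the paper's specification (which lets the transport delay itself vary with flow rate).

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(7)
    n = 1500
    flow = rng.uniform(0.5, 5.0, n)              # exhaust flow rate
    load = rng.uniform(0.0, 1.0, n)              # engine load covariate

    def transition(flow, gamma, c):
        """Logistic transition function between the two regimes."""
        return 1.0 / (1.0 + np.exp(-gamma * (flow - c)))

    true_G = transition(flow, 3.0, 2.5)
    emission = ((1.0 + 2.0 * load) * (1 - true_G) + (4.0 + 0.5 * load) * true_G
                + rng.normal(0, 0.1, n))

    def residuals(p):
        a0, a1, b0, b1, gamma, c = p
        G = transition(flow, gamma, c)
        model = (a0 + a1 * load) * (1 - G) + (b0 + b1 * load) * G
        return model - emission

    fit = least_squares(residuals, x0=[0.5, 1.0, 3.0, 1.0, 1.0, 2.0])
    print("estimated [a0, a1, b0, b1, gamma, c]:", np.round(fit.x, 2))  # should approach the simulated values
    ```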

  12. The validation of a human force model to predict dynamic forces resulting from multi-joint motions

    NASA Technical Reports Server (NTRS)

    Pandya, Abhilash K.; Maida, James C.; Aldridge, Ann M.; Hasson, Scott M.; Woolford, Barbara J.

    1992-01-01

    The development and validation of a dynamic strength model for humans is examined. The model is based on empirical data. The shoulder, elbow, and wrist joints were characterized in terms of maximum isolated torque versus position and velocity in all rotational planes. These data were reduced, by a least-squares regression technique, into a table of single-variable second-degree polynomial equations determining torque as a function of position and velocity. The isolated joint torque equations were then used to compute forces resulting from a composite motion, in this case a ratchet wrench push-and-pull operation. A comparison of the predicted results of the model with the actual measured values for the composite motion indicates that forces derived from a composite motion of joints (ratcheting) can be predicted from isolated joint measures. Calculated t values comparing model and measured values for 14 subjects were well within statistically acceptable limits, and regression analysis revealed the coefficient of variation between actual and measured values to be between 0.72 and 0.80.

  13. Spatiotemporal variability of urban growth factors: A global and local perspective on the megacity of Mumbai

    NASA Astrophysics Data System (ADS)

    Shafizadeh-Moghadam, Hossein; Helbich, Marco

    2015-03-01

    The rapid growth of megacities requires special attention among urban planners worldwide, and particularly in Mumbai, India, where growth is very pronounced. To cope with the planning challenges this will bring, developing a retrospective understanding of urban land-use dynamics and the underlying driving-forces behind urban growth is a key prerequisite. This research uses regression-based land-use change models - and in particular non-spatial logistic regression models (LR) and auto-logistic regression models (ALR) - for the Mumbai region over the period 1973-2010, in order to determine the drivers behind spatiotemporal urban expansion. Both global models are complemented by a local, spatial model, the so-called geographically weighted logistic regression (GWLR) model, one that explicitly permits variations in driving-forces across space. The study comes to two main conclusions. First, both global models suggest similar driving-forces behind urban growth over time, revealing that LRs and ALRs result in estimated coefficients with comparable magnitudes. Second, all the local coefficients show distinctive temporal and spatial variations. It is therefore concluded that GWLR aids our understanding of urban growth processes, and so can assist context-related planning and policymaking activities when seeking to secure a sustainable urban future.

  14. Estimating carbon and showing impacts of drought using satellite data in regression-tree models

    USGS Publications Warehouse

    Boyte, Stephen; Wylie, Bruce K.; Howard, Danny; Dahal, Devendra; Gilmanov, Tagir G.

    2018-01-01

    Integrating spatially explicit biogeophysical and remotely sensed data into regression-tree models enables the spatial extrapolation of training data over large geographic spaces, allowing a better understanding of broad-scale ecosystem processes. The current study presents annual gross primary production (GPP) and annual ecosystem respiration (RE) for 2000–2013 in several short-statured vegetation types using carbon flux data from towers that are located strategically across the conterminous United States (CONUS). We calculate carbon fluxes (annual net ecosystem production [NEP]) for each year in our study period, which includes 2012 when drought and higher-than-normal temperatures influence vegetation productivity in large parts of the study area. We present and analyse carbon flux dynamics in the CONUS to better understand how drought affects GPP, RE, and NEP. Model accuracy metrics show strong correlation coefficients (r ≥ 94%) between training and estimated data for both GPP and RE. Overall, average annual GPP, RE, and NEP are relatively constant throughout the study period except during 2012 when almost 60% less carbon is sequestered than normal. These results allow us to conclude that this modelling method effectively estimates carbon dynamics through time and allows the exploration of impacts of meteorological anomalies and vegetation types on carbon dynamics.

  15. Mapping and spatial-temporal modeling of Bromus tectorum invasion in central Utah

    NASA Astrophysics Data System (ADS)

    Jin, Zhenyu

    Cheatgrass, or Downy Brome, is an exotic winter annual weed native to the Mediterranean region. Since its introduction to the U.S., it has become a significant weed and aggressive invader of sagebrush, pinyon-juniper, and other shrub communities, where it can completely out-compete native grasses and shrubs. In this research, remotely sensed data combined with field-collected data are used to investigate the distribution of cheatgrass in Central Utah, to characterize the trend of the NDVI time-series of cheatgrass, and to construct a spatially explicit population-based model to simulate the spatial-temporal dynamics of cheatgrass. This research proposes a method for mapping the canopy closure of invasive species using remotely sensed data acquired at different dates. Different invasive species have their own distinct phenologies, and satellite images from different dates can be used to capture them. The results of cheatgrass abundance prediction have a good fit with the field data for both linear regression and regression tree models, although the regression tree model has better performance than the linear regression model. To characterize the trend of the NDVI time-series of cheatgrass, a novel smoothing algorithm named RMMEH is presented in this research to overcome some drawbacks of many other algorithms. By comparing the performance of RMMEH in smoothing a 16-day composite of the MODIS NDVI time-series with that of two other methods, the 4253EH, twice and the MVI, we have found that RMMEH not only keeps the original valid NDVI points, but also effectively removes spurious spikes. The reconstructed NDVI time-series of different land covers are of higher quality and have smoother temporal trends. To simulate the spatial-temporal dynamics of cheatgrass, a spatially explicit population-based model is built using remotely sensed data. The comparison between the model output and the ground truth of cheatgrass closure demonstrates that the model can successfully simulate the spatial-temporal dynamics of cheatgrass in a simple cheatgrass-dominant environment. The simulation of the functional response to different prescribed fire rates also shows that this model is helpful for answering management questions like, "What are the effects of prescribed fire on invasive species?" It demonstrates that a medium fire rate of 10% can successfully prevent cheatgrass invasion.

  16. Bioinactivation: Software for modelling dynamic microbial inactivation.

    PubMed

    Garre, Alberto; Fernández, Pablo S; Lindqvist, Roland; Egea, Jose A

    2017-03-01

    This contribution presents the bioinactivation software, which implements functions for the modelling of isothermal and non-isothermal microbial inactivation. This software offers features such as user-friendliness, modelling of dynamic conditions, possibility to choose the fitting algorithm and generation of prediction intervals. The software is offered in two different formats: Bioinactivation core and Bioinactivation SE. Bioinactivation core is a package for the R programming language, which includes features for the generation of predictions and for the fitting of models to inactivation experiments using non-linear regression or a Markov Chain Monte Carlo algorithm (MCMC). The calculations are based on inactivation models common in academia and industry (Bigelow, Peleg, Mafart and Geeraerd). Bioinactivation SE supplies a user-friendly interface to selected functions of Bioinactivation core, namely the model fitting of non-isothermal experiments and the generation of prediction intervals. The capabilities of bioinactivation are presented in this paper through a case study, modelling the non-isothermal inactivation of Bacillus sporothermodurans. This study has provided a full characterization of the response of the bacteria to dynamic temperature conditions, including confidence intervals for the model parameters and a prediction interval of the survivor curve. We conclude that the MCMC algorithm produces a better characterization of the biological uncertainty and variability than non-linear regression. The bioinactivation software can be relevant to the food and pharmaceutical industry, as well as to regulatory agencies, as part of a (quantitative) microbial risk assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Modeling temporal and spatial variability of traffic-related air pollution: Hourly land use regression models for black carbon

    NASA Astrophysics Data System (ADS)

    Dons, Evi; Van Poppel, Martine; Kochan, Bruno; Wets, Geert; Int Panis, Luc

    2013-08-01

    Land use regression (LUR) modeling is a statistical technique used to determine exposure to air pollutants in epidemiological studies. Time-activity diaries can be combined with LUR models, enabling detailed exposure estimation and limiting exposure misclassification, at both shorter and longer time lags. In this study, the traffic-related air pollutant black carbon was measured with μ-aethalometers on a 5-min time base at 63 locations in Flanders, Belgium. The measurements show that hourly concentrations vary between different locations, but also over the day. Furthermore, the diurnal pattern is different for street and background locations. This suggests that annual LUR models are not sufficient to capture all the variation. Hourly LUR models for black carbon are developed using different strategies: by means of dummy variables, with dynamic dependent variables and/or with dynamic and static independent variables. The LUR model with 48 dummies (weekday hours and weekend hours) does not perform as well as the annual model (explained variance of 0.44, compared with 0.77 for the annual model). The dataset with hourly concentrations of black carbon can be used to recalibrate the annual model, but this results in many of the original explanatory variables losing their statistical significance and certain variables having the wrong direction of effect. Building new independent hourly models, with static or dynamic covariates, is proposed as the best solution to these issues. R² values for hourly LUR models are mostly smaller than the R² of the annual model, ranging from 0.07 to 0.8. Between 6 a.m. and 10 p.m. on weekdays, the R² approximates that of the annual model. Even though models for consecutive hours are developed independently, similar variables turn out to be significant. Using dynamic covariates instead of static covariates, i.e. hourly traffic intensities and hourly population densities, did not significantly improve the models' performance.
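    To make the modelling strategies concrete, the sketch below contrasts a single pooled regression with 48 hour-of-week dummies against independent hourly models, using a small synthetic black carbon dataset; the variable names and data are invented for illustration and are not from the study.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical measurements: one row per site-hour with a static predictor
# (traffic load in a buffer) and the hourly black carbon concentration.
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "traffic": rng.uniform(0, 1, n),
    "hour": rng.integers(0, 24, n),
    "weekend": rng.integers(0, 2, n),
})
df["bc"] = (1.0 + 2.5 * df["traffic"]
            + 0.4 * np.sin(2 * np.pi * df["hour"] / 24)
            - 0.3 * df["weekend"]
            + rng.normal(0, 0.3, n))

# Strategy 1: one pooled model with 48 time dummies (24 weekday + 24 weekend hours).
df["slot"] = df["hour"].astype(str) + "_" + df["weekend"].astype(str)
X_pooled = pd.concat([df[["traffic"]],
                      pd.get_dummies(df["slot"], drop_first=True, dtype=float)], axis=1)
pooled = LinearRegression().fit(X_pooled, df["bc"])
print("pooled model R^2:", round(pooled.score(X_pooled, df["bc"]), 2))

# Strategy 2: an independent model for each hour of the day.
for hour, sub in df.groupby("hour"):
    model = LinearRegression().fit(sub[["traffic", "weekend"]], sub["bc"])
    if hour in (8, 17, 23):  # show a few hours only
        print(f"hour {hour:02d} model R^2:",
              round(model.score(sub[["traffic", "weekend"]], sub["bc"]), 2))
```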

  18. Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick; Klein, Vladislav

    2011-01-01

    Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.

  19. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression.

    PubMed

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-04-08

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects, with global linear regression models (LM) for modeling fire risk at the city scale. The results show that the road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicates that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are clustered only in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables, while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus, governments can use the results to manage fire safety at the city scale.

  20. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression

    PubMed Central

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-01-01

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects, with global linear regression models (LM) for modeling fire risk at the city scale. The results show that the road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicates that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are clustered only in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables, while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus, governments can use the results to manage fire safety at the city scale. PMID:28397745
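    As a minimal illustration of the core GWR idea (not the GTWR specification used in the study), the sketch below fits a locally weighted least-squares regression at each location with a Gaussian distance kernel; the grid, driving factors and fire counts are synthetic placeholders.

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Minimal geographically weighted regression: at each location, fit a
    weighted least-squares model with Gaussian kernel weights that decay with
    distance from that location. Returns one coefficient vector per location."""
    X1 = np.column_stack([np.ones(len(X)), X])  # add an intercept column
    betas = np.empty((len(coords), X1.shape[1]))
    for i in range(len(coords)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)
        XtW = X1.T * w                           # equivalent to X1.T @ diag(w)
        betas[i] = np.linalg.solve(XtW @ X1, XtW @ y)
    return betas

# Synthetic city grid: [road_density, enterprise_density] per cell, with the
# road-density effect on fire counts strengthening from west to east.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(200, 2))
X = rng.uniform(0, 1, size=(200, 2))
y = 1.0 + (0.5 + 0.2 * coords[:, 0]) * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.1, 200)

local = gwr_coefficients(coords, X, y, bandwidth=2.0)
west, east = coords[:, 0].argmin(), coords[:, 0].argmax()
print("local road-density coefficient, west vs east:",
      round(float(local[west, 1]), 2), round(float(local[east, 1]), 2))
```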

  1. A model of the human in a cognitive prediction task.

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1973-01-01

    The human decision maker's behavior when predicting future states of discrete linear dynamic systems driven by zero-mean Gaussian processes is modeled. The task is on a slow enough time scale that physiological constraints are insignificant compared with cognitive limitations. The model is basically a linear regression system identifier with a limited memory and noisy observations. Experimental data are presented and compared to the model.

  2. Modeling the dynamics of urban growth using multinomial logistic regression: a case study of Jiayu County, Hubei Province, China

    NASA Astrophysics Data System (ADS)

    Nong, Yu; Du, Qingyun; Wang, Kun; Miao, Lei; Zhang, Weiwei

    2008-10-01

    Urban growth modeling, one of the most important aspects of land use and land cover change study, has attracted substantial attention because it helps to explain the mechanisms of land use change and thus informs relevant policy making. This study applied multinomial logistic regression to model urban growth in Jiayu County of Hubei Province, China, to discover the relationship between urban growth and its driving forces, with biophysical and socio-economic factors selected as independent variables. This type of regression is similar to binary logistic regression, but it is more general because the dependent variable is not restricted to two categories, as it was in previous studies. The multinomial model can simulate the competition between multiple land use types: urban land, bare land, cultivated land and orchard land. Taking urban land as the reference category, parameters were estimated and interpreted as odds ratios. A probability map generated from the model predicts where urban growth will occur.
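    A minimal sketch of the modelling step described above, using statsmodels' multinomial logit with the first category treated as the reference; the driver variables and land use labels are synthetic placeholders, not the Jiayu County data.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical cell-level drivers (e.g., slope, distance to road, distance to
# town) and land use outcomes coded 0=urban, 1=bare, 2=cultivated, 3=orchard.
rng = np.random.default_rng(42)
drivers = rng.normal(size=(1000, 3))
scores = drivers @ rng.normal(scale=0.8, size=(3, 4))
probs_true = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
labels = np.array([rng.choice(4, p=p) for p in probs_true])

# Multinomial logit with category 0 (urban) as the reference, so each fitted
# coefficient is a log-odds of that land use relative to urban; exponentiating
# gives odds ratios.
exog = sm.add_constant(drivers)
fit = sm.MNLogit(labels, exog).fit(disp=False)
print(np.exp(fit.params).round(2))    # one column per non-reference category

# Per-cell predicted probabilities; mapping the urban column over the study
# area yields an urban-growth probability surface.
probs = fit.predict(exog)
print(probs[:3].round(3))
```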

  3. Improving precision of glomerular filtration rate estimating model by ensemble learning.

    PubMed

    Liu, Xun; Li, Ningshan; Lv, Linsheng; Fu, Yongmei; Cheng, Cailian; Wang, Caixia; Ye, Yuqiu; Li, Shaomin; Lou, Tanqi

    2017-11-09

    Accurate assessment of kidney function is clinically important, but estimates of glomerular filtration rate (GFR) by regression are imprecise. We hypothesized that ensemble learning could improve precision. A total of 1419 participants were enrolled, with 1002 in the development dataset and 417 in the external validation dataset. GFR was independently estimated from age, sex and serum creatinine using an artificial neural network (ANN), support vector machine (SVM), regression, and ensemble learning. GFR was measured by 99mTc-DTPA renal dynamic imaging calibrated with dual plasma sample 99mTc-DTPA GFR. Mean measured GFRs were 70.0 ml/min/1.73 m² in the developmental and 53.4 ml/min/1.73 m² in the external validation cohorts. In the external validation cohort, precision was better in the ensemble model of the ANN, SVM and regression equation (IQR = 13.5 ml/min/1.73 m²) than in the new regression model (IQR = 14.0 ml/min/1.73 m², P < 0.001). The precision of ensemble learning was the best of the three models, but the models had similar bias and accuracy. The median difference ranged from 2.3 to 3.7 ml/min/1.73 m², 30% accuracy ranged from 73.1 to 76.0%, and P was > 0.05 for all comparisons of the new regression equation and the other new models. An ensemble learning model including three variables, the average ANN, SVM, and regression equation values, was more precise than the new regression model. A more complex ensemble learning strategy may further improve GFR estimates.
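    The following sketch shows the ensemble construction in miniature: three regressors are fitted and their predictions averaged, and precision is summarised as the IQR of the prediction errors. The data are a synthetic stand-in and the model settings are illustrative assumptions, not the study's fitted equations.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for (age, sex, serum creatinine) -> measured GFR.
X, y = make_regression(n_samples=600, n_features=3, noise=10.0, random_state=0)
y = 50 + y / 4   # shift/scale into a GFR-like range (ml/min/1.73 m^2)
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "regression": LinearRegression(),
    "svm": make_pipeline(StandardScaler(), SVR(C=100.0)),
    "ann": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                                      random_state=0)),
}
preds = {name: m.fit(X_dev, y_dev).predict(X_val) for name, m in models.items()}

# Ensemble estimate = simple average of the three model outputs, mirroring the
# "average ANN, SVM, and regression equation values" construction above.
preds["ensemble"] = np.column_stack(
    [preds["regression"], preds["svm"], preds["ann"]]).mean(axis=1)

def error_iqr(p):
    """Precision summarised as the interquartile range of prediction errors."""
    q75, q25 = np.percentile(p - y_val, [75, 25])
    return q75 - q25

for name, p in preds.items():
    print(f"{name:10s} IQR = {error_iqr(p):.1f}")
```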

  4. A regression-based 3-D shoulder rhythm.

    PubMed

    Xu, Xu; Lin, Jia-hua; McGorry, Raymond W

    2014-03-21

    In biomechanical modeling of the shoulder, it is important to know the orientation of each bone in the shoulder girdle when estimating the loads on each musculoskeletal element. However, because of the soft tissue overlying the bones, it is difficult to accurately derive the orientation of the clavicle and scapula using surface markers during dynamic movement. The purpose of this study is to develop two regression models which predict the orientation of the clavicle and the scapula. The first regression model uses humerus orientation and individual factors such as age, gender, and anthropometry data as the predictors. The second regression model includes only the humerus orientation as the predictor. Thirty-eight participants performed 118 static postures covering the volume of the right hand reach. The orientations of the thorax, clavicle, scapula and humerus were measured with a motion tracking system. Regression analysis was performed on the Euler angles decomposed from the orientation of each bone from 26 randomly selected participants. The regression models were then validated with the remaining 12 participants. The results indicate that for the first model, the r² of the predicted orientation of the clavicle and the scapula ranged between 0.31 and 0.65, and the RMSE obtained from the validation dataset ranged from 6.92° to 10.39°. For the second model, the r² ranged between 0.19 and 0.57, and the RMSE obtained from the validation dataset ranged from 6.62° to 11.13°. The derived regression-based shoulder rhythm could be useful in future biomechanical modeling of the shoulder. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Carbon emissions risk map from deforestation in the tropical Amazon

    NASA Astrophysics Data System (ADS)

    Ometto, J.; Soler, L. S.; Assis, T. D.; Oliveira, P. V.; Aguiar, A. P.

    2011-12-01

    This work aims to estimate the carbon emissions from tropical deforestation in the Brazilian Amazon associated with a risk assessment of future land use change. The emissions are estimated by incorporating temporal deforestation dynamics, accounting for the biophysical and socioeconomic heterogeneity of the region, as well as secondary forest growth dynamics in abandoned areas. The land cover change model that supported the risk assessment of deforestation was based on linear regressions. This method takes into account the spatial heterogeneity of deforestation, as the spatial variables adopted to fit the final regression model comprise environmental aspects, economic attractiveness, accessibility and land tenure structure. After fitting suitable regression models for each land cover category, the potential of each cell to be deforested in the near future (at 25x25 km and 5x5 km resolution) was used to calculate the risk assessment of land cover change. The carbon emissions model combines high-resolution new forest clear-cut mapping with four alternative sources of spatial information on biomass distribution for different vegetation types. The risk assessment map of CO2 emissions was obtained by crossing the simulation results of the historical land cover changes with a map of aboveground biomass contained in the remaining forest. This final map represents the risk of CO2 emissions at 25x25 km and 5x5 km resolution until 2020, under a scenario with a carbon emission reduction target.

  6. Data-driven discovery of partial differential equations

    PubMed Central

    Rudy, Samuel H.; Brunton, Steven L.; Proctor, Joshua L.; Kutz, J. Nathan

    2017-01-01

    We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg–de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable. PMID:28508044
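    A compact sketch of the sparse-regression idea described above (a simplified sequentially thresholded least-squares selection over a library of candidate terms, not the authors' exact algorithm or Pareto analysis): the data are synthesized from the diffusion equation, and the regression should retain only the u_xx term.

```python
import numpy as np

# Synthetic data from the diffusion (heat) equation u_t = 0.5 * u_xx.
x = np.linspace(0, 2 * np.pi, 128)
t = np.linspace(0, 2, 101)
X, T = np.meshgrid(x, t, indexing="ij")
u = np.exp(-0.5 * T) * np.sin(X)

# Numerical derivatives and a small library of candidate terms.
u_t = np.gradient(u, t, axis=1)
u_x = np.gradient(u, x, axis=0)
u_xx = np.gradient(u_x, x, axis=0)
library = np.column_stack([c.ravel() for c in (u_x, u_xx, u * u_x, u ** 2)])
names = ["u_x", "u_xx", "u*u_x", "u^2"]
target = u_t.ravel()

def stls(A, b, threshold=0.05, iterations=10):
    """Sequentially thresholded least squares: repeatedly zero out library
    terms with small coefficients and refit on the survivors."""
    coef = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iterations):
        small = np.abs(coef) < threshold
        coef[small] = 0.0
        keep = ~small
        if keep.any():
            coef[keep] = np.linalg.lstsq(A[:, keep], b, rcond=None)[0]
    return coef

coef = stls(library, target)
print({n: round(float(c), 3) for n, c in zip(names, coef)})  # expect ~0.5 on u_xx only
```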

  7. Unsteady hovering wake parameters identified from dynamic model tests, part 1

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Crews, S. T.

    1977-01-01

    The development of a 4-bladed model rotor is reported that can be excited with a simple eccentric mechanism in progressing and regressing modes with either harmonic or transient inputs. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch stirring transient inputs, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting the perturbation models from transients are demonstrated through these combined analytical and experimental studies.

  8. A dynamic factor model of the evaluation of the financial crisis in Turkey.

    PubMed

    Sezgin, F; Kinay, B

    2010-01-01

    Factor analysis has been widely used in economics and finance in situations where a relatively large number of variables are believed to be driven by a few common causes of variation. Dynamic factor analysis (DFA), which is a combination of factor and time series analysis, involves autocorrelation matrices calculated from multivariate time series. Dynamic factor models have traditionally been used to construct economic indicators and for macroeconomic analysis, business cycle analysis and forecasting. In recent years, dynamic factor models have become more popular in empirical macroeconomics. They have advantages over other methods in various respects. Factor models can, for instance, cope with many variables without running into the scarce-degrees-of-freedom problems often faced in regression-based analysis. In this study, a model which determines the effect of the global crisis on Turkey is proposed. The main aim of the paper is to analyze how several macroeconomic quantities change ahead of the crisis and to determine whether a crisis can be forecast.

  9. Modeling Stationary Lithium-Ion Batteries for Optimization and Predictive Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri A; Shi, Ying; Christensen, Dane T

    Accurately modeling stationary battery storage behavior is crucial to understand and predict its limitations in demand-side management scenarios. In this paper, a lithium-ion battery model was derived to estimate lifetime and state-of-charge for building-integrated use cases. The proposed battery model aims to balance speed and accuracy when modeling battery behavior for real-time predictive control and optimization. In order to achieve these goals, a mixed modeling approach was taken, which incorporates regression fits to experimental data and an equivalent circuit to model battery behavior. A comparison of the proposed battery model output to actual data from the manufacturer validates the modeling approach taken in the paper. Additionally, a dynamic test case demonstrates the effects of using regression models to represent internal resistance and capacity fading.

  10. Modeling Stationary Lithium-Ion Batteries for Optimization and Predictive Control: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raszmann, Emma; Baker, Kyri; Shi, Ying

    Accurately modeling stationary battery storage behavior is crucial to understand and predict its limitations in demand-side management scenarios. In this paper, a lithium-ion battery model was derived to estimate lifetime and state-of-charge for building-integrated use cases. The proposed battery model aims to balance speed and accuracy when modeling battery behavior for real-time predictive control and optimization. In order to achieve these goals, a mixed modeling approach was taken, which incorporates regression fits to experimental data and an equivalent circuit to model battery behavior. A comparison of the proposed battery model output to actual data from the manufacturer validates the modeling approach taken in the paper. Additionally, a dynamic test case demonstrates the effects of using regression models to represent internal resistance and capacity fading.

  11. Demand-supply dynamics in tourism systems: A spatio-temporal GIS analysis. The Alberta ski industry case study

    NASA Astrophysics Data System (ADS)

    Bertazzon, Stefania

    The present research focuses on the interaction of supply and demand of down-hill ski tourism in the province of Alberta. The main hypothesis is that the demand for skiing depends on the socio-economic and demographic characteristics of the population living in the province and outside it. A second, consequent hypothesis is that the development of ski resorts (supply) is a response to the demand for skiing. From the latter derives the hypothesis of a dynamic interaction between supply (ski resorts) and demand (skiers). Such interaction occurs in space, within a range determined by physical distance and the means available to overcome it. The above hypotheses implicitly define interactions that take place in space and evolve over time. The hypotheses are tested by temporal, spatial, and spatio-temporal regression models, using the best available data and the latest commercially available software. The main purpose of this research is to explore analytical techniques to model spatial, temporal, and spatio-temporal dynamics in the context of regional science. The completion of the present research has produced more significant contributions than was originally expected. Many of the unexpected contributions resulted from theoretical and applied needs arising from the application of spatial regression models. Spatial regression models are a new and largely under-applied technique. The models are fairly complex and a considerable amount of preparatory work is needed, prior to their specification and estimation. Most of this work is specific to the field of application. The originality of the solutions devised is increased by the lack of applications in the field of tourism. The scarcity of applications in other fields adds to their value for other applications. The estimation of spatio-temporal models has been only partially attained in the present research. This apparent limitation is due to the novelty and complexity of the analytical methods applied. This opens new directions for further work in the field of spatial analysis, in conjunction with the development of specific software.

  12. Poverty dynamics, poverty thresholds and mortality: An age-stage Markovian model

    PubMed Central

    Rehkopf, David; Tuljapurkar, Shripad; Horvitz, Carol C.

    2018-01-01

    Recent studies have examined the risk of poverty throughout the life course, but few have considered how transitioning in and out of poverty shape the dynamic heterogeneity and mortality disparities of a cohort at each age. Here we use state-by-age modeling to capture individual heterogeneity in crossing one of three different poverty thresholds (defined as 1×, 2× or 3× the “official” poverty threshold) at each age. We examine age-specific state structure, the remaining life expectancy, its variance, and cohort simulations for those above and below each threshold. Survival and transitioning probabilities are statistically estimated by regression analyses of data from the Health and Retirement Survey RAND data-set, and the National Longitudinal Survey of Youth. Using the results of these regression analyses, we parameterize discrete state, discrete age matrix models. We found that individuals above all three thresholds have higher annual survival than those in poverty, especially for mid-ages to about age 80. The advantage is greatest when we classify individuals based on 1× the “official” poverty threshold. The greatest discrepancy in average remaining life expectancy and its variance between those above and in poverty occurs at mid-ages for all three thresholds. And fewer individuals are in poverty between ages 40-60 for all three thresholds. Our findings are consistent with results based on other data sets, but also suggest that dynamic heterogeneity in poverty and the transience of the poverty state is associated with income-related mortality disparities (less transience, especially of those above poverty, more disparities). This paper applies the approach of age-by-stage matrix models to human demography and individual poverty dynamics. In so doing we extend the literature on individual poverty dynamics across the life course. PMID:29768416
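    The sketch below shows the mechanics of an age-by-stage projection in a deliberately tiny form: two economic states (above/below one poverty threshold), age-specific survival, and annual transition probabilities between the states. All of the rates are made-up placeholders, not the regression estimates from the HRS or NLSY data.

```python
import numpy as np

# Toy age-by-stage projection with two economic states and ages 40..90.
# Every rate below is a hypothetical placeholder for illustration only.
ages = np.arange(40, 91)
p_exit_poverty = 0.15     # probability of moving from below to above the threshold
p_enter_poverty = 0.05    # probability of moving from above to below the threshold
surv_above = 0.995 - 0.000002 * (ages - 40) ** 3   # survival declines with age
surv_below = 0.990 - 0.000003 * (ages - 40) ** 3   # lower survival in poverty

cohort = {"above": 800.0, "below": 200.0}   # cohort composition at age 40
for i in range(len(ages) - 1):
    above = cohort["above"] * surv_above[i]
    below = cohort["below"] * surv_below[i]
    cohort = {
        "above": above * (1 - p_enter_poverty) + below * p_exit_poverty,
        "below": below * (1 - p_exit_poverty) + above * p_enter_poverty,
    }

total = cohort["above"] + cohort["below"]
print(f"survivors at age 90: {total:.0f} of 1000")
print(f"share above the threshold: {cohort['above'] / total:.2f}")
```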

  13. Forecasting influenza-like illness dynamics for military populations using neural networks and social media

    PubMed Central

    Ayton, Ellyn; Porterfield, Katherine; Corley, Courtney D.

    2017-01-01

    This work is the first to take advantage of recurrent neural networks to predict influenza-like illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on timeseries analysis of historical ILI data and the state-of-the-art machine learning models, we build and evaluate the predictive power of neural network architectures based on Long Short Term Memory (LSTM) units capable of nowcasting (predicting in “real-time”) and forecasting (predicting the future) ILI dynamics in the 2011 – 2014 influenza seasons. To build our models we integrate information people post in social media e.g., topics, embeddings, word ngrams, stylistic patterns, and communication behavior using hashtags and mentions. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of the state-of-the-art regression models with neural networks using a diverse set of evaluation metrics. Finally, we combine ILI and social media signals to build a joint neural network model for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance, specifically for military rather than general populations in 26 U.S. and six international locations, and analyze how model performance depends on the amount of social media data available per location. Our approach demonstrates several advantages: (a) Neural network architectures that rely on LSTM units trained on social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than stylistic and topic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance to the models learned from ILI historical data, thus, signals from social media can be potentially used to accurately forecast ILI dynamics for the regions where ILI historical data is not available. (d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on ILI historical data, which adds to a great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent models e.g., U.S. only. (f) Prediction results significantly vary across geolocations depending on the amount of social media data available and ILI activity patterns. (g) Model performance improves with more tweets available per geo-location e.g., the error gets lower and the Pearson score gets higher for locations with more tweets. PMID:29244814

  14. Forecasting influenza-like illness dynamics for military populations using neural networks and social media.

    PubMed

    Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine; Corley, Courtney D

    2017-01-01

    This work is the first to take advantage of recurrent neural networks to predict influenza-like illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on timeseries analysis of historical ILI data and the state-of-the-art machine learning models, we build and evaluate the predictive power of neural network architectures based on Long Short Term Memory (LSTM) units capable of nowcasting (predicting in "real-time") and forecasting (predicting the future) ILI dynamics in the 2011 - 2014 influenza seasons. To build our models we integrate information people post in social media e.g., topics, embeddings, word ngrams, stylistic patterns, and communication behavior using hashtags and mentions. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of the state-of-the-art regression models with neural networks using a diverse set of evaluation metrics. Finally, we combine ILI and social media signals to build a joint neural network model for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance, specifically for military rather than general populations in 26 U.S. and six international locations, and analyze how model performance depends on the amount of social media data available per location. Our approach demonstrates several advantages: (a) Neural network architectures that rely on LSTM units trained on social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than stylistic and topic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance to the models learned from ILI historical data, thus, signals from social media can be potentially used to accurately forecast ILI dynamics for the regions where ILI historical data is not available. (d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on ILI historical data, which adds to a great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent models e.g., U.S. only. (f) Prediction results significantly vary across geolocations depending on the amount of social media data available and ILI activity patterns. (g) Model performance improves with more tweets available per geo-location e.g., the error gets lower and the Pearson score gets higher for locations with more tweets.

  15. Tuning stochastic matrix models with hydrologic data to predict the population dynamics of a riverine fish.

    PubMed

    Sakaris, Peter C; Irwin, Elise R

    2010-03-01

    We developed stochastic matrix models to evaluate the effects of hydrologic alteration and variable mortality on the population dynamics of a lotic fish in a regulated river system. Models were applied to a representative lotic fish species, the flathead catfish (Pylodictis olivaris), for which two populations were examined: a native population from a regulated reach of the Coosa River (Alabama, USA) and an introduced population from an unregulated section of the Ocmulgee River (Georgia, USA). Size-classified matrix models were constructed for both populations, and residuals from catch-curve regressions were used as indices of year class strength (i.e., recruitment). A multiple regression model indicated that recruitment of flathead catfish in the Coosa River was positively related to the frequency of spring pulses between 283 and 566 m3/s. For the Ocmulgee River population, multiple regression models indicated that year class strength was negatively related to mean March discharge and positively related to June low flow. When the Coosa population was modeled to experience five consecutive years of favorable hydrologic conditions during a 50-year projection period, it exhibited a substantial spike in size and increased at an overall 0.2% annual rate. When modeled to experience five years of unfavorable hydrologic conditions, the Coosa population initially exhibited a decrease in size but later stabilized and increased at a 0.4% annual rate following the decline. When the Ocmulgee River population was modeled to experience five years of favorable conditions, it exhibited a substantial spike in size and increased at an overall 0.4% annual rate. After the Ocmulgee population experienced five years of unfavorable conditions, a sharp decline in population size was predicted. However, the population quickly recovered, with population size increasing at a 0.3% annual rate following the decline. In general, stochastic population growth in the Ocmulgee River was more erratic and variable than population growth in the Coosa River. We encourage ecologists to develop similar models for other lotic species, particularly in regulated river systems. Successful management of fish populations in regulated systems requires that we are able to predict how hydrology affects recruitment and will ultimately influence the population dynamics of fishes.

  16. Developing a Dynamic SPARROW Water Quality Decision Support System Using NASA Remotely-Sensed Products

    NASA Astrophysics Data System (ADS)

    Al-Hamdan, M. Z.; Smith, R. A.; Hoos, A.; Schwarz, G. E.; Alexander, R. B.; Crosson, W. L.; Srikishen, J.; Estes, M., Jr.; Cruise, J.; Al-Hamdan, A.; Ellenburg, W. L., II; Flores, A.; Sanford, W. E.; Zell, W.; Reitz, M.; Miller, M. P.; Journey, C. A.; Befus, K. M.; Swann, R.; Herder, T.; Sherwood, E.; Leverone, J.; Shelton, M.; Smith, E. T.; Anastasiou, C. J.; Seachrist, J.; Hughes, A.; Graves, D.

    2017-12-01

    The USGS Spatially Referenced Regression on Watershed Attributes (SPARROW) surface water quality modeling system has been widely used for long term, steady state water quality analysis. However, users have increasingly requested a dynamic version of SPARROW that can provide seasonal estimates of nutrients and suspended sediment to receiving waters. The goal of this NASA-funded project is to develop a dynamic decision support system to enhance the southeast SPARROW water quality model and finer-scale dynamic models for selected coastal watersheds through the use of remotely-sensed data and other NASA Land Information System (LIS) products. The spatial and temporal scale of satellite remote sensing products and LIS modeling data make these sources ideal for the purposes of development and operation of the dynamic SPARROW model. Remote sensing products including MODIS vegetation indices, SMAP surface soil moisture, and OMI atmospheric chemistry along with LIS-derived evapotranspiration (ET) and soil temperature and moisture products will be included in model development and operation. MODIS data will also be used to map annual land cover/land use in the study areas and in conjunction with Landsat and Sentinel to identify disturbed areas that might be sources of sediment and increased phosphorus loading through exposure of the bare soil. These data and others constitute the independent variables in a regression analysis whose dependent variables are the water quality constituents total nitrogen, total phosphorus, and suspended sediment. Remotely-sensed variables such as vegetation indices and ET can be proxies for nutrient uptake by vegetation; MODIS Leaf Area Index can indicate sources of phosphorus from vegetation; soil moisture and temperature are known to control rates of denitrification; and bare soil areas serve as sources of enhanced nutrient and sediment production. The enhanced SPARROW dynamic models will provide improved tools for end users to manage water quality in near real time and for the formulation of future scenarios to inform strategic planning. Time-varying SPARROW outputs will aid water managers in decision making regarding allocation of resources in protecting aquatic habitats, planning for harmful algal blooms, and restoration of degraded habitats, stream segments, or lakes.

  17. Using exploratory regression to identify optimal driving factors for cellular automaton modeling of land use change.

    PubMed

    Feng, Yongjiu; Tong, Xiaohua

    2017-09-22

    Defining transition rules is an important issue in cellular automaton (CA)-based land use modeling because these models incorporate highly correlated driving factors. Multicollinearity among correlated driving factors may produce negative effects that must be eliminated from the modeling. Using exploratory regression under pre-defined criteria, we identified all possible combinations of factors from the candidate factors affecting land use change. Three combinations that incorporate five driving factors meeting pre-defined criteria were assessed. With the selected combinations of factors, three logistic regression-based CA models were built to simulate dynamic land use change in Shanghai, China, from 2000 to 2015. For comparative purposes, a CA model with all candidate factors was also applied to simulate the land use change. Simulations using three CA models with multicollinearity eliminated performed better (with accuracy improvements about 3.6%) than the model incorporating all candidate factors. Our results showed that not all candidate factors are necessary for accurate CA modeling and the simulations were not sensitive to changes in statistically non-significant driving factors. We conclude that exploratory regression is an effective method to search for the optimal combinations of driving factors, leading to better land use change models that are devoid of multicollinearity. We suggest identification of dominant factors and elimination of multicollinearity before building land change models, making it possible to simulate more realistic outcomes.
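    The sketch below mimics the exploratory-regression step in miniature: it enumerates combinations of candidate drivers, keeps only those whose coefficients are all significant and whose variance inflation factors stay below a cutoff, and ranks the survivors. The cutoff values, drivers and transition data are invented for illustration and are not the Shanghai inputs.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical driving factors for cell transitions; dist_road and dist_highway
# are deliberately correlated to mimic multicollinearity among drivers.
rng = np.random.default_rng(3)
n = 800
dist_road = rng.uniform(0, 5, n)
factors = pd.DataFrame({
    "dist_road": dist_road,
    "dist_highway": dist_road + rng.normal(0, 0.3, n),
    "dist_center": rng.uniform(0, 10, n),
    "slope": rng.uniform(0, 30, n),
})
p = 1 / (1 + np.exp(-(1.5 - 0.8 * factors["dist_road"] - 0.1 * factors["dist_center"])))
converted = rng.binomial(1, p)

# Exploratory search: keep only combinations whose factors are all significant
# and whose VIFs stay below a cutoff, then rank by pseudo R-squared.
results = []
for k in (2, 3):
    for combo in itertools.combinations(factors.columns, k):
        X = sm.add_constant(factors[list(combo)])
        fit = sm.Logit(converted, X).fit(disp=False)
        vifs = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
        if (fit.pvalues[1:] < 0.05).all() and max(vifs) < 7.5:
            results.append((fit.prsquared, combo))

for score, combo in sorted(results, reverse=True)[:3]:
    print(round(score, 3), combo)
```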

  18. Stream temperature investigations: field and analytic methods

    USGS Publications Warehouse

    Bartholow, J.M.

    1989-01-01

    Alternative public domain stream and reservoir temperature models are contrasted with SNTEMP. A distinction is made between steady-flow and dynamic-flow models and their respective capabilities. Regression models are offered as an alternative approach for some situations, with appropriate mathematical formulas suggested. Appendices provide information on State and Federal agencies that are good data sources, vendors for field instrumentation, and small computer programs useful in data reduction.

  19. Developing a dengue forecast model using machine learning: A case study in China.

    PubMed

    Guo, Pi; Liu, Tao; Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun

    2017-10-01

    In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011-2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. The findings can help the government and community respond early to dengue epidemics.
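    As a rough, self-contained illustration of the SVR forecasting setup (not the study's actual data or tuned model), the sketch below builds lagged search-index and climate features for a synthetic weekly series, selects SVR hyperparameters with time-ordered cross-validation, and evaluates the held-out forecast error.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic weekly series standing in for dengue cases, a search index and
# climate covariates; none of this is the Guangdong surveillance data.
rng = np.random.default_rng(11)
weeks = 260
t = np.arange(weeks)
search_index = 5 + 3 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 0.5, weeks)
temperature = 25 + 5 * np.sin(2 * np.pi * (t - 8) / 52) + rng.normal(0, 1.0, weeks)
cases = 10 + 4 * np.roll(search_index, 2) + 0.5 * temperature + rng.normal(0, 2.0, weeks)

# Features: lagged predictors plus week-of-year to capture seasonality; the
# first rows affected by the circular shift are dropped.
X = np.column_stack([np.roll(search_index, 2), np.roll(temperature, 4), t % 52])[8:]
y = cases[8:]

# Choose SVR hyperparameters with time-ordered cross-validation, then check
# the error on the final held-out year.
svr = make_pipeline(StandardScaler(), SVR())
grid = GridSearchCV(svr,
                    {"svr__C": [1, 10, 100], "svr__epsilon": [0.1, 1.0]},
                    cv=TimeSeriesSplit(n_splits=5),
                    scoring="neg_root_mean_squared_error")
grid.fit(X[:-52], y[:-52])
pred = grid.predict(X[-52:])
print("selected parameters:", grid.best_params_)
print(f"held-out RMSE: {np.sqrt(mean_squared_error(y[-52:], pred)):.2f}")
```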

  20. A statistical model of the human core-temperature circadian rhythm

    NASA Technical Reports Server (NTRS)

    Brown, E. N.; Choe, Y.; Luithardt, H.; Czeisler, C. A.

    2000-01-01

    We formulate a statistical model of the human core-temperature circadian rhythm in which the circadian signal is modeled as a van der Pol oscillator, the thermoregulatory response is represented as a first-order autoregressive process, and the evoked effect of activity is modeled with a function specific for each circadian protocol. The new model directly links differential equation-based simulation models and harmonic regression analysis methods and permits statistical analysis of both static and dynamical properties of the circadian pacemaker from experimental data. We estimate the model parameters by using numerically efficient maximum likelihood algorithms and analyze human core-temperature data from forced desynchrony, free-run, and constant-routine protocols. By representing explicitly the dynamical effects of ambient light input to the human circadian pacemaker, the new model can estimate with high precision the correct intrinsic period of this oscillator ( approximately 24 h) from both free-run and forced desynchrony studies. Although the van der Pol model approximates well the dynamical features of the circadian pacemaker, the optimal dynamical model of the human biological clock may have a harmonic structure different from that of the van der Pol oscillator.

  1. Construction of robust dynamic genome-scale metabolic model structures of Saccharomyces cerevisiae through iterative re-parameterization.

    PubMed

    Sánchez, Benjamín J; Pérez-Correa, José R; Agosin, Eduardo

    2014-09-01

    Dynamic flux balance analysis (dFBA) has been widely employed in metabolic engineering to predict the effect of genetic modifications and environmental conditions in the cell's metabolism during dynamic cultures. However, the importance of the model parameters used in these methodologies has not been properly addressed. Here, we present a novel and simple procedure to identify dFBA parameters that are relevant for model calibration. The procedure uses metaheuristic optimization and pre/post-regression diagnostics, fixing iteratively the model parameters that do not have a significant role. We evaluated this protocol in a Saccharomyces cerevisiae dFBA framework calibrated for aerobic fed-batch and anaerobic batch cultivations. The model structures achieved have only significant, sensitive and uncorrelated parameters and are able to calibrate different experimental data. We show that consumption, suboptimal growth and production rates are more useful for calibrating dynamic S. cerevisiae metabolic models than Boolean gene expression rules, biomass requirements and ATP maintenance. Copyright © 2014 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  2. Adaptive data-driven models for estimating carbon fluxes in the Northern Great Plains

    USGS Publications Warehouse

    Wylie, B.K.; Fosnight, E.A.; Gilmanov, T.G.; Frank, A.B.; Morgan, J.A.; Haferkamp, Marshall R.; Meyers, T.P.

    2007-01-01

    Rangeland carbon fluxes are highly variable in both space and time. Given the expansive areas of rangelands, how rangelands respond to climatic variation, management, and soil potential is important to understanding carbon dynamics. Rangeland carbon fluxes associated with Net Ecosystem Exchange (NEE) were measured from multiple year data sets at five flux tower locations in the Northern Great Plains. These flux tower measurements were combined with 1-km2 spatial data sets of Photosynthetically Active Radiation (PAR), Normalized Difference Vegetation Index (NDVI), temperature, precipitation, seasonal NDVI metrics, and soil characteristics. Flux tower measurements were used to train and select variables for a rule-based piece-wise regression model. The accuracy and stability of the model were assessed through random cross-validation and cross-validation by site and year. Estimates of NEE were produced for each 10-day period during each growing season from 1998 to 2001. Growing season carbon flux estimates were combined with winter flux estimates to derive and map annual estimates of NEE. The rule-based piece-wise regression model is a dynamic, adaptive model that captures the relationships of the spatial data to NEE as conditions evolve throughout the growing season. The carbon dynamics in the Northern Great Plains proved to be in near equilibrium, serving as a small carbon sink in 1999 and as a small carbon source in 1998, 2000, and 2001. Patterns of carbon sinks and sources are very complex, with the carbon dynamics tilting toward sources in the drier west and toward sinks in the east and near the mountains in the extreme west. Significant local variability exists, which initial investigations suggest are likely related to local climate variability, soil properties, and management.

  3. An evaluation of dynamic mutuality measurements and methods in cyclic time series

    NASA Astrophysics Data System (ADS)

    Xia, Xiaohua; Huang, Guitian; Duan, Na

    2010-12-01

    Several measurements and techniques have been developed to detect dynamic mutuality and synchronicity of time series in econometrics. This study aims to compare the performances of five methods, i.e., linear regression, dynamic correlation, Markov switching models, concordance index and recurrence quantification analysis, through numerical simulations. We evaluate the abilities of these methods to capture structure changing and cyclicity in time series and the findings of this paper would offer guidance to both academic and empirical researchers. Illustration examples are also provided to demonstrate the subtle differences of these techniques.

  4. A spectral-spatial-dynamic hierarchical Bayesian (SSD-HB) model for estimating soybean yield

    NASA Astrophysics Data System (ADS)

    Kazama, Yoriko; Kujirai, Toshihiro

    2014-10-01

    A method called a "spectral-spatial-dynamic hierarchical-Bayesian (SSD-HB) model," which can deal with many parameters (such as spectral and weather information all together) by reducing the occurrence of multicollinearity, is proposed. Experiments conducted on soybean yields in Brazil fields with a RapidEye satellite image indicate that the proposed SSD-HB model can predict soybean yield with a higher degree of accuracy than other estimation methods commonly used in remote-sensing applications. In the case of the SSD-HB model, the mean absolute error between estimated yield of the target area and actual yield is 0.28 t/ha, compared to 0.34 t/ha when conventional PLS regression was applied, showing the potential effectiveness of the proposed model.

  5. Distributed Monitoring of the R² Statistic for Linear Regression

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Das, Kamalika; Giannella, Chris R.

    2011-01-01

    The problem of monitoring a multivariate linear regression model is relevant in studying the evolving relationship between a set of input variables (features) and one or more dependent target variables. This problem becomes challenging for large-scale data in a distributed computing environment when only a subset of instances is available at individual nodes and the local data changes frequently. Data centralization and periodic model recomputation can add high overhead to tasks like anomaly detection in such dynamic settings. Therefore, the goal is to develop techniques for monitoring and updating the model over the union of all nodes' data in a communication-efficient fashion. Correctness guarantees on such techniques are also often highly desirable, especially in safety-critical application scenarios. In this paper we develop DReMo, a distributed algorithm with very low resource overhead, for monitoring the quality of a regression model in terms of its coefficient of determination (R² statistic). When the nodes collectively determine that R² has dropped below a fixed threshold, the linear regression model is recomputed via a network-wide convergecast and the updated model is broadcast back to all nodes. We show empirically, using both synthetic and real data, that our proposed method is highly communication-efficient and scalable, and also provide theoretical guarantees on correctness.
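    The sketch below is not the DReMo protocol itself, only an illustration of why R² monitoring can be communication-efficient: for a fixed model, each node only needs to ship a few sufficient statistics (residual sum of squares, n, the sum of y and of y²), and a coordinator can recompute the global R² whenever any node's summary changes. The threshold, data and drift scenario are invented.

```python
import numpy as np

def node_summary(X, y, beta):
    """Per-node sufficient statistics for the global R^2 of a FIXED model beta:
    residual sum of squares, plus n, sum(y) and sum(y^2) for the total SS."""
    resid = y - X @ beta
    return np.array([resid @ resid, len(y), y.sum(), (y ** 2).sum()])

def global_r2(summaries):
    sse, n, sy, syy = summaries.sum(axis=0)   # combine the per-node summaries
    return 1.0 - sse / (syy - sy ** 2 / n)

# A model fitted earlier, and four nodes holding local data.
rng = np.random.default_rng(5)
beta = np.array([1.0, 2.0, -1.0])
nodes = [rng.normal(size=(300, 3)) for _ in range(4)]
targets = [X @ beta + rng.normal(0, 0.5, 300) for X in nodes]

summaries = np.stack([node_summary(X, y, beta) for X, y in zip(nodes, targets)])
print("global R^2 before drift:", round(global_r2(summaries), 3))

# The relationship drifts at one node; only that node's summary is resent.
targets[2] = nodes[2] @ np.array([0.2, 0.2, 0.2]) + rng.normal(0, 0.5, 300)
summaries[2] = node_summary(nodes[2], targets[2], beta)
if global_r2(summaries) < 0.8:   # a fixed threshold triggers model recomputation
    print("global R^2 dropped to", round(global_r2(summaries), 3),
          "-> recompute and rebroadcast the model")
```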

  6. A data-driven dynamics simulation framework for railway vehicles

    NASA Astrophysics Data System (ADS)

    Nie, Yinyu; Tang, Zhao; Liu, Fengjia; Chang, Jian; Zhang, Jianjun

    2018-03-01

    The finite element (FE) method is essential for simulating vehicle dynamics with fine details, especially for train crash simulations. However, factors such as the complexity of meshes and the distortion involved in a large deformation would undermine its calculation efficiency. An alternative method, multi-body (MB) dynamics simulation, provides satisfying time efficiency but limited accuracy when a highly nonlinear dynamic process is involved. To maintain the advantages of both methods, this paper proposes a data-driven simulation framework for dynamics simulation of railway vehicles. This framework uses machine learning techniques to extract nonlinear features from training data generated by FE simulations so that specific mesh structures can be formulated by a surrogate element (or surrogate elements) to replace the original mechanical elements, and the dynamics simulation can be implemented by co-simulation with the surrogate element(s) embedded into a MB model. This framework consists of a series of techniques including data collection, feature extraction, training data sampling, surrogate element building, and model evaluation and selection. To verify the feasibility of this framework, we present two case studies, a vertical dynamics simulation and a longitudinal dynamics simulation, based on co-simulation with MATLAB/Simulink and Simpack, and a further comparison with a popular data-driven model (the Kriging model) is provided. The simulation result shows that using the Legendre polynomial regression model to build surrogate elements can largely cut down the simulation time without sacrificing accuracy.

  7. Theory Can Help Structure Regression Models for Projecting Stream Conditions Under Alternative Land Use Scenarios

    NASA Astrophysics Data System (ADS)

    van Sickle, J.; Baker, J.; Herlihy, A.

    2005-05-01

    We built multiple regression models for Ephemeroptera/Plecoptera/Trichoptera (EPT) taxon richness and other indicators of biological condition in streams of the Willamette River Basin, Oregon, USA. The models were used to project the changes in condition that would be expected in all 2nd-4th order streams of the 30000 sq km basin under alternative scenarios of future land use. In formulating the models, we invoked the theory of limiting factors to express the interactive effects of stream power and watershed land use on EPT richness. The resulting models were parsimonious, and they fit the data in our wedge-shaped scatterplots slightly better than did a naive additive-effects model. Just as theory helped formulate our regression models, the models in turn helped us identify a new research need for the Basin's streams. Our future scenarios project that conversions of agricultural to urban uses may dominate landscape dynamics in the basin over the next 50 years. But our models could not detect any difference between the effects of agricultural and urban development in watersheds on stream biota. This result points to an increased need for understanding how agricultural and urban land uses in the Basin differentially influence stream ecosystems.

  8. Dynamic linear models using the Kalman filter for early detection and early warning of malaria outbreaks

    NASA Astrophysics Data System (ADS)

    Merkord, C. L.; Liu, Y.; DeVos, M.; Wimberly, M. C.

    2015-12-01

    Malaria early detection and early warning systems are important tools for public health decision makers in regions where malaria transmission is seasonal and varies from year to year with fluctuations in rainfall and temperature. Here we present a new data-driven dynamic linear model based on the Kalman filter with time-varying coefficients that are used to identify malaria outbreaks as they occur (early detection) and predict the location and timing of future outbreaks (early warning). We fit linear models of malaria incidence with trend and Fourier form seasonal components using three years of weekly malaria case data from 30 districts in the Amhara Region of Ethiopia. We identified past outbreaks by comparing the modeled prediction envelopes with observed case data. Preliminary results demonstrated the potential for improved accuracy and timeliness over commonly-used methods in which thresholds are based on simpler summary statistics of historical data. Other benefits of the dynamic linear modeling approach include robustness to missing data and the ability to fit models with relatively few years of training data. To predict future outbreaks, we started with the early detection model for each district and added a regression component based on satellite-derived environmental predictor variables including precipitation data from the Tropical Rainfall Measuring Mission (TRMM) and land surface temperature (LST) and spectral indices from the Moderate Resolution Imaging Spectroradiometer (MODIS). We included lagged environmental predictors in the regression component of the model, with lags chosen based on cross-correlation of the one-step-ahead forecast errors from the first model. Our results suggest that predictions of future malaria outbreaks can be improved by incorporating lagged environmental predictors.
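    A stripped-down sketch of the early-detection idea: a local-level dynamic linear model whose Kalman-filter one-step-ahead forecasts and variances define a prediction envelope, with observations above the envelope flagged as possible outbreaks. The variances, the synthetic case series and the 95% threshold are assumptions for illustration; the models described above additionally include trend, Fourier-form seasonal terms and lagged environmental regressors.

```python
import numpy as np

def kalman_local_level(y, q=0.02, r=0.2):
    """Local-level dynamic linear model filtered with the Kalman recursions:
    the state follows a random walk (variance q) and observations add noise
    (variance r). Returns one-step-ahead forecasts and forecast variances."""
    m, P = y[0], 1.0
    forecasts, variances = [], []
    for obs in y:
        m_pred, P_pred = m, P + q          # predict step
        F = P_pred + r                     # forecast variance for the new obs
        forecasts.append(m_pred)
        variances.append(F)
        K = P_pred / F                     # update step (Kalman gain)
        m = m_pred + K * (obs - m_pred)
        P = (1 - K) * P_pred
    return np.array(forecasts), np.array(variances)

# Hypothetical weekly log-scaled malaria case counts with a surge at week 40.
rng = np.random.default_rng(7)
cases = np.concatenate([rng.normal(2.0, 0.3, 40), rng.normal(4.0, 0.3, 6)])
forecast, variance = kalman_local_level(cases)
upper = forecast + 1.96 * np.sqrt(variance)   # 95% prediction envelope
print("weeks flagged as possible outbreaks:", np.where(cases > upper)[0])
```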

  9. Structural features that predict real-value fluctuations of globular proteins

    PubMed Central

    Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke

    2012-01-01

    It is crucial to consider dynamics for understanding the biological function of proteins. We used a large number of molecular dynamics trajectories of non-homologous proteins as references and examined static structural features of proteins that are most relevant to fluctuations. We examined correlation of individual structural features with fluctuations and further investigated effective combinations of features for predicting the real-value of residue fluctuations using support vector regression. It was found that some structural features have higher correlation than crystallographic B-factors with fluctuations observed in molecular dynamics trajectories. Moreover, support vector regression that uses combinations of static structural features showed accurate prediction of fluctuations with an average Pearson’s correlation coefficient of 0.669 and a root mean square error of 1.04 Å. This correlation coefficient is higher than the one observed for the prediction by the Gaussian network model. An advantage of the developed method over the Gaussian network models is that the former predicts the real-value of fluctuation. The results help improve our understanding of relationships between protein structure and fluctuation. Furthermore, the developed method provides a convenient practical way to predict fluctuations of proteins using easily computed static structural features of proteins. PMID:22328193

  10. The dynamic relationships between economic status and health measures among working-age adults in the United States.

    PubMed

    Meraya, Abdulkarim M; Dwibedi, Nilanjana; Tan, Xi; Innes, Kim; Mitra, Sophie; Sambamoorthi, Usha

    2018-04-18

    We examine the dynamic relationships between economic status and health measures using data from 8 waves of the Panel Study of Income Dynamics from 1999 to 2013. Health measures are self-rated health (SRH) and functional limitations; economic status measures are labor income (earnings), family income, and net wealth. We use 3 different types of models: (a) ordinary least squares regression, (b) first-difference, and (c) system-generalized method of moments (GMM). Using ordinary least squares regression and first-difference models, we find that higher levels of economic status are associated with better SRH and functional status among both men and women, although declines in income and wealth are associated with a decline in health for men only. Using system-GMM estimators, we find evidence of a causal link from labor income to SRH and functional status for both genders. Among men only, system-GMM results indicate that there is a causal link from net wealth to SRH and functional status. Results overall highlight the need for integrated economic and health policies, and for policies that mitigate the potential adverse health effects of short-term changes in economic status. Copyright © 2018 John Wiley & Sons, Ltd.

  11. Using Quartile-Quartile Lines as Linear Models

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2015-01-01

    This article introduces the notion of the quartile-quartile line as an alternative to the regression line and the median-median line to produce a linear model based on a set of data. It is based on using the first and third quartiles of a set of (x, y) data. Dynamic spreadsheets are used as exploratory tools to compare the different approaches and…
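
    Since the abstract is truncated, the sketch below shows only one plausible reading of the construction: a line through the point formed by the first quartiles of x and y and the point formed by the third quartiles. It is an illustrative interpretation, not necessarily the article's exact definition.

        # Sketch of one interpretation of a quartile-quartile line: the line through
        # (Q1 of x, Q1 of y) and (Q3 of x, Q3 of y). Illustrative only.
        import numpy as np

        def quartile_quartile_line(x, y):
            x = np.asarray(x, dtype=float)
            y = np.asarray(y, dtype=float)
            x_q1, x_q3 = np.percentile(x, [25, 75])
            y_q1, y_q3 = np.percentile(y, [25, 75])
            slope = (y_q3 - y_q1) / (x_q3 - x_q1)
            intercept = y_q1 - slope * x_q1
            return slope, intercept

        rng = np.random.default_rng(2)
        x = rng.uniform(0, 10, 50)
        y = 2.0 * x + 1.0 + rng.normal(0, 1.5, 50)
        m, b = quartile_quartile_line(x, y)
        print(f"quartile-quartile line: y = {m:.2f} x + {b:.2f}")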

  12. ShapeSelectForest: a new r package for modeling landsat time series

    Treesearch

    Mary Meyer; Xiyue Liao; Gretchen Moisen; Elizabeth Freeman

    2015-01-01

    We present a new R package called ShapeSelectForest recently posted to the Comprehensive R Archive Network. The package was developed to fit nonparametric shape-restricted regression splines to time series of Landsat imagery for the purpose of modeling, mapping, and monitoring annual forest disturbance dynamics over nearly three decades. For each pixel and spectral...

  13. Two-dimensional advective transport in ground-water flow parameter estimation

    USGS Publications Warehouse

    Anderman, E.R.; Hill, M.C.; Poeter, E.P.

    1996-01-01

    Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow, parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.

  14. Kernel Density Estimation as a Measure of Environmental Exposure Related to Insulin Resistance in Breast Cancer Survivors.

    PubMed

    Jankowska, Marta M; Natarajan, Loki; Godbole, Suneeta; Meseck, Kristin; Sears, Dorothy D; Patterson, Ruth E; Kerr, Jacqueline

    2017-07-01

    Background: Environmental factors may influence breast cancer; however, most studies have measured environmental exposure in neighborhoods around home residences (static exposure). We hypothesize that tracking environmental exposures over time and space (dynamic exposure) is key to assessing total exposure. This study compares breast cancer survivors' exposure to walkable and recreation-promoting environments using dynamic Global Positioning System (GPS) and static home-based measures of exposure in relation to insulin resistance. Methods: GPS data from 249 breast cancer survivors living in San Diego County were collected for one week along with a fasting blood draw. Exposure to recreation spaces and walkability was measured for each woman's home address within an 800 m buffer (static), and using a kernel density weight of GPS tracks (dynamic). Participants' exposure estimates were related to insulin resistance (using the homeostatic model assessment of insulin resistance, HOMA-IR) controlling for age and body mass index (BMI) in linear regression models. Results: The dynamic measurement method resulted in greater variability in built environment exposure values than did the static method. Regression results showed no association between HOMA-IR and home-based, static measures of walkability and recreation area exposure. GPS-based dynamic measures of both walkability and recreation area were significantly associated with lower HOMA-IR (P < 0.05). Conclusions: Dynamic exposure measurements may provide important evidence for community- and individual-level interventions that can address cancer risk inequities arising from environments wherein breast cancer survivors live and engage. Impact: This is the first study to compare associations of dynamic versus static built environment exposure measures with insulin outcomes in breast cancer survivors. Cancer Epidemiol Biomarkers Prev; 26(7); 1078-84. ©2017 American Association for Cancer Research (AACR).
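
    A minimal sketch of the kernel-density idea behind the dynamic exposure measure: weight candidate recreation sites by a 2-D kernel density fitted to a participant's GPS points, and contrast that with a simple 800 m home-buffer count. All coordinates are synthetic and the scoring is illustrative, not the study's protocol.

        # Sketch: kernel-density weighting of GPS track points evaluated at recreation sites,
        # versus a static 800 m home-buffer count. Coordinates are synthetic.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(3)
        gps_xy = rng.normal(loc=[0.0, 0.0], scale=400.0, size=(5000, 2))   # a week of positions, metres
        recreation_xy = rng.uniform(-1500, 1500, size=(25, 2))             # hypothetical recreation sites

        # Fit a 2-D KDE to the GPS points; evaluating it at each site weights the site by
        # how much time the participant spent nearby.
        kde = gaussian_kde(gps_xy.T, bw_method=0.3)
        site_weights = kde(recreation_xy.T)

        dynamic_exposure = site_weights.sum()
        static_exposure = int(np.sum(np.linalg.norm(recreation_xy, axis=1) < 800))  # home at origin
        print(f"dynamic exposure = {dynamic_exposure:.3e}, sites within 800 m of home = {static_exposure}")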

  15. Development of an empirically based dynamic biomechanical strength model

    NASA Technical Reports Server (NTRS)

    Pandya, A.; Maida, J.; Aldridge, A.; Hasson, S.; Woolford, B.

    1992-01-01

    The focus here is on the development of a dynamic strength model for humans. Our model is based on empirical data. The shoulder, elbow, and wrist joints are characterized in terms of maximum isolated torque, position, and velocity in all rotational planes. This information is reduced by a least squares regression technique into a table of single variable second degree polynomial equations determining the torque as a function of position and velocity. The isolated joint torque equations are then used to compute forces resulting from a composite motion, which in this case is a ratchet wrench push and pull operation. What is presented here is a comparison of the computed or predicted results of the model with the actual measured values for the composite motion.
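
    A minimal sketch of the regression step described above: reducing isolated-joint torque measurements to a second-degree polynomial in joint position and angular velocity by least squares. The data and coefficients are synthetic placeholders.

        # Sketch: least-squares fit of joint torque as a quadratic function of position and velocity.
        import numpy as np

        rng = np.random.default_rng(4)
        pos = rng.uniform(0.0, 2.0, 200)        # joint angle, rad
        vel = rng.uniform(-3.0, 3.0, 200)       # angular velocity, rad/s
        torque = 60 + 5 * pos - 3 * pos**2 - 8 * vel - 2 * vel**2 + rng.normal(0, 2, 200)

        # Design matrix for a second-degree polynomial in (position, velocity).
        A = np.column_stack([np.ones_like(pos), pos, vel, pos**2, vel**2, pos * vel])
        coef, *_ = np.linalg.lstsq(A, torque, rcond=None)

        def predicted_torque(p, v):
            """Evaluate the fitted quadratic torque surface at position p and velocity v."""
            return float(coef @ np.array([1.0, p, v, p**2, v**2, p * v]))

        print(f"predicted torque at 1.0 rad, -1.5 rad/s: {predicted_torque(1.0, -1.5):.1f}")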

  16. Model-based prediction of nephropathia epidemica outbreaks based on climatological and vegetation data and bank vole population dynamics.

    PubMed

    Haredasht, S Amirpour; Taylor, C J; Maes, P; Verstraeten, W W; Clement, J; Barrios, M; Lagrou, K; Van Ranst, M; Coppin, P; Berckmans, D; Aerts, J-M

    2013-11-01

    Wildlife-originated zoonotic diseases in general are a major contributor to emerging infectious diseases. Hantaviruses more specifically cause thousands of human disease cases annually worldwide, while understanding and predicting human hantavirus epidemics pose numerous unsolved challenges. Nephropathia epidemica (NE) is a human infection caused by Puumala virus, which is naturally carried and shed by bank voles (Myodes glareolus). The objective of this study was to develop a method that allows model-based prediction of the occurrence of NE epidemics 3 months ahead. Two data sets were utilized to develop and test the models. These data sets were concerned with NE cases in Finland and Belgium. In this study, we selected the most relevant inputs from all the available data for use in a dynamic linear regression (DLR) model. The number of NE cases in Finland was modelled using data from 1996 to 2008. The NE cases were predicted based on the time series data of average monthly air temperature (°C) and bank voles' trapping index using a DLR model. The bank voles' trapping index data were interpolated using a related dynamic harmonic regression model (DHR). Here, the DLR and DHR models used time-varying parameters. Both the DHR and DLR models were based on a unified state-space estimation framework. For the Belgium case, no time series of the bank voles' population dynamics were available. Several studies, however, have suggested that the population of bank voles is related to the variation in seed production of beech and oak trees in Northern Europe. Therefore, the NE occurrence pattern in Belgium was predicted based on a DLR model by using remotely sensed phenology parameters of broad-leaved forests, together with the oak and beech seed categories and average monthly air temperature (°C) using data from 2001 to 2009. Our results suggest that even without any knowledge about hantavirus dynamics in the host population, the time variation in NE outbreaks in Finland could be predicted 3 months ahead with a 34% mean relative prediction error (MRPE). This took into account solely the population dynamics of the carrier species (bank voles). The time series analysis also revealed that climate change, as represented by the vegetation index, changes in forest phenology derived from satellite images and directly measured air temperature, may affect the mechanics of NE transmission. NE outbreaks in Belgium were predicted 3 months ahead with a 40% MRPE, based only on the climatological and vegetation data, in this case, without any knowledge of the bank vole's population dynamics. In this research, we demonstrated that NE outbreaks can be predicted using climate and vegetation data or the bank vole's population dynamics, by using dynamic data-based models with time-varying parameters. Such a predictive modelling approach might be used as a step towards the development of new tools for the prevention of future NE outbreaks. © 2012 Blackwell Verlag GmbH.

  17. Reduced-order modelling of parameter-dependent, linear and nonlinear dynamic partial differential equation models.

    PubMed

    Shah, A A; Xing, W W; Triantafyllidis, V

    2017-04-01

    In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach.
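
    A minimal sketch of the POD step only: extract a reduced basis from a snapshot matrix with the SVD and project a new field onto it. The Bayesian regression of snapshots and the manifold-learning emulation described in the paper are not reproduced.

        # Sketch: proper orthogonal decomposition (POD) of a snapshot matrix via the SVD,
        # then projection of a new snapshot onto the truncated basis.
        import numpy as np

        rng = np.random.default_rng(5)
        n_space, n_snapshots = 500, 40
        x = np.linspace(0, 1, n_space)
        spatial_modes = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(3)])
        snapshots = spatial_modes @ rng.normal(size=(3, n_snapshots)) \
                    + 0.01 * rng.normal(size=(n_space, n_snapshots))

        # POD basis = left singular vectors of the mean-centred snapshot matrix.
        mean_field = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.999)) + 1      # keep 99.9% of snapshot energy
        basis = U[:, :r]

        # Reduced coordinates of a new field and its reconstruction from r modes.
        new_field = spatial_modes @ rng.normal(size=3) + 0.01 * rng.normal(size=n_space)
        a = basis.T @ (new_field - mean_field[:, 0])
        reconstruction = mean_field[:, 0] + basis @ a
        rel_err = np.linalg.norm(new_field - reconstruction) / np.linalg.norm(new_field)
        print(f"retained modes: {r}, relative reconstruction error: {rel_err:.2e}")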

  18. Reduced-order modelling of parameter-dependent, linear and nonlinear dynamic partial differential equation models

    PubMed Central

    Xing, W. W.; Triantafyllidis, V.

    2017-01-01

    In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach. PMID:28484327

  19. [Research of prevalence of schistosomiasis in Hunan province, 1984-2015].

    PubMed

    Li, F Y; Tan, H Z; Ren, G H; Jiang, Q; Wang, H L

    2017-03-10

    Objective: To analyze the prevalence of schistosomiasis in Hunan province, and provide scientific evidence for the control and elimination of schistosomiasis. Methods: The changes in infection rates of Schistosoma (S.) japonicum among residents and cattle in Hunan from 1984 to 2015 were analyzed by using a dynamic trend diagram, and a linear regression model of the time trend was used to fit the infection rates of S. japonicum and predict the infection rate in the near term. Results: The overall infection rates of S. japonicum in Hunan from 1984 to 2015 showed a downward trend (95.29% in residents and 95.16% in cattle). Using the linear regression model, the actual values of the infection rates in residents and cattle were all within the 95% confidence intervals of the predicted values, and the prediction showed that the infection rates in residents and cattle would continue to decrease from 2016 to 2020. Conclusion: The prevalence of schistosomiasis was in decline in Hunan. The regression model has a good effect in the short-term prediction of schistosomiasis prevalence.

  20. Tuning stochastic matrix models with hydrologic data to predict the population dynamics of a riverine fish

    USGS Publications Warehouse

    Sakaris, P.C.; Irwin, E.R.

    2010-01-01

    We developed stochastic matrix models to evaluate the effects of hydrologic alteration and variable mortality on the population dynamics of a lotic fish in a regulated river system. Models were applied to a representative lotic fish species, the flathead catfish (Pylodictis olivaris), for which two populations were examined: a native population from a regulated reach of the Coosa River (Alabama, USA) and an introduced population from an unregulated section of the Ocmulgee River (Georgia, USA). Size-classified matrix models were constructed for both populations, and residuals from catch-curve regressions were used as indices of year class strength (i.e., recruitment). A multiple regression model indicated that recruitment of flathead catfish in the Coosa River was positively related to the frequency of spring pulses between 283 and 566 m3/s. For the Ocmulgee River population, multiple regression models indicated that year class strength was negatively related to mean March discharge and positively related to June low flow. When the Coosa population was modeled to experience five consecutive years of favorable hydrologic conditions during a 50-year projection period, it exhibited a substantial spike in size and increased at an overall 0.2% annual rate. When modeled to experience five years of unfavorable hydrologic conditions, the Coosa population initially exhibited a decrease in size but later stabilized and increased at a 0.4% annual rate following the decline. When the Ocmulgee River population was modeled to experience five years of favorable conditions, it exhibited a substantial spike in size and increased at an overall 0.4% annual rate. After the Ocmulgee population experienced five years of unfavorable conditions, a sharp decline in population size was predicted. However, the population quickly recovered, with population size increasing at a 0.3% annual rate following the decline. In general, stochastic population growth in the Ocmulgee River was more erratic and variable than population growth in the Coosa River. We encourage ecologists to develop similar models for other lotic species, particularly in regulated river systems. Successful management of fish populations in regulated systems requires that we are able to predict how hydrology affects recruitment and will ultimately influence the population dynamics of fishes. © 2010 by the Ecological Society of America.

  1. Integration of Multiple Data Sources to Simulate the Dynamics of Land Systems

    PubMed Central

    Deng, Xiangzheng; Su, Hongbo; Zhan, Jinyan

    2008-01-01

    In this paper we present and develop a new model, which we have called Dynamics of Land Systems (DLS). The DLS model is capable of integrating multiple data sources to simulate the dynamics of a land system. Three main modules are incorporated in DLS: a spatial regression module, to explore the relationship between land uses and influencing factors, a scenario analysis module of the land uses of a region during the simulation period and a spatial disaggregation module, to allocate land use changes from a regional level to disaggregated grid cells. A case study on Taips County in North China is incorporated in this paper to test the functionality of DLS. The simulation results under the baseline, economic priority and environmental scenarios help to understand the land system dynamics and project near future land-use trajectories of a region, in order to focus management decisions on land uses and land use planning. PMID:27879726

  2. System identification principles in studies of forest dynamics.

    Treesearch

    Rolfe A. Leary

    1970-01-01

    Shows how it is possible to obtain governing equation parameter estimates on the basis of observed system states. The approach used represents a constructive alternative to regression techniques for models expressed as differential equations. This approach allows scientists to more completely quantify knowledge of forest development processes, to express theories in...

  3. Design and analysis of forward and reverse models for predicting defect accumulation, defect energetics, and irradiation conditions

    DOE PAGES

    Stewart, James A.; Kohnert, Aaron A.; Capolungo, Laurent; ...

    2018-03-06

    The complexity of radiation effects in a material’s microstructure makes developing predictive models a difficult task. In principle, a complete list of all possible reactions between defect species being considered can be used to elucidate damage evolution mechanisms and its associated impact on microstructure evolution. However, a central limitation is that many models use a limited and incomplete catalog of defect energetics and associated reactions. Even for a given model, estimating its input parameters remains a challenge, especially for complex material systems. Here, we present a computational analysis to identify the extent to which defect accumulation, energetics, and irradiation conditions can be determined via forward and reverse regression models constructed and trained from large data sets produced by cluster dynamics simulations. A global sensitivity analysis, via Sobol’ indices, concisely characterizes parameter sensitivity and demonstrates how this can be connected to variability in defect evolution. Based on this analysis and depending on the definition of what constitutes the input and output spaces, forward and reverse regression models are constructed and allow for the direct calculation of defect accumulation, defect energetics, and irradiation conditions. Here, this computational analysis, exercised on a simplified cluster dynamics model, demonstrates the ability to design predictive surrogate and reduced-order models, and provides guidelines for improving model predictions within the context of forward and reverse engineering of mathematical models for radiation effects in a materials’ microstructure.
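
    A minimal sketch of a Sobol' global sensitivity analysis of the kind described, assuming the SALib package and replacing the cluster dynamics model with a simple analytic stand-in; the input names, bounds and response function are hypothetical.

        # Sketch: Sobol' sensitivity indices for a toy defect-accumulation surrogate using SALib.
        # The inputs and the analytic response are stand-ins, not the paper's cluster dynamics model.
        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["dose_rate", "temperature", "sink_strength"],
            "bounds": [[1e-8, 1e-5], [300.0, 900.0], [1e13, 1e15]],
        }

        X = saltelli.sample(problem, 1024)          # N * (2D + 2) input samples

        def toy_defect_density(row):
            dose, temp, sink = row
            return dose**0.5 * np.exp(-temp / 600.0) / np.sqrt(sink)

        Y = np.apply_along_axis(toy_defect_density, 1, X)
        Si = sobol.analyze(problem, Y)
        for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
            print(f"{name:>14s}: first-order = {s1:.2f}, total = {st:.2f}")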

  4. Design and analysis of forward and reverse models for predicting defect accumulation, defect energetics, and irradiation conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, James A.; Kohnert, Aaron A.; Capolungo, Laurent

    The complexity of radiation effects in a material’s microstructure makes developing predictive models a difficult task. In principle, a complete list of all possible reactions between defect species being considered can be used to elucidate damage evolution mechanisms and its associated impact on microstructure evolution. However, a central limitation is that many models use a limited and incomplete catalog of defect energetics and associated reactions. Even for a given model, estimating its input parameters remains a challenge, especially for complex material systems. Here, we present a computational analysis to identify the extent to which defect accumulation, energetics, and irradiation conditions can be determined via forward and reverse regression models constructed and trained from large data sets produced by cluster dynamics simulations. A global sensitivity analysis, via Sobol’ indices, concisely characterizes parameter sensitivity and demonstrates how this can be connected to variability in defect evolution. Based on this analysis and depending on the definition of what constitutes the input and output spaces, forward and reverse regression models are constructed and allow for the direct calculation of defect accumulation, defect energetics, and irradiation conditions. Here, this computational analysis, exercised on a simplified cluster dynamics model, demonstrates the ability to design predictive surrogate and reduced-order models, and provides guidelines for improving model predictions within the context of forward and reverse engineering of mathematical models for radiation effects in a materials’ microstructure.

  5. Advancing understanding of affect labeling with dynamic causal modeling

    PubMed Central

    Torrisi, Salvatore J.; Lieberman, Matthew D.; Bookheimer, Susan Y.; Altshuler, Lori L.

    2013-01-01

    Mechanistic understandings of forms of incidental emotion regulation have implications for basic and translational research in the affective sciences. In this study we applied Dynamic Causal Modeling (DCM) for fMRI to a common paradigm of labeling facial affect to elucidate prefrontal to subcortical influences. Four brain regions were used to model affect labeling, including right ventrolateral prefrontal cortex (vlPFC), amygdala and Broca’s area. 64 models were compared, for each of 45 healthy subjects. Family level inference split the model space to a likely driving input and Bayesian Model Selection within the winning family of 32 models revealed a strong pattern of endogenous network connectivity. Modulatory effects of labeling were most prominently observed following Bayesian Model Averaging, with the dampening influence on amygdala originating from Broca’s area but much more strongly from right vlPFC. These results solidify and extend previous correlation and regression-based estimations of negative corticolimbic coupling. PMID:23774393

  6. The application of neural network model to the simulation nitrous oxide emission in the hydro-fluctuation belt of Three Gorges Reservoir

    NASA Astrophysics Data System (ADS)

    Song, Lanlan

    2017-04-01

    Nitrous oxide is a much more potent greenhouse gas than carbon dioxide. However, the estimation of N2O flux is usually clouded with uncertainty, mainly due to high spatial and temporal variations. This hampers the development of general mechanistic models for N2O emission as well, as most previously developed models were empirical or exhibited low predictability with numerous assumptions. In this study, we tested General Regression Neural Networks (GRNN) as an alternative to classic empirical models for simulating N2O emission in riparian zones of reservoirs. GRNN and nonlinear regression (NLR) were applied to estimate the N2O flux from 1-year observations in riparian zones of the Three Gorges Reservoir. NLR resulted in lower prediction power and higher residuals compared to GRNN. Although the nonlinear regression model estimated similar average values of N2O, it could not capture the fluctuation patterns accurately. In contrast, the GRNN model achieved fairly high predictability, with an R2 of 0.59 for model validation, 0.77 for model calibration (training), and a low root mean square error (RMSE), indicating a high capacity to simulate the dynamics of N2O flux. According to a sensitivity analysis of the GRNN, nonlinear relationships between input variables and N2O flux were well explained. Our results suggest that the GRNN developed in this study performs better in simulating variations in N2O flux than nonlinear regressions.
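
    A minimal numpy sketch of a general regression neural network, which is in essence a Gaussian-kernel weighted average of the training targets; the predictors, the smoothing width and the synthetic flux series are placeholders for the study's inputs.

        # Sketch: a general regression neural network (GRNN) as a Gaussian-kernel weighted
        # average of stored training targets. Data are synthetic placeholders.
        import numpy as np

        class GRNN:
            def __init__(self, sigma=0.5):
                self.sigma = sigma                      # kernel smoothing parameter

            def fit(self, X, y):
                self.X_ = np.asarray(X, dtype=float)
                self.y_ = np.asarray(y, dtype=float)
                return self

            def predict(self, X):
                X = np.asarray(X, dtype=float)
                # Squared Euclidean distances between query points and stored patterns.
                d2 = ((X[:, None, :] - self.X_[None, :, :]) ** 2).sum(axis=2)
                w = np.exp(-d2 / (2.0 * self.sigma**2))
                return (w @ self.y_) / w.sum(axis=1)

        rng = np.random.default_rng(6)
        X = rng.normal(size=(300, 3))                   # standardized predictors (hypothetical)
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1]**2 + rng.normal(0, 0.1, 300)   # synthetic N2O flux
        model = GRNN(sigma=0.4).fit(X[:200], y[:200])
        pred = model.predict(X[200:])
        r2 = 1 - np.var(y[200:] - pred) / np.var(y[200:])
        print(f"validation R2 = {r2:.2f}")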

  7. Dynamic RSA: Examining parasympathetic regulatory dynamics via vector-autoregressive modeling of time-varying RSA and heart period.

    PubMed

    Fisher, Aaron J; Reeves, Jonathan W; Chi, Cyrus

    2016-07-01

    Expanding on recently published methods, the current study presents an approach to estimating the dynamic, regulatory effect of the parasympathetic nervous system on heart period on a moment-to-moment basis. We estimated second-to-second variation in respiratory sinus arrhythmia (RSA) in order to estimate the contemporaneous and time-lagged relationships among RSA, interbeat interval (IBI), and respiration rate via vector autoregression. Moreover, we modeled these relationships at lags of 1 s to 10 s, in order to evaluate the optimal latency for estimating dynamic RSA effects. The IBI (t) on RSA (t-n) regression parameter was extracted from individual models as an operationalization of the regulatory effect of RSA on IBI-referred to as dynamic RSA (dRSA). Dynamic RSA positively correlated with standard averages of heart rate and negatively correlated with standard averages of RSA. We propose that dRSA reflects the active downregulation of heart period by the parasympathetic nervous system and thus represents a novel metric that provides incremental validity in the measurement of autonomic cardiac control-specifically, a method by which parasympathetic regulatory effects can be measured in process. © 2016 Society for Psychophysiological Research.
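
    A minimal sketch of the vector-autoregressive step, assuming statsmodels' VAR and synthetic second-by-second series; the coefficient of IBI on lagged RSA is pulled out as a dRSA-style index, with an illustrative 4 s lag.

        # Sketch: VAR of second-by-second RSA, interbeat interval (IBI) and respiration rate,
        # extracting the IBI(t) on RSA(t - n) coefficient as a dRSA-style index. Data synthetic.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(7)
        n = 600                                          # ten minutes of 1 s estimates
        rsa = np.cumsum(rng.normal(0, 0.05, n)) + 6.0
        ibi = 900 - 20 * np.roll(rsa, 4) + rng.normal(0, 5, n)   # IBI partly driven by lagged RSA
        resp = 15 + rng.normal(0, 0.5, n)
        data = pd.DataFrame({"rsa": rsa, "ibi": ibi, "resp": resp})

        res = VAR(data).fit(10)                          # VAR with 10 lags (1-10 s latencies)
        lag = 4                                          # illustrative latency within the 1-10 s range
        # res.coefs has shape (n_lags, n_vars, n_vars); entry [lag-1, i, j] is the effect of
        # variable j at that lag on variable i (variable order: rsa, ibi, resp).
        drsa = res.coefs[lag - 1, 1, 0]
        print(f"dRSA-style coefficient (IBI on RSA at {lag} s lag): {drsa:.2f}")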

  8. Dynamic Parameter Identification of Subject-Specific Body Segment Parameters Using Robotics Formalism: Case Study Head Complex.

    PubMed

    Díaz-Rodríguez, Miguel; Valera, Angel; Page, Alvaro; Besa, Antonio; Mata, Vicente

    2016-05-01

    Accurate knowledge of body segment inertia parameters (BSIP) improves the assessment of dynamic analysis based on biomechanical models, which is of paramount importance in fields such as sport activities or impact crash test. Early approaches for BSIP identification rely on the experiments conducted on cadavers or through imaging techniques conducted on living subjects. Recent approaches for BSIP identification rely on inverse dynamic modeling. However, most of the approaches are focused on the entire body, and verification of BSIP for dynamic analysis for distal segment or chain of segments, which has proven to be of significant importance in impact test studies, is rarely established. Previous studies have suggested that BSIP should be obtained by using subject-specific identification techniques. To this end, our paper develops a novel approach for estimating subject-specific BSIP based on static and dynamics identification models (SIM, DIM). We test the validity of SIM and DIM by comparing the results using parameters obtained from a regression model proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230). Both SIM and DIM are developed considering robotics formalism. First, the static model allows the mass and center of gravity (COG) to be estimated. Second, the results from the static model are included in the dynamics equation allowing us to estimate the moment of inertia (MOI). As a case study, we applied the approach to evaluate the dynamics modeling of the head complex. Findings provide some insight into the validity not only of the proposed method but also of the application proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230) for dynamic modeling of body segments.

  9. Shrinkage Estimation of Varying Covariate Effects Based On Quantile Regression

    PubMed Central

    Peng, Limin; Xu, Jinfeng; Kutner, Nancy

    2013-01-01

    Varying covariate effects often manifest meaningful heterogeneity in covariate-response associations. In this paper, we adopt a quantile regression model that assumes linearity at a continuous range of quantile levels as a tool to explore such data dynamics. The consideration of potential non-constancy of covariate effects necessitates a new perspective for variable selection, which, under the assumed quantile regression model, is to retain variables that have effects on all quantiles of interest as well as those that influence only part of quantiles considered. Current work on l1-penalized quantile regression either does not concern varying covariate effects or may not produce consistent variable selection in the presence of covariates with partial effects, a practical scenario of interest. In this work, we propose a shrinkage approach by adopting a novel uniform adaptive LASSO penalty. The new approach enjoys easy implementation without requiring smoothing. Moreover, it can consistently identify the true model (uniformly across quantiles) and achieve the oracle estimation efficiency. We further extend the proposed shrinkage method to the case where responses are subject to random right censoring. Numerical studies confirm the theoretical results and support the utility of our proposals. PMID:25332515

  10. Developing a dengue forecast model using machine learning: A case study in China

    PubMed Central

    Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun

    2017-01-01

    Background In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Methodology/Principal findings Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011–2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. Conclusion and significance The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. The findings can help the government and community respond early to dengue epidemics. PMID:29036169

  11. Predicting Subnational Ebola Virus Disease Epidemic Dynamics from Sociodemographic Indicators

    PubMed Central

    Valeri, Linda; Patterson-Lomba, Oscar; Gurmu, Yared; Ablorh, Akweley; Bobb, Jennifer; Townes, F. William; Harling, Guy

    2016-01-01

    Background The recent Ebola virus disease (EVD) outbreak in West Africa has spread wider than any previous human EVD epidemic. While individual-level risk factors that contribute to the spread of EVD have been studied, the population-level attributes of subnational regions associated with outbreak severity have not yet been considered. Methods To investigate the area-level predictors of EVD dynamics, we integrated time series data on cumulative reported cases of EVD from the World Health Organization and covariate data from the Demographic and Health Surveys. We first estimated the early growth rates of epidemics in each second-level administrative district (ADM2) in Guinea, Sierra Leone and Liberia using exponential, logistic and polynomial growth models. We then evaluated how these growth rates, as well as epidemic size within ADM2s, were ecologically associated with several demographic and socio-economic characteristics of the ADM2, using bivariate correlations and multivariable regression models. Results The polynomial growth model appeared to best fit the ADM2 epidemic curves, displaying the lowest residual standard error. Each outcome was associated with various regional characteristics in bivariate models, however in stepwise multivariable models only mean education levels were consistently associated with a worse local epidemic. Discussion By combining two common methods—estimation of epidemic parameters using mathematical models, and estimation of associations using ecological regression models—we identified some factors predicting rapid and severe EVD epidemics in West African subnational regions. While care should be taken interpreting such results as anything more than correlational, we suggest that our approach of using data sources that were publicly available in advance of the epidemic or in real-time provides an analytic framework that may assist countries in understanding the dynamics of future outbreaks as they occur. PMID:27732614
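
    A minimal sketch of the growth-model step: fit exponential, logistic and cubic-polynomial curves to one district's cumulative case series and compare residual standard errors. The case series is synthetic and the comparison is illustrative only.

        # Sketch: compare exponential, logistic and cubic-polynomial fits to a cumulative
        # case curve by residual standard error. Data are synthetic.
        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(8)
        t = np.arange(0, 40, dtype=float)                        # weeks since first reported case
        cases = 800 / (1 + np.exp(-0.3 * (t - 20))) + rng.normal(0, 15, t.size)

        def exponential(t, a, r):
            return a * np.exp(r * t)

        def logistic(t, K, r, t0):
            return K / (1 + np.exp(-r * (t - t0)))

        def rse(resid, n_params):
            return np.sqrt(np.sum(resid**2) / (resid.size - n_params))

        p_exp, _ = curve_fit(exponential, t, cases, p0=[10.0, 0.1], bounds=([1e-3, 1e-3], [1e4, 1.0]))
        p_log, _ = curve_fit(logistic, t, cases, p0=[cases.max(), 0.2, t.mean()])
        p_poly = np.polyfit(t, cases, 3)

        print("exponential RSE:", rse(cases - exponential(t, *p_exp), 2))
        print("logistic RSE:   ", rse(cases - logistic(t, *p_log), 3))
        print("cubic poly RSE: ", rse(cases - np.polyval(p_poly, t), 4))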

  12. Linear and evolutionary polynomial regression models to forecast coastal dynamics: Comparison and reliability assessment

    NASA Astrophysics Data System (ADS)

    Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe

    2018-01-01

    In this paper, the Evolutionary Polynomial Regression data-modelling strategy is applied to study small-scale, short-term coastal morphodynamics, given its capability to treat a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to weigh the computational load of the three models against the reliability of their estimates. Although a more complex model is usually expected to predict better, a slight worsening of the estimates can sometimes be accepted in exchange for the time saved in data organization and computation. The models' outcomes were validated through a detailed statistical error analysis, which revealed slightly better estimates from the polynomial model than from the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a shorter run time. Finally, the most reliable evolutionary polynomial regression model was used to examine how the uncertainty of the estimates grows as the extrapolation time is extended. The overlap between the confidence band of the mean of the known coast position and the prediction band of the estimated position is a useful index of how unreliable the estimates become when the extrapolation time increases too much. The proposed models and tests were applied to a coastal sector near Torre Colimena in the Apulia region, southern Italy.

  13. Numerical Modeling of Earthquake-Induced Landslide Using an Improved Discontinuous Deformation Analysis Considering Dynamic Friction Degradation of Joints

    NASA Astrophysics Data System (ADS)

    Huang, Da; Song, Yixiang; Cen, Duofeng; Fu, Guoyang

    2016-12-01

    Discontinuous deformation analysis (DDA) as an efficient technique has been extensively applied in the dynamic simulation of discontinuous rock mass. In the original DDA (ODDA), the Mohr-Coulomb failure criterion is employed as the judgment principle of failure between contact blocks, and the friction coefficient is assumed to be constant in the whole calculation process. However, it has been confirmed by a host of shear tests that the dynamic friction of rock joints degrades. Therefore, the friction coefficient should be gradually reduced during the numerical simulation of an earthquake-induced rockslide. In this paper, based on the experimental results of cyclic shear tests on limestone joints, exponential regression formulas are fitted for dynamic friction degradation, which is a function of the relative velocity, the amplitude of cyclic shear displacement and the number of its cycles between blocks with an edge-to-edge contact. Then, an improved DDA (IDDA) is developed by implementing the fitting regression formulas and a modified removing technique of joint cohesion, in which the cohesion is removed once the `sliding' or `open' state between blocks appears for the first time, into the ODDA. The IDDA is first validated by comparing with the theoretical solutions of the kinematic behaviors of a sliding block on an inclined plane under dynamic loading. Then, the program is applied to model the Donghekou landslide triggered by the 2008 Wenchuan earthquake in China. The simulation results demonstrate that the dynamic friction degradation of joints has great influences on the runout and velocity of sliding mass. Moreover, the friction coefficient possesses higher impact than the cohesion of joints on the kinematic behaviors of the sliding mass.

  14. Inferring microbial interaction networks from metagenomic data using SgLV-EKF algorithm.

    PubMed

    Alshawaqfeh, Mustafa; Serpedin, Erchin; Younes, Ahmad Bani

    2017-03-27

    Inferring the microbial interaction networks (MINs) and modeling their dynamics are critical in understanding the mechanisms of the bacterial ecosystem and designing antibiotic and/or probiotic therapies. Recently, several approaches were proposed to infer MINs using the generalized Lotka-Volterra (gLV) model. A main drawback of these models is that they consider only measurement noise, without taking into account the uncertainties in the underlying dynamics. Furthermore, inferring the MIN is characterized by a limited number of observations and nonlinearity in the regulatory mechanisms. Therefore, novel estimation techniques are needed to address these challenges. This work proposes SgLV-EKF: a stochastic gLV model that adopts the extended Kalman filter (EKF) algorithm to model the MIN dynamics. In particular, SgLV-EKF employs stochastic modeling of the MIN by adding a noise term to the dynamical model to compensate for modeling uncertainties. This stochastic modeling is more realistic than the conventional gLV model, which assumes that the MIN dynamics are perfectly governed by the gLV equations. After specifying the stochastic model structure, we propose the EKF to estimate the MIN. SgLV-EKF was compared with two similarity-based algorithms, one algorithm from the integral-based family and two regression-based algorithms, in terms of the achieved performance on two synthetic data-sets and two real data-sets. The first synthetic data-set models the randomness in measurement data, whereas the second data-set incorporates uncertainties in the underlying dynamics. The real data-sets are provided by a recent study pertaining to an antibiotic-mediated Clostridium difficile infection. The experimental results demonstrate that SgLV-EKF outperforms the alternative methods in terms of robustness to measurement noise, modeling errors, and tracking the dynamics of the MIN. Performance analysis demonstrates that the proposed SgLV-EKF algorithm represents a powerful and reliable tool to infer MINs and track their dynamics.
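
    For orientation, a minimal sketch of the deterministic core that SgLV-EKF builds on: forward simulation of a small generalized Lotka-Volterra community with scipy. The growth rates and interaction matrix are illustrative, and the stochastic term and EKF estimation step are not shown.

        # Sketch: forward simulation of a generalized Lotka-Volterra (gLV) community,
        # dx_i/dt = x_i * (mu_i + sum_j A_ij x_j). Parameters are illustrative.
        import numpy as np
        from scipy.integrate import solve_ivp

        mu = np.array([0.8, 0.5, 0.6])                  # intrinsic growth rates
        A = np.array([[-1.0, -0.3,  0.1],               # interaction matrix (diagonal: self-limitation)
                      [ 0.2, -0.8, -0.4],
                      [-0.1,  0.3, -0.9]])

        def glv(t, x):
            return x * (mu + A @ x)

        sol = solve_ivp(glv, t_span=(0.0, 30.0), y0=[0.1, 0.2, 0.1],
                        t_eval=np.linspace(0.0, 30.0, 301))
        final = sol.y[:, -1]
        print("final relative abundances:", np.round(final / final.sum(), 3))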

  15. Design an optimum safety policy for personnel safety management - A system dynamic approach

    NASA Astrophysics Data System (ADS)

    Balaji, P.

    2014-10-01

    Personnel safety management (PSM) ensures that employees' work conditions are healthy and safe through various proactive and reactive approaches. Nowadays it is a complex phenomenon because of the increasingly dynamic nature of organisations, which results in an increase in accidents. An important part of accident prevention is to understand the existing system properly and to devise safety strategies for that system. System dynamics modelling appears to be an appropriate methodology for exploring and devising strategy for PSM. Many system dynamics models of industrial systems have been built entirely for specific host firms. This thesis illustrates an alternative approach: a generic system dynamics model of personnel safety management was developed and tested in a host firm. The model underwent various structural, behavioural and policy tests, and its utility and effectiveness were further explored by modelling a safety scenario. To create an effective safety policy under resource constraints, design of experiments (DOE) was used. DOE uses classic designs, namely fractional factorials and central composite designs, to construct a second-order regression equation that serves as an objective function. That function was optimized under a budget constraint, and the optimum was used to define the safety policy showing the greatest improvement in overall PSM. The outcome of this research indicates that the personnel safety management model can act as an instruction tool to improve understanding of safety management and also as an aid to policy making.
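
    A minimal sketch of the DOE-based policy search described above: fit a second-order regression to designed-experiment runs and maximize it under a budget constraint. The factors, the response and the constraint are invented for illustration.

        # Sketch: second-order regression fitted to designed-experiment runs, then maximized
        # under a linear budget constraint. Factors and numbers are hypothetical.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(9)
        x1 = rng.uniform(-1, 1, 20)                     # e.g. training effort (coded units)
        x2 = rng.uniform(-1, 1, 20)                     # e.g. inspection effort (coded units)
        safety = 70 + 8*x1 + 6*x2 - 4*x1**2 - 3*x2**2 + 2*x1*x2 + rng.normal(0, 1, 20)

        X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
        beta, *_ = np.linalg.lstsq(X, safety, rcond=None)

        def predicted_safety(v):
            a, b = v
            return float(beta @ np.array([1.0, a, b, a**2, b**2, a*b]))

        # Maximize predicted safety subject to the (hypothetical) budget 3*x1 + 2*x2 <= 2.
        budget = {"type": "ineq", "fun": lambda v: 2.0 - (3.0 * v[0] + 2.0 * v[1])}
        opt = minimize(lambda v: -predicted_safety(v), x0=[0.0, 0.0],
                       bounds=[(-1, 1), (-1, 1)], constraints=[budget])
        print("optimal coded policy:", np.round(opt.x, 2), "predicted safety:", round(-opt.fun, 1))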

  16. A comparison of the performances of an artificial neural network and a regression model for GFR estimation.

    PubMed

    Liu, Xun; Li, Ning-shan; Lv, Lin-sheng; Huang, Jian-hua; Tang, Hua; Chen, Jin-xia; Ma, Hui-juan; Wu, Xiao-ming; Lou, Tan-qi

    2013-12-01

    Accurate estimation of glomerular filtration rate (GFR) is important in clinical practice. Current models derived from regression are limited by the imprecision of GFR estimates. We hypothesized that an artificial neural network (ANN) might improve the precision of GFR estimates. A study of diagnostic test accuracy. 1,230 patients with chronic kidney disease were enrolled, including the development cohort (n=581), internal validation cohort (n=278), and external validation cohort (n=371). Estimated GFR (eGFR) using a new ANN model and a new regression model using age, sex, and standardized serum creatinine level derived in the development and internal validation cohort, and the CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration) 2009 creatinine equation. Measured GFR (mGFR). GFR was measured using a diethylenetriaminepentaacetic acid renal dynamic imaging method. Serum creatinine was measured with an enzymatic method traceable to isotope-dilution mass spectrometry. In the external validation cohort, mean mGFR was 49±27 (SD) mL/min/1.73 m2 and biases (median difference between mGFR and eGFR) for the CKD-EPI, new regression, and new ANN models were 0.4, 1.5, and -0.5 mL/min/1.73 m2, respectively (P<0.001 and P=0.02 compared to CKD-EPI and P<0.001 comparing the new regression and ANN models). Precisions (IQRs for the difference) were 22.6, 14.9, and 15.6 mL/min/1.73 m2, respectively (P<0.001 for both compared to CKD-EPI and P<0.001 comparing the new ANN and new regression models). Accuracies (proportions of eGFRs not deviating >30% from mGFR) were 50.9%, 77.4%, and 78.7%, respectively (P<0.001 for both compared to CKD-EPI and P=0.5 comparing the new ANN and new regression models). Different methods for measuring GFR were a source of systematic bias in comparisons of new models to CKD-EPI, and both the derivation and validation cohorts consisted of a group of patients who were referred to the same institution. An ANN model using 3 variables did not perform better than a new regression model. Whether ANN can improve GFR estimation using more variables requires further investigation. Copyright © 2013 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  17. Estimating the effects of wages on obesity.

    PubMed

    Kim, DaeHwan; Leigh, John Paul

    2010-05-01

    To estimate the effects of wages on obesity and body mass. Data on household heads, aged 20 to 65 years, with full-time jobs, were drawn from the Panel Study of Income Dynamics for 2003 to 2007. The Panel Study of Income Dynamics is a nationally representative sample. Instrumental variables (IV) for wages were created using knowledge of computer software and state legal minimum wages. Least squares (linear regression) with corrected standard errors were used to estimate the equations. Statistical tests revealed both instruments were strong and tests for over-identifying restrictions were favorable. Wages were found to be predictive (P < 0.05) of obesity and body mass in regressions both before and after applying IVs. Coefficient estimates suggested stronger effects in the IV models. Results are consistent with the hypothesis that low wages increase obesity prevalence and body mass.
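
    A minimal hand-rolled sketch of the two-stage least squares logic behind the IV estimates, with synthetic data; the instruments, the BMI outcome and the coefficients are placeholders, and the second-stage standard errors are not corrected as they would need to be in practice.

        # Sketch: two-stage least squares (2SLS) for the effect of log wages on BMI, with
        # hypothetical instruments. Second-stage standard errors are not corrected here.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(10)
        n = 2000
        software = rng.binomial(1, 0.4, n).astype(float)        # instrument 1: software knowledge
        min_wage = rng.uniform(5.15, 8.0, n)                    # instrument 2: state minimum wage
        ability = rng.normal(0, 1, n)                           # unobserved confounder
        log_wage = 1.5 + 0.4 * software + 0.05 * min_wage + 0.5 * ability + rng.normal(0, 0.3, n)
        bmi = 30 - 1.2 * log_wage + 0.8 * ability + rng.normal(0, 2, n)

        # Stage 1: regress the endogenous regressor on the instruments, keep fitted values.
        Z = sm.add_constant(np.column_stack([software, min_wage]))
        wage_hat = sm.OLS(log_wage, Z).fit().fittedvalues

        # Stage 2: regress the outcome on the instrumented wages.
        iv_fit = sm.OLS(bmi, sm.add_constant(wage_hat)).fit()
        print(f"IV estimate of the wage effect on BMI: {iv_fit.params[1]:.2f} (true -1.2)")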

  18. On the neural modeling of some dynamic parameters of earthquakes and fire safety in high-rise construction

    NASA Astrophysics Data System (ADS)

    Haritonova, Larisa

    2018-03-01

    The recent change in the relationship between the numbers of man-made and natural catastrophes is presented in the paper, and some recommendations are proposed to increase firefighting efficiency in high-rise buildings. The article analyzes the methodology of modeling seismic effects and shows the promise of applying neural modeling and artificial neural networks to analyze such dynamic parameters of earthquake foci as the value of dislocation (or the average rupture slip). Two input signals were used: the power class and the number of earthquakes. Regression analysis was carried out for the predicted results against the target outputs. The regression equations for the outputs and targets are presented in the work, as well as the correlation coefficients for training, validation, testing, and the total (All) for the 2-5-5-1 network structure used for the average rupture slip. Applying the results obtained in the article to the seismic design of newly constructed buildings and structures, together with the given recommendations, will provide additional protection against fire and earthquake risks and reduce their negative economic and environmental consequences.

  19. A comparison of large-scale climate signals and the North American Multi-Model Ensemble (NMME) for drought prediction in China

    NASA Astrophysics Data System (ADS)

    Xu, Lei; Chen, Nengcheng; Zhang, Xiang

    2018-02-01

    Drought is an extreme natural disaster that can lead to huge socioeconomic losses. Drought prediction ahead of months is helpful for early drought warning and preparations. In this study, we developed a statistical model, two weighted dynamic models and a statistical-dynamic (hybrid) model for 1-6 month lead drought prediction in China. Specifically, statistical component refers to climate signals weighting by support vector regression (SVR), dynamic components consist of the ensemble mean (EM) and Bayesian model averaging (BMA) of the North American Multi-Model Ensemble (NMME) climatic models, and the hybrid part denotes a combination of statistical and dynamic components by assigning weights based on their historical performances. The results indicate that the statistical and hybrid models show better rainfall predictions than NMME-EM and NMME-BMA models, which have good predictability only in southern China. In the 2011 China winter-spring drought event, the statistical model well predicted the spatial extent and severity of drought nationwide, although the severity was underestimated in the mid-lower reaches of Yangtze River (MLRYR) region. The NMME-EM and NMME-BMA models largely overestimated rainfall in northern and western China in 2011 drought. In the 2013 China summer drought, the NMME-EM model forecasted the drought extent and severity in eastern China well, while the statistical and hybrid models falsely detected negative precipitation anomaly (NPA) in some areas. Model ensembles such as multiple statistical approaches, multiple dynamic models or multiple hybrid models for drought predictions were highlighted. These conclusions may be helpful for drought prediction and early drought warnings in China.

  20. [Hyperspectral Estimation of Apple Tree Canopy LAI Based on SVM and RF Regression].

    PubMed

    Han, Zhao-ying; Zhu, Xi-cun; Fang, Xian-yi; Wang, Zhuo-yuan; Wang, Ling; Zhao, Geng-Xing; Jiang, Yuan-mao

    2016-03-01

    Leaf area index (LAI) is a dynamic index of crop population size. Hyperspectral technology can be used to estimate apple canopy LAI rapidly and nondestructively, and can provide a reference for monitoring tree growth and estimating yield. Red Fuji apple trees at the full fruit-bearing stage were the research objects. Canopy spectral reflectance and LAI values of ninety apple trees were measured with an ASD FieldSpec3 spectrometer and an LAI-2200 in thirty orchards over two consecutive years in the Qixia research area of Shandong Province. The optimal vegetation indices were selected by correlation analysis of the original spectral reflectance and vegetation indices. Models for predicting LAI were built with the multivariate regression methods of support vector machine (SVM) and random forest (RF). The new vegetation indices GNDVI527, NDVI676, RVI682, FD-NDVI656 and GRVI517, and the two previously established main vegetation indices NDVI670 and NDVI705, were well correlated with LAI. In the RF regression model, the calibration-set coefficient of determination C-R2 of 0.920 and the validation-set coefficient of determination V-R2 of 0.889 were higher than those of the SVM regression model by 0.045 and 0.033, respectively. The calibration-set root mean square error C-RMSE of 0.249 and the validation-set root mean square error V-RMSE of 0.236 were lower than those of the SVM regression model by 0.054 and 0.058, respectively. The relative prediction deviations for the calibration and validation sets, C-RPD and V-RPD, reached 3.363 and 2.520, higher than those of the SVM regression model by 0.598 and 0.262, respectively. The slopes of the measured-versus-predicted scatterplot trend lines for the calibration and validation sets, C-S and V-S, were close to 1. The estimation result of the RF regression model was better than that of the SVM, and the RF regression model can be used to estimate the LAI of Red Fuji apple trees in the full fruit period.

  1. On neural networks in identification and control of dynamic systems

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Juang, Jer-Nan; Hyland, David C.

    1993-01-01

    This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on the understanding of how the neural networks handle linear systems and how the new approach is related to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feed forward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.

  2. PRESS-based EFOR algorithm for the dynamic parametrical modeling of nonlinear MDOF systems

    NASA Astrophysics Data System (ADS)

    Liu, Haopeng; Zhu, Yunpeng; Luo, Zhong; Han, Qingkai

    2017-09-01

    In response to the identification problem concerning multi-degree of freedom (MDOF) nonlinear systems, this study presents the extended forward orthogonal regression (EFOR) based on predicted residual sums of squares (PRESS) to construct a nonlinear dynamic parametrical model. The proposed parametrical model is based on the non-linear autoregressive with exogenous inputs (NARX) model and aims to explicitly reveal the physical design parameters of the system. The PRESS-based EFOR algorithm is proposed to identify such a model for MDOF systems. By using the algorithm, we built a common-structured model based on the fundamental concept of evaluating its generalization capability through cross-validation. The resulting model aims to prevent over-fitting with poor generalization performance caused by the average error reduction ratio (AERR)-based EFOR algorithm. Then, a functional relationship is established between the coefficients of the terms and the design parameters of the unified model. Moreover, a 5-DOF nonlinear system is taken as a case to illustrate the modeling of the proposed algorithm. Finally, a dynamic parametrical model of a cantilever beam is constructed from experimental data. Results indicate that the dynamic parametrical model of nonlinear systems, which depends on the PRESS-based EFOR, can accurately predict the output response, thus providing a theoretical basis for the optimal design of modeling methods for MDOF nonlinear systems.
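
    A minimal sketch of the two ingredients named above, a linear-in-parameters NARX model built from lagged input and output terms and the PRESS (leave-one-out) statistic computed from the hat matrix; the EFOR term-selection loop itself is not reproduced, and the system is synthetic.

        # Sketch: linear-in-parameters NARX regression over lagged terms, scored by PRESS
        # via the hat matrix. The candidate terms and the system are illustrative.
        import numpy as np

        rng = np.random.default_rng(11)
        n = 500
        u = rng.normal(size=n)                           # exogenous input
        y = np.zeros(n)
        for k in range(2, n):                            # synthetic nonlinear system
            y[k] = 0.5*y[k-1] - 0.2*y[k-2] + 0.8*u[k-1] + 0.1*y[k-1]*u[k-1] + 0.05*rng.normal()

        # Candidate NARX regressors: y(k-1), y(k-2), u(k-1), u(k-2) and one bilinear term.
        k = np.arange(2, n)
        Phi = np.column_stack([y[k-1], y[k-2], u[k-1], u[k-2], y[k-1]*u[k-1]])
        target = y[k]

        theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
        resid = target - Phi @ theta

        # PRESS from the hat matrix H = Phi (Phi'Phi)^-1 Phi': e_loo_i = e_i / (1 - h_ii).
        h_diag = np.einsum("ij,jk,ik->i", Phi, np.linalg.inv(Phi.T @ Phi), Phi)
        press = np.sum((resid / (1.0 - h_diag))**2)
        print("estimated parameters:", np.round(theta, 3), " PRESS:", round(float(press), 3))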

  3. Elbow joint angle and elbow movement velocity estimation using NARX-multiple layer perceptron neural network model with surface EMG time domain parameters.

    PubMed

    Raj, Retheep; Sivanandan, K S

    2017-01-01

    Estimation of elbow dynamics has been the object of numerous investigations. In this work a solution is proposed for estimating elbow movement velocity and elbow joint angle from Surface Electromyography (SEMG) signals. Here the Surface Electromyography signals are acquired from the biceps brachii muscle of the human hand. Two time-domain parameters, Integrated EMG (IEMG) and Zero Crossing (ZC), are extracted from the Surface Electromyography signal. The relationship between the time domain parameters, IEMG and ZC, and elbow angular displacement and elbow angular velocity during extension and flexion of the elbow is studied. A multiple input-multiple output model is derived for identifying the kinematics of the elbow. A Nonlinear Auto Regressive with eXogenous inputs (NARX) structure based multiple layer perceptron neural network (MLPNN) model is proposed for the estimation of elbow joint angle and elbow angular velocity. The proposed NARX MLPNN model is trained using a Levenberg-Marquardt based algorithm. The proposed model estimates the elbow joint angle and elbow movement angular velocity with appreciable accuracy. The model is validated using the regression coefficient value (R). The average regression coefficient value (R) obtained for elbow angular displacement prediction is 0.9641 and for elbow angular velocity prediction is 0.9347. The Nonlinear Auto Regressive with eXogenous inputs (NARX) structure based multiple layer perceptron neural network (MLPNN) model can be used for the estimation of angular displacement and movement angular velocity of the elbow with good accuracy.

  4. Calibration of the maximum carboxylation velocity (Vcmax) using data mining techniques and ecophysiological data from the Brazilian semiarid region, for use in Dynamic Global Vegetation Models.

    PubMed

    Rezende, L F C; Arenque-Musa, B C; Moura, M S B; Aidar, S T; Von Randow, C; Menezes, R S C; Ometto, J P B H

    2016-06-01

    The semiarid region of northeastern Brazil, the Caatinga, is extremely important due to its biodiversity and endemism. Measurements of plant physiology are crucial to the calibration of Dynamic Global Vegetation Models (DGVMs) that are currently used to simulate the responses of vegetation in the face of global changes. In field work carried out in an area of preserved Caatinga forest located in Petrolina, Pernambuco, measurements of carbon assimilation (in response to light and CO2) were performed on 11 individuals of Poincianella microphylla, a native species that is abundant in this region. These data were used to calibrate the maximum carboxylation velocity (Vcmax) used in the INLAND model. The calibration techniques used were Multiple Linear Regression (MLR) and the data mining techniques Classification And Regression Trees (CART) and K-MEANS. The results were compared to the UNCALIBRATED model. It was found that simulated Gross Primary Productivity (GPP) reached 72% of observed GPP when using the calibrated Vcmax values, whereas the UNCALIBRATED approach accounted for 42% of observed GPP. Thus, this work shows the benefits of calibrating DGVMs using field ecophysiological measurements, especially in areas where field data is scarce or non-existent, such as in the Caatinga.

  5. The allometry of coarse root biomass: log-transformed linear regression or nonlinear regression?

    PubMed

    Lai, Jiangshan; Yang, Bo; Lin, Dunmei; Kerkhoff, Andrew J; Ma, Keping

    2013-01-01

    Precise estimation of root biomass is important for understanding carbon stocks and dynamics in forests. Traditionally, biomass estimates are based on allometric scaling relationships between stem diameter and coarse root biomass calculated using linear regression (LR) on log-transformed data. Recently, it has been suggested that nonlinear regression (NLR) is a preferable fitting method for scaling relationships. But while this claim has been contested on both theoretical and empirical grounds, and statistical methods have been developed to aid in choosing between the two methods in particular cases, few studies have examined the ramifications of erroneously applying NLR. Here, we use direct measurements of 159 trees belonging to three locally dominant species in east China to compare the LR and NLR models of diameter-root biomass allometry. We then contrast model predictions by estimating stand coarse root biomass based on census data from the nearby 24-ha Gutianshan forest plot and by testing the ability of the models to predict known root biomass values measured on multiple tropical species at the Pasoh Forest Reserve in Malaysia. Based on likelihood estimates for model error distributions, as well as the accuracy of extrapolative predictions, we find that LR on log-transformed data is superior to NLR for fitting diameter-root biomass scaling models. More importantly, inappropriately using NLR leads to grossly inaccurate stand biomass estimates, especially for stands dominated by smaller trees.
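
    A minimal, hedged comparison of the two fitting strategies on a synthetic power-law allometry B = a D^b (multiplicative error, so the log-linear fit is the appropriate model here) might look like this:

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(3)
      D = rng.uniform(5, 60, 159)                                   # stem diameter (cm)
      B = 0.02 * D ** 2.4 * np.exp(rng.normal(0, 0.3, D.size))      # coarse root biomass, multiplicative error

      # log-transformed linear regression: ln B = ln a + b ln D
      b_lr, ln_a = np.polyfit(np.log(D), np.log(B), 1)
      # nonlinear regression on the arithmetic scale (implicitly assumes additive error)
      (a_nlr, b_nlr), _ = curve_fit(lambda d, a, b: a * d ** b, D, B, p0=[0.01, 2.0])

      print(f"LR : a={np.exp(ln_a):.4f}, b={b_lr:.3f}")
      print(f"NLR: a={a_nlr:.4f}, b={b_nlr:.3f}")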

  6. Dynamic strains for earthquake source characterization

    USGS Publications Warehouse

    Barbour, Andrew J.; Crowell, Brendan W

    2017-01-01

    Strainmeters measure elastodynamic deformation associated with earthquakes over a broad frequency band, with detection characteristics that complement traditional instrumentation, but they are commonly used to study slow transient deformation along active faults and at subduction zones, for example. Here, we analyze dynamic strains at Plate Boundary Observatory (PBO) borehole strainmeters (BSM) associated with 146 local and regional earthquakes from 2004–2014, with magnitudes from M 4.5 to 7.2. We find that peak values in seismic strain can be predicted from a general regression against distance and magnitude, with improvements in accuracy gained by accounting for biases associated with site–station effects and source–path effects, the latter exhibiting the strongest influence on the regression coefficients. To account for the influence of these biases in a general way, we include crustal‐type classifications from the CRUST1.0 global velocity model, which demonstrates that high‐frequency strain data from the PBO BSM network carry information on crustal structure and fault mechanics: earthquakes nucleating offshore on the Blanco fracture zone, for example, generate consistently lower dynamic strains than earthquakes around the Sierra Nevada microplate and in the Salton trough. Finally, we test our dynamic strain prediction equations on the 2011 M 9 Tohoku‐Oki earthquake, specifically continuous strain records derived from triangulation of 137 high‐rate Global Navigation Satellite System Earth Observation Network stations in Japan. Moment magnitudes inferred from these data and the strain model are in agreement when Global Positioning System subnetworks are unaffected by spatial aliasing.
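
    The general form of such a peak-strain regression can be sketched as follows; the coefficients, noise level and events below are synthetic placeholders rather than the published PBO results:

      import numpy as np

      rng = np.random.default_rng(4)
      n = 146
      M = rng.uniform(4.5, 7.2, n)                        # magnitude
      R = rng.uniform(20, 600, n)                         # hypocentral distance (km)
      log_strain = -9.0 + 1.0 * M - 1.4 * np.log10(R) + rng.normal(0, 0.3, n)   # log10 peak strain

      # design matrix [1, M, log10 R]; ordinary least squares for the coefficients
      A = np.column_stack([np.ones(n), M, np.log10(R)])
      coef, *_ = np.linalg.lstsq(A, log_strain, rcond=None)
      print("intercept, magnitude term, distance term:", np.round(coef, 2))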

  7. Statistical-dynamical modeling of the cloud-to-ground lightning activity in Portugal

    NASA Astrophysics Data System (ADS)

    Sousa, J. F.; Fragoso, M.; Mendes, S.; Corte-Real, J.; Santos, J. A.

    2013-10-01

    The present study employs a dataset of cloud-to-ground discharges over Portugal, collected by the Portuguese lightning detection network in the period 2003-2009, to identify dynamically coherent lightning regimes in Portugal and to implement statistical-dynamical modeling of the daily discharges over the country. For this purpose, the high-resolution MERRA reanalysis is used. Three lightning regimes are then identified for Portugal: WREG, WREM and SREG. WREG is a typical cold-core cut-off low. WREM is connected to strong frontal systems driven by remote low pressure systems at higher latitudes over the North Atlantic. SREG is a combination of an inverted trough and a mid-tropospheric cold core near Portugal. The statistical-dynamical modeling is based on logistic regressions (statistical component) developed for each regime separately (dynamical component). It is shown that the strength of the lightning activity (either strong or weak) for each regime is consistently modeled by a set of suitable dynamical predictors (65-70% efficiency). The difference in equivalent potential temperature across the 700-500 hPa layer is the best predictor for all three regimes, while the best 4-layer lifted index is still important for all regimes, but with much weaker significance. Six other predictors are each more suitable for a specific regime. For the purpose of validating the modeling approach, a regional-scale climate model simulation is carried out for a very intense lightning episode.
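
    A minimal sketch of the statistical component, under assumed predictor distributions and coefficients (not the study's data), is a per-regime logistic regression of strong versus weak lightning days:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(5)
      n = 400
      d_theta_e = rng.normal(0, 5, n)     # 700-500 hPa equivalent potential temperature difference (K)
      lifted_idx = rng.normal(0, 3, n)    # best 4-layer lifted index (K)
      p = 1 / (1 + np.exp(-(-0.5 - 0.6 * d_theta_e - 0.2 * lifted_idx)))
      strong_day = (rng.uniform(size=n) < p).astype(int)  # 1 = strong lightning activity

      X = np.column_stack([d_theta_e, lifted_idx])
      clf = LogisticRegression().fit(X, strong_day)
      print("in-sample hit rate:", clf.score(X, strong_day))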

  8. System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.

    2011-01-01

    Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component for these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of the unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept-wing unmanned aerial combat vehicle. Differences in response predictions, parameter estimates, and standard errors are compared and discussed.

  9. Semiparametric Identification of Human Arm Dynamics for Flexible Control of a Functional Electrical Stimulation Neuroprosthesis

    PubMed Central

    Schearer, Eric M.; Liao, Yu-Wei; Perreault, Eric J.; Tresch, Matthew C.; Memberg, William D.; Kirsch, Robert F.; Lynch, Kevin M.

    2016-01-01

    We present a method to identify the dynamics of a human arm controlled by an implanted functional electrical stimulation neuroprosthesis. The method uses Gaussian process regression to predict shoulder and elbow torques given the shoulder and elbow joint positions and velocities and the electrical stimulation inputs to muscles. We compare the accuracy of torque predictions of nonparametric, semiparametric, and parametric model types. The most accurate of the three model types is a semiparametric Gaussian process model that combines the flexibility of a black box function approximator with the generalization power of a parameterized model. The semiparametric model predicted torques during stimulation of multiple muscles with errors less than 20% of the total muscle torque and passive torque needed to drive the arm. The identified model allows us to define an arbitrary reaching trajectory and approximately determine the muscle stimulations required to drive the arm along that trajectory. PMID:26955041
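
    A minimal semiparametric sketch in the spirit of this approach (synthetic inputs and torques, not the neuroprosthesis recordings): a linear parametric model captures the gross torque trend and a Gaussian process is fitted to the residual nonlinearity.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(6)
      n = 300
      X = np.column_stack([
          rng.uniform(-1, 1, n),   # joint position (normalized)
          rng.uniform(-1, 1, n),   # joint velocity (normalized)
          rng.uniform(0, 1, n),    # muscle stimulation level
      ])
      torque = (2.0 * X[:, 0] - 1.0 * X[:, 1] + 3.0 * X[:, 2]
                + 0.8 * np.sin(3 * X[:, 0]) * X[:, 2] + 0.1 * rng.normal(size=n))

      linear = LinearRegression().fit(X, torque)              # parametric part
      resid = torque - linear.predict(X)
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-2),
                                    normalize_y=True).fit(X, resid)   # nonparametric part

      pred = linear.predict(X) + gp.predict(X)
      print("relative RMS error:", np.sqrt(np.mean((pred - torque) ** 2)) / np.std(torque))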

  10. Mutual Group Hypnosis: A Social Interaction Analysis.

    ERIC Educational Resources Information Center

    Sanders, Shirley

    Mutual Group Hypnosis is discussed in terms of its similarity to group dynamics in general and in terms of its similarity to a social interaction program (Role Modeling) designed to foster the expression of warmth and acceptance among group members. Hypnosis also fosters a regression to prelogical thought processes in the service of the ego. Group…

  11. Radioecological modelling of Polonium-210 and Caesium-137 in lichen-reindeer-man and top predators.

    PubMed

    Persson, Bertil R R; Gjelsvik, Runhild; Holm, Elis

    2018-06-01

    This work deals with analysis and modelling of the radionuclides 210Pb and 210Po in the food chain lichen-reindeer-man, in addition to 210Po and 137Cs in top predators. Using Partial Least Squares Regression (PLSR), the atmospheric deposition of 210Pb and 210Po is predicted at the sample locations. Dynamic modelling of the activity concentration with differential equations is fitted to the sample data. Reindeer lichen consumption, gastrointestinal absorption, organ distribution and elimination are derived from information in the literature. Dynamic modelling of the transfer of 210Pb and 210Po to reindeer meat, liver and bone from lichen consumption fitted well with data from Sweden and Finland from 1966 to 1971. The activity concentration of 210Pb in the human skeleton is modelled using results from studies of the kinetics of lead in skeleton and blood of lead-workers after the end of occupational exposure. The modelled 210Pb and 210Po activity in the skeleton matched well with concentrations of 210Pb and 210Po in teeth from reindeer-breeders and autopsy bone samples in Finland. Previously published results for 210Po and 137Cs in different tissues of wolf, wolverine and lynx are analysed with multivariate data processing methods such as Principal Component Analysis (PCA) and modelled with Projection to Latent Structures (PLS), also known as Partial Least Squares Regression (PLSR). Copyright © 2017 Elsevier Ltd. All rights reserved.
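
    Very loosely (and not with the authors' model structure or data), the dynamic-transfer step can be sketched as a one-compartment model with constant intake and first-order elimination, fitted to synthetic activity-concentration values:

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import curve_fit

      t_obs = np.arange(0, 6, dtype=float)                        # years
      c_obs = np.array([0.0, 4.5, 7.0, 8.2, 8.9, 9.1])            # Bq/kg, synthetic observations

      def activity(t, intake, k_elim):
          """Activity concentration from constant intake and first-order elimination."""
          sol = solve_ivp(lambda tt, c: intake - k_elim * c, (0.0, t.max()), [0.0], t_eval=t)
          return sol.y[0]

      (intake, k_elim), _ = curve_fit(activity, t_obs, c_obs, p0=[5.0, 0.5])
      print(f"fitted intake rate {intake:.2f} Bq/kg/yr, elimination rate {k_elim:.2f} 1/yr")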

  12. Injury risk functions based on population-based finite element model responses: Application to femurs under dynamic three-point bending.

    PubMed

    Park, Gwansik; Forman, Jason; Kim, Taewung; Panzer, Matthew B; Crandall, Jeff R

    2018-02-28

    The goal of this study was to explore a framework for developing injury risk functions (IRFs) in a bottom-up approach based on responses of parametrically variable finite element (FE) models representing exemplar populations. First, a parametric femur modeling tool was developed and validated using a subject-specific (SS)-FE modeling approach. Second, principal component analysis and regression were used to identify parametric geometric descriptors of the human femur and the distribution of those factors for 3 target occupant sizes (5th, 50th, and 95th percentile males). Third, distributions of material parameters of cortical bone were obtained from the literature for 3 target occupant ages (25, 50, and 75 years) using regression analysis. A Monte Carlo method was then implemented to generate populations of FE models of the femur for the target occupants, using the parametric femur modeling tool. Simulations were conducted with each of these models under 3-point dynamic bending. Finally, model-based IRFs were developed using logistic regression analysis, based on the moment at fracture observed in the FE simulations. In total, 100 femur FE models incorporating the variation in the population of interest were generated, and 500,000 moments at fracture were observed (applying 5,000 ultimate strain values to each of the 100 synthesized femur FE models) for each set of target occupant characteristics. Using the framework proposed in this study, model-based IRFs were developed for 3 target male occupant sizes (5th, 50th, and 95th percentiles) and ages (25, 50, and 75 years). The model-based IRF lay within the 95% confidence interval of the test-based IRF over the 15 to 70% injury-risk range. The 95% confidence interval of the developed IRF was nearly coincident with the mean curve because of the large number of data points. The framework proposed in this study is beneficial for developing IRFs in a bottom-up manner, with their range of variability informed by the population-based FE model responses. Specifically, this method mitigates the uncertainties in applying empirical scaling and may improve IRF fidelity when a limited number of experimental specimens are available.
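
    The final logistic-regression step can be illustrated with a hedged, synthetic stand-in for the Monte Carlo output (the moments and fracture outcomes below are simulated placeholders, not the FE results):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(7)
      moment = rng.uniform(100, 500, 2000)                  # applied bending moment (Nm)
      capacity = rng.normal(330, 60, moment.size)           # per-sample fracture capacity (Nm)
      fracture = (moment > capacity).astype(int)            # 1 = fracture observed

      irf = LogisticRegression().fit(moment.reshape(-1, 1), fracture)
      grid = np.array([[200.0], [300.0], [400.0]])
      print("injury risk at 200/300/400 Nm:", np.round(irf.predict_proba(grid)[:, 1], 2))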

  13. Regionalization of meso-scale physically based nitrogen modeling outputs to the macro-scale by the use of regression trees

    NASA Astrophysics Data System (ADS)

    Künne, A.; Fink, M.; Kipka, H.; Krause, P.; Flügel, W.-A.

    2012-06-01

    In this paper, a method is presented to estimate excess nitrogen on large scales while considering single-field processes. The approach was implemented by using the physically based model J2000-S to simulate the nitrogen balance as well as the hydrological dynamics within meso-scale test catchments. The model input data, the parameterization, the results and a detailed system understanding were used to generate the regression tree models with GUIDE (Loh, 2002). For each landscape type in the federal state of Thuringia, a regression tree was calibrated and validated using the model data and excess nitrogen results from the test catchments. Hydrological parameters such as precipitation and evapotranspiration were also used to predict excess nitrogen with the regression tree model; hence they had to be calculated and regionalized for the state of Thuringia as well. Here the model J2000g was used to simulate the water balance on the macro scale. With the regression trees, the excess nitrogen was regionalized for each landscape type of Thuringia. The approach allows calculating the potential nitrogen input into the streams of the drainage area. The results show that the applied methodology was able to transfer the detailed model results of the meso-scale catchments to the entire state of Thuringia with low computing time and without losing the detailed knowledge from the nitrogen transport modeling. This was validated against modeling results from Fink (2004) in a catchment lying in the regionalization area; the regionalized and modeled excess nitrogen show a correspondence of 94%. The study was conducted within the framework of a project in collaboration with the Thuringian Environmental Ministry, whose overall aim was to assess the effect of agro-environmental measures regarding load reduction in the water bodies of Thuringia to fulfill the requirements of the European Water Framework Directive (Bäse et al., 2007; Fink, 2006; Fink et al., 2007).

  14. A springy pendulum could describe the swing leg kinetics of human walking.

    PubMed

    Song, Hyunggwi; Park, Heewon; Park, Sukyung

    2016-06-14

    The dynamics of human walking during various walking conditions could be qualitatively captured by the springy legged dynamics, which have been used as a theoretical framework for bipedal robotics applications. However, the spring-loaded inverted pendulum model describes the motion of the center of mass (CoM), which combines the torso, swing and stance legs together and does not explicitly inform us as to whether the inter-limb dynamics share the springy legged dynamics characteristics of the CoM. In this study, we examined whether the swing leg dynamics could also be represented by springy mechanics and whether the swing leg stiffness shows a dependence on gait speed, as has been observed in CoM mechanics during walking. The swing leg was modeled as a spring-loaded pendulum hinged at the hip joint, which is under forward motion. The model parameters of the loaded mass were adopted from body parameters and anthropometric tables, whereas the free model parameters for the rest length of the spring and its stiffness were estimated to best match the data for the swing leg joint forces. The joint forces of the swing leg were well represented by the springy pendulum model at various walking speeds, with a regression coefficient of R² > 0.8. The swing leg stiffness increased with walking speed and was correlated with the swing frequency, which is consistent with previous observations from CoM dynamics described using the compliant leg. These results suggest that the swing leg also shares the springy dynamics, and the compliant walking model could be extended to better represent swing leg dynamics. Copyright © 2016 Elsevier Ltd. All rights reserved.
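
    A minimal elastic-pendulum sketch of the springy swing-leg idea (fixed hip pivot and illustrative parameters, which are assumptions rather than the paper's fitted values):

      import numpy as np
      from scipy.integrate import solve_ivp

      m, k, l0, g = 10.0, 400.0, 0.9, 9.81        # assumed leg mass, stiffness, rest length, gravity

      def elastic_pendulum(t, s):
          r, dr, th, dth = s                      # leg length, its rate, hip angle, its rate
          ddr = r * dth ** 2 + g * np.cos(th) - (k / m) * (r - l0)
          ddth = -(2 * dr * dth + g * np.sin(th)) / r
          return [dr, ddr, dth, ddth]

      sol = solve_ivp(elastic_pendulum, (0.0, 0.5), [l0, 0.0, 0.4, -2.0], max_step=1e-3)
      radial_force = k * (sol.y[0] - l0)          # spring (radial) component of the hip joint force
      print("peak radial spring force (N):", np.abs(radial_force).max())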

  15. Study on the dynamic recrystallization model and mechanism of nuclear grade 316LN austenitic stainless steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Shenglong; Zhang, Mingxian; Wu, Huanchun

    In this study, the dynamic recrystallization (DRX) behavior of a nuclear grade 316LN austenitic stainless steel was investigated through hot compression experiments performed on a Gleeble-1500 simulator at temperatures of 900–1250 °C and strain rates of 0.01–1 s⁻¹. By multiple linear regression of the flow stress-strain data, dynamic recrystallization mathematical models of this steel as functions of strain rate, strain and temperature were developed, and these models were then verified experimentally. Furthermore, the dynamic recrystallization mechanism of the steel was determined. The results indicate that the subgrains in this steel form through dislocation polygonization and then grow through subgrain boundary migration toward areas of high dislocation density and through subgrain coalescence. Dynamic recrystallization nucleation proceeds by grain boundary bulging and subgrain growth, and the nuclei grow through high-angle grain boundary migration. Highlights: the DRX mathematical models of nuclear grade 316LN stainless steel are established; the DRX mechanism of this steel is determined; subgrains form through dislocation polygonization; subgrains grow through subgrain boundary migration and coalescence; DRX nucleation proceeds by grain boundary bulging and subgrain growth.

  16. High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.

    PubMed

    Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong

    2018-08-01

    This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. The key problem of our method is then to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation from its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by the video's temporal coherence. The optimal estimation of these coefficients is reformulated as a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.

  17. Dynamical Cognitive Models of Social Issues in Russia

    NASA Astrophysics Data System (ADS)

    Mitina, Olga; Abraham, Fred; Petrenko, Victor

    We examine and model dynamics in three areas of social cognition: (1) political transformations within Russia, (2) evaluation of political trends in other countries by Russians, and (3) evaluation of Russian stereotypes concerning women. We try to represent consciousness as vector fields and trajectories in a cognitive state space. We use psychosemantic techniques that allow definition of the state space and the systematic construction of these vector fields and trajectories and their portrait from research data. Then we construct models to fit them, using multiple regression methods to obtain linear differential equations. These dynamical models of social cognition fit the data quite well. (1) The political transformations were modeled by a spiral repellor in a two-dimensional space of a democratic-totalitarian factor and a social depression-optimism factor. (2) The evaluation of foreign political trends included a flow away from a saddle toward more stable and moderate political regimes in a 2D space of democratic-totalitarian and unstable-stable cognitive dimensions. (3) The gender study showed expectations (attractors) for more liberated, emancipated roles for women in the future.

  18. Predicting in ungauged basins using a parsimonious rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Skaugen, Thomas; Olav Peerebom, Ivar; Nilsson, Anna

    2015-04-01

    Prediction in ungauged basins is a demanding but necessary test for hydrological model structures. Ideally, the relationship between model parameters and catchment characteristics (CC) should be hydrologically justifiable. Many studies, however, report failure to obtain significant correlations between model parameters and CCs. Under the hypothesis that the lack of correlations stems from non-identifiability of model parameters caused by overparameterization, the relatively new, parameter-parsimonious DDD (Distance Distribution Dynamics) model was tested for predictions in ungauged basins in Norway. In DDD, the capacity of the subsurface water reservoir M is the only parameter to be calibrated, whereas the runoff dynamics are completely parameterised from observed characteristics derived from GIS and runoff recession analysis. Water is conveyed through the soils to the river network by waves with celerities determined by the level of saturation in the catchment. The distributions of distances from points in the catchment to the nearest river reach, and along the river network, give, together with the celerities, distributions of travel times and, consequently, unit hydrographs. DDD has six fewer parameters to calibrate in the runoff module than, for example, the well-known Swedish HBV model. In this study, multiple regression equations relating CCs and model parameters were trained on 84 calibrated catchments located all over Norway, and all model parameters showed significant correlations with catchment characteristics. The significant correlation coefficients (p-value < 0.05) ranged from 0.22 to 0.55. The suitability of DDD for predictions in ungauged basins was tested for 17 catchments not used to estimate the multiple regression equations. For 10 of the 17 catchments, deviations in the Nash-Sutcliffe Efficiency (NSE) criterion between the calibrated and regionalised model were less than 0.1. The median NSE for the regionalised DDD for the 17 catchments, for two different time series, was 0.66 and 0.72. Deviations in NSE between calibrated and regionalised models are well explained by the deviations between calibrated and regressed parameters describing spatial snow distribution and snowmelt, respectively. This latter result indicates a topic for further improvement in the model structure of DDD.

  19. Self-organising mixture autoregressive model for non-stationary time series modelling.

    PubMed

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  20. Forecast of severe fever with thrombocytopenia syndrome incidence with meteorological factors.

    PubMed

    Sun, Ji-Min; Lu, Liang; Liu, Ke-Ke; Yang, Jun; Wu, Hai-Xia; Liu, Qi-Yong

    2018-06-01

    Severe fever with thrombocytopenia syndrome (SFTS) is an emerging disease, and several studies have reported that SFTS incidence is associated with meteorological factors, but no SFTS forecast model had been reported to date. In this study, we constructed and compared three forecast models: an autoregressive integrated moving average (ARIMA) model, a negative binomial regression model (NBM), and a quasi-Poisson generalized additive model (GAM). The data from 2011 to 2015 were used for model construction and the data from 2016 were used for external validity assessment. All three models fitted the SFTS cases reasonably well during both the training and forecast periods, while the NBM forecasted better than the other two models. Moreover, we demonstrated that temperature and relative humidity played key roles in explaining the temporal dynamics of SFTS occurrence. Our study contributes to a better understanding of SFTS dynamics and provides predictive tools for the control and prevention of SFTS. Copyright © 2018 Elsevier B.V. All rights reserved.
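
    A hedged sketch of one of the three model classes (synthetic monthly counts and covariates, not the surveillance data): a negative binomial regression of cases on temperature and relative humidity using statsmodels.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(8)
      months = 60
      temp = 15 + 10 * np.sin(2 * np.pi * np.arange(months) / 12) + rng.normal(0, 1, months)
      humid = 60 + 15 * np.cos(2 * np.pi * np.arange(months) / 12) + rng.normal(0, 3, months)
      cases = rng.poisson(np.exp(0.5 + 0.08 * temp + 0.01 * humid))   # stand-in monthly SFTS counts

      X = sm.add_constant(np.column_stack([temp, humid]))
      nbm = sm.GLM(cases, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
      print("intercept, temperature and humidity effects:", np.round(nbm.params, 3))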

  1. Friendship Network and Dental Brushing Behavior among Middle School Students: An Agent Based Modeling Approach.

    PubMed

    Sadeghipour, Maryam; Khoshnevisan, Mohammad Hossein; Jafari, Afshin; Shariatpanahi, Seyed Peyman

    2017-01-01

    Using a standard questionnaire, tooth brushing frequency was assessed among 201 adolescent female middle school students in Tehran. The assessment was repeated after 5 months in order to observe the dynamics of dental health behavior. A logistic regression model was used to evaluate the correlation between individuals' dental health behavior and that of their social network. A significant correlation in tooth brushing habits was detected among groups of friends, and this correlation spread further over the network within the 5-month period. Moreover, the average brushing level improved within the same period. Given the significant correlation between a node's in-degree in the social network and its brushing level, the observed improvement was suggested to be partially due to the greater popularity of individuals with better tooth brushing habits. Agent Based Modeling (ABM) was used to reproduce the dynamics of tooth brushing frequency within the friendship network. Two models, with static and dynamic assumptions for the network structure, were proposed; the model with dynamic network structure successfully described the dynamics of dental health behavior. Based on this model, on average a student changes her brushing habit every 43 weeks through learning from her friends. Finally, three training scenarios were tested with these models in order to evaluate their effectiveness. Simulation results showed considerable improvement in overall brushing frequency when more popular students were trained.
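
    A toy agent-based sketch of the peer-learning mechanism described (random friendship network, assumed adoption rate loosely echoing the 43-week figure, and illustrative brushing levels):

      import random

      random.seed(0)
      n_students, weeks, adopt_prob = 201, 20, 1.0 / 43      # assumed weekly adoption probability
      brushing = [random.choice([0, 1, 2]) for _ in range(n_students)]   # 0=rarely, 1=daily, 2=twice daily
      friends = {i: random.sample([j for j in range(n_students) if j != i], 4)
                 for i in range(n_students)}                 # static 4-friend network (simplification)

      for week in range(weeks):
          for i in range(n_students):
              if random.random() < adopt_prob:
                  brushing[i] = brushing[random.choice(friends[i])]      # imitate a random friend

      print("mean brushing level after simulation:", sum(brushing) / n_students)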

  2. Non-linear dynamics in muscle fatigue and strength model during maximal self-perceived elbow extensors training.

    PubMed

    Gacesa, Jelena Popadic; Ivancevic, Tijana; Ivancevic, Nik; Paljic, Feodora Popic; Grujic, Nikola

    2010-08-26

    Our aim was to determine the dynamics of muscle strength increase and fatigue development during repetitive maximal contractions in a specific maximal self-perceived elbow extensor training program. We derive a functional model for m. triceps brachii in the spirit of the traditional Hill two-component muscle model and, after fitting our data, develop a prediction tool for this specific training system. Thirty-six healthy young men (21 +/- 1.0 y, BMI 25.4 +/- 7.2 kg/m²), who did not take part in any formal resistance exercise regime, volunteered for this study. The training protocol was performed on an isoacceleration dynamometer and lasted for 12 weeks, with a frequency of five sessions per week. Each training session included five sets of 10 maximal contractions (elbow extensions) with a 1 min resting period between sets. A non-linear dynamic system model was fitted to our data using the Levenberg-Marquardt regression algorithm. As a proper dynamical system, our functional model of m. triceps brachii can be used for prediction and control: it can predict muscular fatigue within a single series, the cumulative daily muscular fatigue and muscular growth throughout the training process. In conclusion, the application of non-linear dynamics to this particular training model allows us to mathematically explain some functional changes in skeletal muscle as a result of its adaptation to programmed physical activity. Copyright © 2010 Elsevier Ltd. All rights reserved.
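
    Illustratively (and not with the authors' two-component model), a simple exponential fatigue curve can be fitted to within-session force data using the Levenberg-Marquardt algorithm, which scipy.optimize.curve_fit applies to unconstrained problems; the force values below are synthetic.

      import numpy as np
      from scipy.optimize import curve_fit

      reps = np.arange(1, 11)                                            # 10 maximal contractions per set
      force = np.array([100, 97, 93, 90, 86, 84, 82, 80, 79, 78], float) # synthetic %MVC values

      def fatigue(r, f_inf, f0, tau):
          """Force decays from f0 toward a plateau f_inf with decay constant tau (in reps)."""
          return f_inf + (f0 - f_inf) * np.exp(-(r - 1) / tau)

      (f_inf, f0, tau), _ = curve_fit(fatigue, reps, force, p0=[75, 100, 5], method="lm")
      print(f"plateau {f_inf:.1f} %MVC, initial {f0:.1f} %MVC, decay constant {tau:.1f} reps")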

  3. Fluctuations in air pollution give risk warning signals of asthma hospitalization

    NASA Astrophysics Data System (ADS)

    Hsieh, Nan-Hung; Liao, Chung-Min

    2013-08-01

    Recent studies have indicated that air pollution is associated with asthma exacerbations. However, the key link between specific air pollutants and their consequent impact on asthma has not been established. The purpose of this study was to quantify fluctuations in air pollution time-series dynamics and to correlate statistical indicators with age-specific asthma hospital admissions. An indicator-based regression model was developed to predict the time trend of asthma hospital admissions in Taiwan in the period 1998-2010. Five major pollutants were included: particulate matter with aerodynamic diameter less than 10 μm (PM10), ozone (O3), nitrogen dioxide (NO2), sulfur dioxide (SO2), and carbon monoxide (CO). We used Spearman's rank correlation to detect the relationships between time-series-based statistical indicators (standard deviation, coefficient of variation, skewness, and kurtosis) and monthly asthma hospitalizations. We further used an indicator-guided Poisson regression model to test and predict the impact of the target air pollutants on asthma incidence. We show that the standard deviation of the PM10 data was the indicator most correlated with asthma hospitalization for all age groups, particularly for the elderly, whereas the skewness of the O3 data gave the highest correlation for adult asthmatics. The proposed regression model shows better predictability of annual asthma hospitalization trends for pediatric patients. Our results suggest that a set of statistical indicators inferred from time-series information on major air pollutants can provide advance risk warning signals in complex air pollution-asthma systems and aid asthma management, which depends heavily on monitoring the dynamics of asthma incidence and environmental stimuli.
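
    A hedged sketch of the indicator idea (synthetic data and an assumed effect size): a monthly standard deviation of daily PM10 serves as the covariate in a Poisson regression for monthly admissions.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(9)
      n_months = 120
      pm10_sd = rng.gamma(shape=4.0, scale=5.0, size=n_months)       # monthly SD of daily PM10
      admissions = rng.poisson(np.exp(2.0 + 0.03 * pm10_sd))         # synthetic monthly admission counts

      model = sm.GLM(admissions, sm.add_constant(pm10_sd), family=sm.families.Poisson()).fit()
      print("estimated effect of PM10 variability:", round(model.params[1], 3))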

  4. Consistent response of vegetation dynamics to recent climate change in tropical mountain regions.

    PubMed

    Krishnaswamy, Jagdish; John, Robert; Joseph, Shijo

    2014-01-01

    Global climate change has emerged as a major driver of ecosystem change. Here, we present evidence for globally consistent responses in vegetation dynamics to recent climate change in the world's mountain ecosystems located in the pan-tropical belt (30°N-30°S). We analyzed decadal-scale trends and seasonal cycles of vegetation greenness using monthly time series of satellite greenness (Normalized Difference Vegetation Index) and climate data for the period 1982-2006 for 47 mountain protected areas in five biodiversity hotspots. The time series of annual maximum NDVI for each of five continental regions shows mild greening trends followed by reversal to stronger browning trends around the mid-1990s. During the same period we found increasing trends in temperature but only marginal change in precipitation. The amplitude of the annual greenness cycle increased with time, and was strongly associated with the observed increase in temperature amplitude. We applied dynamic models with time-dependent regression parameters to study the time evolution of NDVI-climate relationships. We found that the relationship between vegetation greenness and temperature weakened over time or was negative. Such loss of positive temperature sensitivity has been documented in other regions as a response to temperature-induced moisture stress. We also used dynamic models to extract the trends in vegetation greenness that remain after accounting for the effects of temperature and precipitation. We found residual browning and greening trends in all regions, which indicate that factors other than temperature and precipitation also influence vegetation dynamics. Browning rates became progressively weaker with increase in elevation as indicated by quantile regression models. Tropical mountain vegetation is considered sensitive to climatic changes, so these consistent vegetation responses across widespread regions indicate persistent global-scale effects of climate warming and associated moisture stresses. © 2013 John Wiley & Sons Ltd.

  5. Sparse learning of stochastic dynamical equations

    NASA Astrophysics Data System (ADS)

    Boninsegna, Lorenzo; Nüske, Feliks; Clementi, Cecilia

    2018-06-01

    With the rapid increase of available data for complex systems, there is great interest in the extraction of physically relevant information from massive datasets. Recently, a framework called Sparse Identification of Nonlinear Dynamics (SINDy) has been introduced to identify the governing equations of dynamical systems from simulation data. In this study, we extend SINDy to stochastic dynamical systems which are frequently used to model biophysical processes. We prove the asymptotic correctness of stochastic SINDy in the infinite data limit, both in the original and projected variables. We discuss algorithms to solve the sparse regression problem arising from the practical implementation of SINDy and show that cross validation is an essential tool to determine the right level of sparsity. We demonstrate the proposed methodology on two test systems, namely, the diffusion in a one-dimensional potential and the projected dynamics of a two-dimensional diffusion process.
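
    The sparse-regression core of SINDy (sequentially thresholded least squares) can be sketched generically as below; this toy uses a deterministic one-dimensional system rather than the stochastic setting the paper extends, and the candidate library, threshold and trajectories are illustrative choices.

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, x):                                        # true (hidden) dynamics: dx/dt = x - x^3
          return x - x ** 3

      t_eval = np.linspace(0, 2, 400)
      xs, dxs = [], []
      for x0 in (-2.0, -0.4, 0.5, 2.0):                     # several short trajectories for coverage
          sol = solve_ivp(rhs, (0, 2), [x0], t_eval=t_eval, rtol=1e-8)
          xs.append(sol.y[0])
          dxs.append(np.gradient(sol.y[0], t_eval))         # derivative estimated from the data
      x = np.concatenate(xs)
      dxdt = np.concatenate(dxs)

      library = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3, x ** 4])
      names = ["1", "x", "x^2", "x^3", "x^4"]

      def stlsq(theta, dx, threshold=0.2, iters=10):
          """Sequentially thresholded least squares, the sparse regression inside SINDy."""
          xi, *_ = np.linalg.lstsq(theta, dx, rcond=None)
          for _ in range(iters):
              small = np.abs(xi) < threshold
              xi[small] = 0.0
              xi[~small], *_ = np.linalg.lstsq(theta[:, ~small], dx, rcond=None)
          return xi

      coeffs = stlsq(library, dxdt)
      print({name: round(c, 3) for name, c in zip(names, coeffs) if c != 0.0})
      # expected to recover something close to dx/dt = 1.0*x - 1.0*x^3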

  6. Learning-based deformable image registration for infant MR images in the first year of life.

    PubMed

    Hu, Shunbo; Wei, Lifang; Gao, Yaozong; Guo, Yanrong; Wu, Guorong; Shen, Dinggang

    2017-01-01

    Many brain development studies have been devoted to investigating dynamic structural and functional changes in the first year of life. To quantitatively measure brain development in such a dynamic period, accurate image registration for different infant subjects with possibly large age gaps is in high demand. Although many state-of-the-art image registration methods have been proposed for young and elderly brain images, very few registration methods work for infant brain images acquired in the first year of life, because of (a) large anatomical changes due to fast brain development and (b) dynamic appearance changes due to white-matter myelination. To address these two difficulties, we propose a learning-based registration method to not only align the anatomical structures but also alleviate the appearance differences between two arbitrary infant MR images (with large age gap) by leveraging the regression forest to predict both the initial displacement vector and appearance changes. Specifically, in the training stage, two regression models are trained separately, with (a) one model learning the relationship between local image appearance (of one development phase) and its displacement toward the template (of another development phase) and (b) another model learning the local appearance changes between the two brain development phases. Then, in the testing stage, to register a new infant image to the template, we first predict both its voxel-wise displacement and appearance changes with the two learned regression models. Since such initializations can alleviate significant appearance and shape differences between the new infant image and the template, it is easy to use a conventional registration method to refine the remaining registration. We apply our proposed registration method to align 24 infant subjects at five different time points (i.e., 2-week-old, 3-month-old, 6-month-old, 9-month-old, and 12-month-old), and achieve more accurate and robust registration results compared to the state-of-the-art registration methods. The proposed learning-based registration method addresses the challenging task of registering infant brain images and achieves higher registration accuracy compared with other counterpart registration methods. © 2016 American Association of Physicists in Medicine.

  7. Estimation of continuous multi-DOF finger joint kinematics from surface EMG using a multi-output Gaussian Process.

    PubMed

    Ngeo, Jimson; Tamei, Tomoya; Shibata, Tomohiro

    2014-01-01

    Surface electromyographic (EMG) signals have often been used in estimating upper and lower limb dynamics and kinematics for the purpose of controlling robotic devices such as robotic prostheses and finger exoskeletons. However, when estimating kinematics for multiple or many degrees of freedom (DOF) from EMG, the output DOFs are usually estimated independently. In this study, we estimate finger joint kinematics from EMG signals using a multi-output convolved Gaussian Process (Multi-output Full GP) that considers dependencies between outputs. We show that estimation of finger joints from muscle activation inputs can be improved by using a regression model that considers inherent coupling or correlation within the hand and finger joints. We also provide a comparison of estimation performance between different regression methods, such as Artificial Neural Networks (ANN), which are used in many of the related studies. We show that using a multi-output GP gives improved estimation compared to multi-output ANN and even dedicated or independent regression models.

  8. [Radiotherapy and chaos theory: the tit bird and the butterfly...].

    PubMed

    Denis, F; Letellier, C

    2012-09-01

    Although the same simple laws govern cancer outcome (cell division repeated again and again), each tumour has a different outcome before as well as after irradiation therapy. The linear-quadratic radiosensitivity model allows an assessment of tumour sensitivity to radiotherapy. This model presents some limitations in clinical practice because it does not take into account the interactions between tumour cells and non-tumoral bystander cells (such as endothelial cells, fibroblasts, immune cells...) that modulate radiosensitivity and tumour growth dynamics. These interactions can lead to non-linear, complex tumour growth that appears random but is not, since spontaneous tumour regression is rare. In this paper we propose to develop a deterministic approach to tumour growth dynamics using chaos theory. Various characteristics of cancer dynamics and tumour radiosensitivity can be explained using mathematical models of competing cell species. Copyright © 2012 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.

  9. Artificial neural networks and multiple linear regression model using principal components to estimate rainfall over South America

    NASA Astrophysics Data System (ADS)

    Soares dos Santos, T.; Mendes, D.; Rodrigues Torres, R.

    2016-01-01

    Several studies have been devoted to dynamic and statistical downscaling for analysis of both climate variability and climate change. This paper introduces an application of artificial neural networks (ANNs) and multiple linear regression (MLR) by principal components to estimate rainfall in South America. This method is proposed for downscaling monthly precipitation time series over South America for three regions: the Amazon; northeastern Brazil; and the La Plata Basin, which is one of the regions of the planet that will be most affected by the climate change projected for the end of the 21st century. The downscaling models were developed and validated using CMIP5 model output and observed monthly precipitation. We used general circulation model (GCM) experiments for the 20th century (RCP historical; 1970-1999) and two scenarios (RCP 2.6 and 8.5; 2070-2100). The model test results indicate that the ANNs significantly outperform the MLR downscaling of monthly precipitation variability.
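
    A hedged sketch of the MLR-by-principal-components baseline (synthetic predictor fields and precipitation, placeholder dimensions): reduce the large-scale predictors with PCA, then regress precipitation on the leading components.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(10)
      n_months, n_gridpoints = 360, 50
      gcm_fields = rng.normal(size=(n_months, n_gridpoints))               # large-scale GCM predictors
      precip = gcm_fields[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.5, n_months)

      downscaler = make_pipeline(PCA(n_components=5), LinearRegression())
      downscaler.fit(gcm_fields[:300], precip[:300])                       # calibration period
      print("validation R^2:", downscaler.score(gcm_fields[300:], precip[300:]))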

  10. Combustion performance and scale effect from N2O/HTPB hybrid rocket motor simulations

    NASA Astrophysics Data System (ADS)

    Shan, Fanli; Hou, Lingyun; Piao, Ying

    2013-04-01

    An HRM code for the simulation of N2O/HTPB hybrid rocket motor operation and scale effect analysis has been developed. This code can be used to calculate motor thrust and distributions of physical properties inside the combustion chamber and nozzle during the operational phase by solving the unsteady Navier-Stokes equations using a corrected compressible difference scheme and a two-step, five-species combustion model. A dynamic fuel surface regression technique and a two-step calculation method, together with the gas-solid coupling, are applied in the calculation of fuel regression and the determination of the combustion chamber wall profile as the fuel regresses. Both the calculated motor thrust from start-up to shut-down mode and the combustion chamber wall profile after motor operation are in good agreement with experimental data. The fuel regression rate equation and the relation between fuel regression rate and axial distance have been derived. Analysis of the results suggests combustion performance improvements to the current hybrid rocket motor design and explains scale effects in the variation of fuel regression rate with combustion chamber diameter.

  11. Future disability projections could be improved by connecting to the theory of a dynamic equilibrium.

    PubMed

    Klijs, Bart; Mackenbach, Johan P; Kunst, Anton E

    2011-04-01

    Projections of future trends in the burden of disability could be guided by models linking disability to life expectancy, such as the dynamic equilibrium theory. This article tests the key assumption of this theory that severe disability is associated with proximity to death, whereas mild disability is not. Using data from the GLOBE study (Gezondheid en Levensomstandigheden Bevolking Eindhoven en omstreken), the association of three levels of self-reported disabilities in activities of daily living with age and proximity to death was studied using logistic regression models. Regression estimates were used to estimate the number of life years with disability for life spans of 75 and 85 years. Odds ratios of 0.976 (not significant) for mild disability, 1.137 for moderate disability, and 1.231 for severe disability showed a stronger effect of proximity to death for more severe levels of disability. A 10-year increase of life span was estimated to result in a substantial expansion of mild disability (4.6 years) compared with a small expansion of moderate (0.7 years) and severe (0.9 years) disability. These findings support the theory of a dynamic equilibrium. Projections of the future burden of disability could be substantially improved by connecting to this theory and incorporating information on proximity to death. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Application of Multiregressive Linear Models, Dynamic Kriging Models and Neural Network Models to Predictive Maintenance of Hydroelectric Power Systems

    NASA Astrophysics Data System (ADS)

    Lucifredi, A.; Mazzieri, C.; Rossi, M.

    2000-05-01

    Since the operational conditions of a hydroelectric unit can vary within a wide range, the monitoring system must be able to distinguish between variations of the monitored variable caused by changes in operating conditions and those due to the onset and progression of failures and misoperations. The paper aims to identify the best technique to adopt for the monitoring system. Three different methods have been implemented and compared. Two of them use statistical techniques: the first, multiple linear regression, expresses the monitored variable as a linear function of the process parameters (independent variables), while the second, the dynamic kriging technique, is a modified form of multiple linear regression that represents the monitored variable as a linear combination of the process variables chosen to minimize the variance of the estimation error. The third is based on neural networks. Tests have shown that the monitoring system based on the kriging technique is not affected by some problems common to the other two models, e.g. the requirement of a large amount of data for tuning (both for training the neural network and for defining the optimum plane for the multiple regression), not only in the system start-up phase but also after routine maintenance involving the substitution of machinery components with a direct impact on the observed variable, and the need for different models to satisfactorily describe the different operating ranges of the plant. The monitoring system based on the kriging technique overcomes these difficulties: it does not require a large amount of data for tuning and is immediately operational (given two points, the third can be immediately estimated), and the model follows the system without adapting itself to it. The results of the experiments performed indicate that a model based on a neural network or on multiple linear regression is not optimal, and that a different approach is necessary to reduce the amount of work during the learning phase, using, when available, all the information stored during the initial phase of the plant to build the reference baseline and, where appropriate, elaborating the available raw information. A mixed approach combining the kriging statistical technique and neural network techniques could optimise the result.

  13. An evaluation of supervised classifiers for indirectly detecting salt-affected areas at irrigation scheme level

    NASA Astrophysics Data System (ADS)

    Muller, Sybrand Jacobus; van Niekerk, Adriaan

    2016-07-01

    Soil salinity often leads to reduced crop yield and quality and can render soils barren. Irrigated areas are particularly at risk due to intensive cultivation and secondary salinization caused by waterlogging. Regular monitoring of salt accumulation in irrigation schemes is needed to keep its negative effects under control. The dynamic spatial and temporal characteristics of remote sensing can provide a cost-effective solution for monitoring salt accumulation at irrigation scheme level. This study evaluated a range of pan-fused SPOT-5 derived features (spectral bands, vegetation indices, image textures and image transformations) for classifying salt-affected areas in two distinctly different irrigation schemes in South Africa, namely Vaalharts and Breede River. The relationships between the input features and electrical conductivity measurements were investigated using regression modelling (stepwise linear regression, partial least squares regression, curve fit regression modelling) and supervised classification (maximum likelihood, nearest neighbour, decision tree analysis, support vector machine and random forests). Classification and regression trees and random forest were used to select the most important features for differentiating salt-affected and unaffected areas. The results showed that the regression analyses produced weak models (R² < 0.4). Better results were achieved using the supervised classifiers, but the algorithms tended to over-estimate salt-affected areas. A key finding was that none of the feature sets or classification algorithms stood out as being superior for monitoring salt accumulation at irrigation scheme level. This was attributed to the large variations in the spectral responses of different crop types at different growing stages, coupled with their individual tolerances to saline conditions.

  14. Understanding African Swine Fever infection dynamics in Sardinia using a spatially explicit transmission model in domestic pig farms.

    PubMed

    Mur, L; Sánchez-Vizcaíno, J M; Fernández-Carrión, E; Jurado, C; Rolesu, S; Feliziani, F; Laddomada, A; Martínez-López, B

    2018-02-01

    African swine fever virus (ASFV) has been endemic in Sardinia since 1978, resulting in severe losses for local pig producers and creating important problems for the island's veterinary authorities. This study used a spatially explicit stochastic transmission model followed by two regression models to investigate the dynamics of ASFV spread amongst domestic pig farms, to identify geographic areas at highest risk and determine the role of different susceptible pig populations (registered domestic pigs, non-registered domestic pigs [brado] and wild boar) in ASF occurrence. We simulated transmission within and between farms using an adapted version of the previously described model known as Be-FAST. Results from the model revealed a generally low diffusion of ASF in Sardinia, with only 24% of the simulations resulting in disease spread, and for each simulated outbreak on average only four farms and 66 pigs were affected. Overall, local spread (indirect transmission between farms within a 2 km radius through fomites) was the most common route of transmission, being responsible for 98.6% of secondary cases. The risk of ASF occurrence for each domestic pig farm was estimated from the spread model results and integrated in two regression models together with available data for brado and wild boar populations. There was a significant association between the density of all three populations (domestic pigs, brado, and wild boar) and ASF occurrence in Sardinia. The most significant risk factors were the high densities of brado (OR = 2.2) and wild boar (OR = 2.1). The results of both analyses demonstrated that ASF epidemiology and infection dynamics in Sardinia create a complex and multifactorial disease situation, where all susceptible populations play an important role. To stop ASF transmission in Sardinia, three main factors (improving biosecurity on domestic pig farms, eliminating brado practices and better management of wild boars) need to be addressed. © 2017 Blackwell Verlag GmbH.

  15. Adjustments to de Leva-anthropometric regression data for the changes in body proportions in elderly humans.

    PubMed

    Ho Hoang, Khai-Long; Mombaur, Katja

    2015-10-15

    Dynamic modeling of the human body is an important tool to investigate the fundamentals of the biomechanics of human movement. To model the human body in terms of a multi-body system, it is necessary to know the anthropometric parameters of the body segments. For young healthy subjects, several data sets exist that are widely used in the research community, e.g. the tables provided by de Leva. No such comprehensive anthropometric parameter sets exist for elderly people. It is, however, well known that body proportions change significantly during aging, e.g. due to degenerative effects in the spine, such that parameters for young people cannot be used for realistically simulating the dynamics of elderly people. In this study, regression equations are derived from the inertial parameters, center of mass positions, and body segment lengths provided by de Leva to be adjustable to the changes in proportion of the body parts of male and female humans due to aging. Additional adjustments are made to the reference points of the parameters for the upper body segments, as these are chosen in a more practicable way for creating a multi-body model in a chain structure with the pelvis representing the most proximal segment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Adaptive Anchoring Model: How Static and Dynamic Presentations of Time Series Influence Judgments and Predictions.

    PubMed

    Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick

    2018-01-01

    When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can also be retrospectively viewed simultaneously (static mode), not experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  17. Diagnosing and Reconstructing Real-World Hydroclimatic Dynamics from Time Sequenced Data: The Case of Saltwater Intrusion into Coastal Wetlands in Everglades National Park

    NASA Astrophysics Data System (ADS)

    Huffaker, R.; Munoz-Carpena, R.

    2016-12-01

    There are increasing calls to audit decision-support models used for environmental policy to ensure that they correspond with the reality facing policy makers. Modelers can establish correspondence by providing empirical evidence of real-world dynamic behavior that their models skillfully simulate. We present a pre-modeling diagnostic framework—based on nonlinear dynamic analysis—for detecting and reconstructing real-world environmental dynamics from observed time-sequenced data. Phenomenological (data-driven) modeling—based on machine learning regression techniques—extracts a set of ordinary differential equations governing empirically-diagnosed system dynamics from a single time series, or from multiple time series on causally-interacting variables. We apply the framework to investigate saltwater intrusion into coastal wetlands in Everglades National Park, Florida, USA. We test the following hypotheses posed in the literature linking regional hydrologic variables with global climatic teleconnections: (1) Sea level in Florida Bay drives well level and well salinity in the coastal Everglades; (2) the Atlantic Multidecadal Oscillation (AMO) drives sea level, well level and well salinity; and (3) the AMO and the El Niño Southern Oscillation (ENSO) bi-causally interact. The thinking is that saltwater intrusion links ocean-surface salinity with the salinity of inland water sources, and sea level with inland water levels; that AMO and ENSO share a teleconnective relationship (perhaps through the atmosphere); and that AMO and ENSO both influence inland precipitation and thus well levels. Our results support these hypotheses, and we successfully construct a parsimonious phenomenological model that reproduces the diagnosed nonlinear dynamics and system interactions. We propose that reconstructed data dynamics be used, along with other expert information, as a rigorous benchmark to guide specification and testing of hydrologic decision support models corresponding with real-world behavior.

  18. Quantitative monitoring of sucrose, reducing sugar and total sugar dynamics for phenotyping of water-deficit stress tolerance in rice through spectroscopy and chemometrics

    NASA Astrophysics Data System (ADS)

    Das, Bappa; Sahoo, Rabi N.; Pargal, Sourabh; Krishna, Gopal; Verma, Rakesh; Chinnusamy, Viswanathan; Sehgal, Vinay K.; Gupta, Vinod K.; Dash, Sushanta K.; Swain, Padmini

    2018-03-01

    In the present investigation, the changes in sucrose, reducing and total sugar content due to water-deficit stress in rice leaves were modeled using visible, near infrared (VNIR) and shortwave infrared (SWIR) spectroscopy. The objectives of the study were to identify the best vegetation indices and the most suitable multivariate technique based on precise analysis of hyperspectral data (350 to 2500 nm) and sucrose, reducing sugar and total sugar content measured at different stress levels in 16 different rice genotypes. Spectral data analysis was done to identify suitable spectral indices and models for sucrose estimation. Novel spectral indices in the near infrared (NIR) range, namely the ratio spectral index (RSI) and normalised difference spectral indices (NDSI), sensitive to sucrose, reducing sugar and total sugar content were identified and subsequently calibrated and validated. For the validation dataset, the RSI and NDSI models had R² values of 0.65, 0.71 and 0.67 and RPD values of 1.68, 1.95 and 1.66 for sucrose, reducing sugar and total sugar, respectively. Different multivariate spectral models such as artificial neural network (ANN), multivariate adaptive regression splines (MARS), multiple linear regression (MLR), partial least square regression (PLSR), random forest regression (RFR) and support vector machine regression (SVMR) were also evaluated. The best-performing multivariate models for sucrose, reducing sugars and total sugars were MARS, ANN and MARS, respectively, with RPD values of 2.08, 2.44 and 1.93. Results indicated that VNIR and SWIR spectroscopy combined with multivariate calibration can be used as a reliable alternative to conventional methods for measurement of sucrose, reducing sugars and total sugars of rice under water-deficit stress, as this technique is fast, economic and noninvasive.
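
    The NDSI/RSI screening step described above can be illustrated with a small Python sketch: every band pair of a reflectance matrix is scored by the squared correlation of the resulting normalised-difference index with a measured trait. The reflectance array, band grid and sucrose values below are synthetic placeholders rather than the study's data, and the exhaustive pair search is only one plausible way to implement the screening.

```python
# Sketch: screening normalised-difference spectral indices (NDSI) against a
# measured trait (e.g. sucrose content). Arrays are hypothetical stand-ins for
# leaf reflectance spectra (n_samples x n_bands) and wet-lab measurements.
# The ratio index (RSI) would be screened analogously using r_i / r_j.
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.arange(350, 2501, 10)          # nm, coarse grid for the example
reflectance = rng.uniform(0.05, 0.6, (16, wavelengths.size))
sucrose = rng.uniform(5.0, 40.0, 16)            # mg/g, placeholder values

best = {"r2": -np.inf}
for i in range(wavelengths.size):
    for j in range(i + 1, wavelengths.size):
        r_i, r_j = reflectance[:, i], reflectance[:, j]
        ndsi = (r_i - r_j) / (r_i + r_j)        # normalised difference index
        r = np.corrcoef(ndsi, sucrose)[0, 1]
        if r ** 2 > best["r2"]:
            best = {"r2": r ** 2, "bands": (wavelengths[i], wavelengths[j])}

print("Best NDSI band pair:", best["bands"], "R^2 =", round(best["r2"], 3))
```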

  19. Longitudinal methods to investigate the role of health determinants in the dynamics of income-related health inequality

    PubMed Central

    Allanson, Paul; Petrie, Dennis

    2013-01-01

    The usual starting point for understanding changes in income-related health inequality (IRHI) over time has been regression-based decomposition procedures for the health concentration index. However the reliance on repeated cross-sectional analysis for this purpose prevents both the appropriate specification of the health function as a dynamic model and the identification of important determinants of the transition processes underlying IRHI changes such as those relating to mortality. This paper overcomes these limitations by developing alternative longitudinal procedures to analyse the role of health determinants in driving changes in IRHI through both morbidity changes and mortality, with our dynamic modelling framework also serving to identify their contribution to long-run or structural IRHI. The approach is illustrated by an empirical analysis of the causes of the increase in IRHI in Great Britain between 1999 and 2004. PMID:24036199

  20. Elucidation of chemosensitization effect of acridones in cancer cell lines: Combined pharmacophore modeling, 3D QSAR, and molecular dynamics studies.

    PubMed

    Gade, Deepak Reddy; Makkapati, Amareswararao; Yarlagadda, Rajesh Babu; Peters, Godefridus J; Sastry, B S; Rajendra Prasad, V V S

    2018-06-01

    Overexpression of P-glycoprotein (P-gp) leads to the emergence of multidrug resistance (MDR) in cancer treatment. Acridones have the potential to reverse MDR and sensitize cells. In the present study, we aimed to elucidate the chemosensitization potential of acridones by employing various molecular modelling techniques. Pharmacophore modeling was performed for a dataset of chemosensitizing acridones previously shown to have cytotoxic activity against the MCF7 breast cancer cell line. Gaussian-based QSAR studies were also performed to predict the favored and disfavored regions of the acridone molecules. Molecular dynamics simulations were performed for compound 10 and human P-glycoprotein (obtained from homology modeling). An efficient pharmacophore containing 2 hydrogen bond acceptors and 3 aromatic rings (AARRR.14) was identified. The NCI 2012 chemical database was screened against the AARRR.14 CPH and the 25 best-fit molecules were identified. Potential regions of the compound were identified through field (Gaussian)-based QSAR. Regression analysis of the atom-based QSAR resulted in an r² of 0.95 and a q² of 0.72, whereas regression analysis of the field-based QSAR resulted in an r² of 0.92 and a q² of 0.87, along with an r²cv of 0.71. The fate of the acridone molecule (compound 10) in the P-glycoprotein environment was analyzed by examining the conformational changes occurring during the molecular dynamics simulations. The combined data from the different in silico techniques provide a basis for a deeper understanding of the structural and mechanistic aspects of the interaction of acridones with P-glycoprotein, and a strategic basis for designing more potent molecules for anti-cancer and multidrug resistance reversal activities. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Use of statistical study methods for the analysis of the results of the imitation modeling of radiation transfer

    NASA Astrophysics Data System (ADS)

    Alekseenko, M. A.; Gendrina, I. Yu.

    2017-11-01

    Recently, owing to the abundance of observational data in systems for viewing through the atmosphere and the need to process these data, various methods of statistical analysis, such as correlation-regression analysis, time-series analysis and analysis of variance, have become relevant for studying such systems. We have attempted to apply elements of correlation-regression analysis to the study and subsequent prediction of radiation transfer patterns in these systems, as well as to the construction of radiation models of the atmosphere. In this paper, we present some results of statistical processing of numerically simulated characteristics of vision systems operating through the atmosphere, obtained with the help of a special software package.

  2. Estimating biogas production of biologically treated municipal solid waste.

    PubMed

    Scaglia, Barbara; Confalonieri, Roberto; D'Imporzano, Giuliana; Adani, Fabrizio

    2010-02-01

    In this work, a respirometric approach, i.e., the Dynamic Respiration Index (DRI), was used to predict the anaerobic biogas potential (ABP) of 46 waste samples coming directly from full-scale MBT plants. A significant linear regression model was obtained by a jackknife approach: ABP = (34.4 ± 2.5) + (0.109 ± 0.003)·DRI. Comparison of this model with those of previous works using a different respirometric approach (Sapromat-AT4) gave similar results and allowed a direct comparison of the different limits proposed in the literature for accepting treated waste in landfills. The results indicated that, on average, MBT treatment allowed a 56% reduction of ABP after 4 weeks of treatment, and a 79% reduction after 12 weeks of treatment. A further regression model allowed the Sapromat-AT4 limit to be transformed into DRI units, and the kinetics of DRI and the corresponding ABP reductions to be described as a function of MBT treatment time.
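
    As an illustration of the jackknife-style regression used to relate ABP to DRI, the sketch below refits a straight line with each sample left out in turn and reports jackknife standard errors for the slope and intercept. The data are simulated around the published coefficients purely for demonstration; the actual 46-sample dataset and the authors' exact jackknife procedure are not reproduced.

```python
# Sketch: leave-one-out jackknife estimate of slope/intercept uncertainty for a
# linear model of the form ABP = a + b * DRI. The arrays below are placeholders,
# not the 46 MBT samples from the study.
import numpy as np

dri = np.array([200., 450., 800., 1200., 1800., 2500., 3100., 4000.])
abp = 34.4 + 0.109 * dri + np.random.default_rng(1).normal(0, 15, dri.size)

n = dri.size
coefs = np.empty((n, 2))
for k in range(n):                       # refit with one sample left out each time
    mask = np.arange(n) != k
    coefs[k] = np.polyfit(dri[mask], abp[mask], deg=1)   # [slope, intercept]

slope_mean, intercept_mean = coefs.mean(axis=0)
se = np.sqrt((n - 1) / n * ((coefs - coefs.mean(axis=0)) ** 2).sum(axis=0))
print(f"ABP = ({intercept_mean:.1f} ± {se[1]:.1f}) + ({slope_mean:.3f} ± {se[0]:.3f})·DRI")
```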

  3. Inner and outer coronary vessel wall segmentation from CCTA using an active contour model with machine learning-based 3D voxel context-aware image force

    NASA Astrophysics Data System (ADS)

    Sivalingam, Udhayaraj; Wels, Michael; Rempfler, Markus; Grosskopf, Stefan; Suehling, Michael; Menze, Bjoern H.

    2016-03-01

    In this paper, we present a fully automated approach to coronary vessel segmentation, which involves calcification or soft plaque delineation in addition to accurate lumen delineation, from 3D Cardiac Computed Tomography Angiography data. Adequately virtualizing the coronary lumen plays a crucial role in simulating blood flow by means of fluid dynamics, while identifying the outer vessel wall in the case of arteriosclerosis is a prerequisite for further plaque compartment analysis. Our method is a hybrid approach complementing Active Contour Model-based segmentation with an external image force that relies on a Random Forest Regression model generated off-line. The regression model provides a strong estimate of the distance to the true vessel surface for every surface candidate point, taking into account 3D wavelet-encoded contextual image features aligned with the current surface hypothesis. The associated external image force is integrated in the objective function of the active contour model, such that the overall segmentation approach benefits both from the advantages associated with snakes and from those associated with machine learning-based regression. This yields an integrated approach achieving competitive results on a publicly available benchmark data collection (Rotterdam segmentation challenge).

  4. Spatiotemporal Anopheles Population Dynamics, Response to Climatic Conditions: The Case of Chabahar, South Baluchistan, Iran.

    PubMed

    Farajzadeh, Manuchehr; Halimi, Mansour; Ghavidel, Yousef; Delavari, Mahdi

    2015-01-01

    An understanding of the factors that affect the abundance of Anopheline species provides an opportunity to better understand the dynamics of malaria transmission in different regions. Chabahar, located in the south east of Iran, is the most malarious region in the country. The main aim of this study was to quantify the spatiotemporal dynamics of the Anopheles population in response to climatic conditions in Chabahar. Satellite-based and land-based climatic data were used as explanatory variables. Monthly mosquito catches at 6 village sites in Chabahar were used as the dependent variable. The spatiotemporal associations were first investigated by inspection of scatter plots and single-variable regression analysis. A multivariate linear regression model was then developed to reveal the association between environmental variables and monthly mosquito abundance at a 95% confidence level (P ≤ 0.05). Results indicated that Anopheles mosquitoes can be found all year in Chabahar, with 2 significant seasonal peaks from March to June (primary peak) and September to November (secondary peak). The present study showed that 77% of yearly mosquito abundance emerges in the thermal range of 24°C to 30°C and the relative humidity range of 70% to 80% in Chabahar. The developed multivariate linear model explained 88% of the temporal variance of mosquito abundance and indicated that nighttime land surface temperature and relative humidity at 15:00 Universal Time Coordinated (18:30 Iran time) are the main drivers of mosquito population dynamics in Chabahar. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Modeling vocalization with ECoG cortical activity recorded during vocal production in the macaque monkey.

    PubMed

    Fukushima, Makoto; Saunders, Richard C; Fujii, Naotaka; Averbeck, Bruno B; Mishkin, Mortimer

    2014-01-01

    Vocal production is an example of controlled motor behavior with high temporal precision. Previous studies have decoded auditory evoked cortical activity while monkeys listened to vocalization sounds. On the other hand, there have been few attempts at decoding motor cortical activity during vocal production. Here we recorded cortical activity during vocal production in the macaque with a chronically implanted electrocorticographic (ECoG) electrode array. The array detected robust activity in motor cortex during vocal production. We used a nonlinear dynamical model of the vocal organ to reduce the dimensionality of `Coo' calls produced by the monkey. We then used linear regression to evaluate the information in motor cortical activity for this reduced representation of calls. This simple linear model accounted for circa 65% of the variance in the reduced sound representations, supporting the feasibility of using the dynamical model of the vocal organ for decoding motor cortical activity during vocal production.

  6. Modelling of Batch Lactic Acid Fermentation in the Presence of Anionic Clay

    PubMed Central

    Jinescu, Cosmin; Aruş, Vasilica Alisa; Nistor, Ileana Denisa

    2014-01-01

    Summary Batch fermentation of milk inoculated with lactic acid bacteria was conducted in the presence of hydrotalcite-type anionic clay under static and ultrasonic conditions. An experimental study of the effect of fermentation temperature (t=38–43 °C), clay/milk ratio (R=1–7.5 g/L) and ultrasonic field (ν=0 and 35 kHz) on process dynamics was performed. A mathematical model was selected to describe the fermentation process kinetics and its parameters were estimated based on experimental data. A good agreement between the experimental and simulated results was achieved. Consequently, the model can be employed to predict the dynamics of batch lactic acid fermentation with values of process variables in the studied ranges. A statistical analysis of the data based on a 2³ factorial experiment was performed in order to express experimental and model-regressed process responses depending on t, R and ν factors. PMID:27904318

  7. Signaling mechanisms underlying the robustness and tunability of the plant immune network

    PubMed Central

    Kim, Yungil; Tsuda, Kenichi; Igarashi, Daisuke; Hillmer, Rachel A.; Sakakibara, Hitoshi; Myers, Chad L.; Katagiri, Fumiaki

    2014-01-01

    Summary How does robust and tunable behavior emerge in a complex biological network? We sought to understand this for the signaling network controlling pattern-triggered immunity (PTI) in Arabidopsis. A dynamic network model containing four major signaling sectors, the jasmonate, ethylene, PAD4, and salicylate sectors, which together explain up to 80% of the PTI level, was built using data for dynamic sector activities and PTI levels under exhaustive combinatorial sector perturbations. Our regularized multiple regression model had a high level of predictive power and captured known and unexpected signal flows in the network. The sole inhibitory sector in the model, the ethylene sector, was central to the network robustness via its inhibition of the jasmonate sector. The model's multiple input sites linked specific signal input patterns varying in strength and timing to different network response patterns, indicating a mechanism enabling tunability. PMID:24439900

  8. Parametric system identification of catamaran for improving controller design

    NASA Astrophysics Data System (ADS)

    Timpitak, Surasak; Prempraneerach, Pradya; Pengwang, Eakkachai

    2018-01-01

    This paper presents the estimation of a simplified dynamic model for the surge and yaw motions of a catamaran, using system identification (SI) techniques to determine the associated unknown parameters. These methods enhance the design process for the motion control system of an Unmanned Surface Vehicle (USV). The simulation results demonstrate an effective way to solve for damping forces and to determine added masses by applying least-squares and AutoRegressive Exogenous (ARX) methods. Both methods are then evaluated according to the estimated parametric errors from the vehicle’s dynamic model. The ARX method, which yields better estimation accuracy, can then be applied to identify the unknown parameters as well as to help improve the controller design of a real unmanned catamaran.
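
    A minimal version of the ARX identification step might look like the following sketch, which builds the lagged regressor matrix for a first-order single-input model and solves for the parameters by least squares. The model order, "true" coefficients and excitation signal are invented for illustration and are much simpler than a real surge/yaw model.

```python
# Sketch: identifying a discrete-time ARX model y[k] = a1*y[k-1] + b1*u[k-1] + e[k]
# by ordinary least squares, as a stand-in for the surge/yaw identification step.
# The "true" parameters and the input signal are made up for illustration.
import numpy as np

rng = np.random.default_rng(2)
a1_true, b1_true = 0.85, 0.40
u = rng.normal(size=500)                          # excitation input (e.g. thrust)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = a1_true * y[k - 1] + b1_true * u[k - 1] + 0.02 * rng.normal()

# Build the regressor matrix Phi and solve Phi @ theta ≈ y via least squares
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated [a1, b1]:", np.round(theta, 3))  # should be close to [0.85, 0.40]
```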

  9. Active learning of constitutive relation from mesoscopic dynamics for macroscopic modeling of non-Newtonian flows

    NASA Astrophysics Data System (ADS)

    Zhao, Lifei; Li, Zhen; Caswell, Bruce; Ouyang, Jie; Karniadakis, George Em

    2018-06-01

    We simulate complex fluids by means of an on-the-fly coupling of the bulk rheology to the underlying microstructure dynamics. In particular, a continuum model of polymeric fluids is constructed without a pre-specified constitutive relation, but instead it is actively learned from mesoscopic simulations where the dynamics of polymer chains is explicitly computed. To couple the bulk rheology of polymeric fluids and the microscale dynamics of polymer chains, the continuum approach (based on the finite volume method) provides the transient flow field as inputs for the (mesoscopic) dissipative particle dynamics (DPD), and in turn DPD returns an effective constitutive relation to close the continuum equations. In this multiscale modeling procedure, we employ an active learning strategy based on Gaussian process regression (GPR) to minimize the number of expensive DPD simulations, where adaptively selected DPD simulations are performed only as necessary. Numerical experiments are carried out for flow past a circular cylinder of a non-Newtonian fluid, modeled at the mesoscopic level by bead-spring chains. The results show that only five DPD simulations are required to achieve an effective closure of the continuum equations at Reynolds number Re = 10. Furthermore, when Re is increased to 100, only one additional DPD simulation is required for constructing an extended GPR-informed model closure. Compared to traditional message-passing multiscale approaches, applying an active learning scheme to multiscale modeling of non-Newtonian fluids can significantly increase the computational efficiency. Although the method demonstrated here obtains only a local viscosity from the polymer dynamics, it can be extended to other multiscale models of complex fluids whose macro-rheology is unknown.
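
    The active-learning idea, querying the expensive mesoscopic solver only where the surrogate is least certain, can be sketched with scikit-learn's Gaussian process regressor as below. The `expensive_simulation` function is a hypothetical stand-in for a DPD run, and the acquisition rule (pick the candidate with the largest predictive standard deviation) is one simple choice rather than the authors' exact strategy.

```python
# Sketch: a minimal active-learning loop with Gaussian process regression (GPR),
# calling an expensive solver only where the GPR is most uncertain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def expensive_simulation(shear_rate):
    # placeholder constitutive response (shear-thinning-like curve), not a DPD run
    return 1.0 / (1.0 + shear_rate) + 0.5

candidates = np.linspace(0.1, 10.0, 200).reshape(-1, 1)   # candidate shear rates
X = candidates[[0, -1]]                                    # start from the endpoints
y = np.array([expensive_simulation(x[0]) for x in X])

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(1e-4), normalize_y=True)
for _ in range(5):                                         # query a few extra points
    gpr.fit(X, y)
    mean, std = gpr.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)]                     # most uncertain candidate
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_simulation(x_new[0]))

print("simulations used:", len(y))
```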

  10. Tire-road friction coefficient estimation based on the resonance frequency of in-wheel motor drive system

    NASA Astrophysics Data System (ADS)

    Chen, Long; Bian, Mingyuan; Luo, Yugong; Qin, Zhaobo; Li, Keqiang

    2016-01-01

    In this paper, a resonance frequency-based tire-road friction coefficient (TRFC) estimation method is proposed by considering the dynamic performance of the in-wheel motor drive system under small slip ratio conditions. A frequency response function (FRF) is deduced for the drive system that is composed of a dynamic tire model and a simplified motor model. A linear relationship between the squared system resonance frequency and the TRFC is described with the FRF. Furthermore, the resonance frequency is identified by the Auto-Regressive eXogenous model using the information of the motor torque and the wheel speed, and the TRFC is estimated thereafter by a recursive least squares filter with the identified resonance frequency. Finally, the effectiveness of the proposed approach is demonstrated through simulations and experimental tests on different road surfaces.

  11. Seasonal prediction of East Asian summer rainfall using a multi-model ensemble system

    NASA Astrophysics Data System (ADS)

    Ahn, Joong-Bae; Lee, Doo-Young; Yoo, Jin‑Ho

    2015-04-01

    Using the retrospective forecasts of seven state-of-the-art coupled models and their multi-model ensemble (MME) for boreal summers, the prediction skills of climate models in the western tropical Pacific (WTP) and East Asian region are assessed. The prediction of summer rainfall anomalies in East Asia is difficult, while the WTP has a strong correlation between model prediction and observation. We focus on developing a new approach to further enhance the seasonal prediction skill for summer rainfall in East Asia and investigate the influence of convective activity in the WTP on East Asian summer rainfall. By analyzing the characteristics of the WTP convection, two distinct patterns associated with El Niño-Southern Oscillation developing and decaying modes are identified. Based on the multiple linear regression method, the East Asia Rainfall Index (EARI) is developed by using the interannual variability of the normalized Maritime continent-WTP Indices (MPIs), as potentially useful predictors for rainfall prediction over East Asia, obtained from the above two main patterns. For East Asian summer rainfall, the EARI has superior performance to the East Asia summer monsoon index or each MPI. Therefore, the regressed rainfall from EARI also shows a strong relationship with the observed East Asian summer rainfall pattern. In addition, we evaluate the prediction skill of the East Asia reconstructed rainfall obtained by a hybrid dynamical-statistical approach using the cross-validated EARI from the individual models and their MME. The results show that the rainfalls reconstructed from simulations capture the general features of observed precipitation in East Asia quite well. This study convincingly demonstrates that rainfall prediction skill is considerably improved by using a hybrid dynamical-statistical approach compared to the dynamical forecast alone.

  12. Information-Decay Pursuit of Dynamic Parameters in Student Models

    DTIC Science & Technology

    1994-04-01

  13. Examining the Influence of Selected Factors on Perceived Co-Op Work-Term Quality from a Student Perspective

    ERIC Educational Resources Information Center

    Drewery, David; Nevison, Colleen; Pretti, T. Judene; Cormier, Lauren; Barclay, Sage; Pennaforte, Antoine

    2016-01-01

    This study discusses and tests a conceptual model of co-op work-term quality from a student perspective. Drawing from an earlier exploration of co-op students' perceptions of work-term quality, variables related to role characteristics, interpersonal dynamics, and organizational elements were used in a multiple linear regression analysis to…

  14. Abnormal dynamics of language in schizophrenia.

    PubMed

    Stephane, Massoud; Kuskowski, Michael; Gundel, Jeanette

    2014-05-30

    Language could be conceptualized as a dynamic system that includes multiple interactive levels (sub-lexical, lexical, sentence, and discourse) and components (phonology, semantics, and syntax). In schizophrenia, abnormalities are observed at all language elements (levels and components), but the dynamics between these elements remain unclear. We hypothesize that the dynamics between language elements in schizophrenia are abnormal and explore how these dynamics are altered. We first investigated language elements with comparable procedures in patients and healthy controls. Second, using measures of reaction time, we performed multiple linear regression analyses to evaluate the inter-relationships among language elements and the effect of group on these relationships. Patients significantly differed from controls with respect to the sub-lexical/lexical, lexical/sentence, and sentence/discourse regression coefficients. The intercepts of the regression slopes increased in the same order as above (from lower to higher levels) in patients but not in controls. Regression coefficients between syntax and both sentence-level and discourse-level semantics did not differentiate patients from controls. This study indicates that the dynamics between language elements are abnormal in schizophrenia. In patients, the top-down flow of linguistic information might be reduced, and the relationship between phonology and semantics, but not between syntax and semantics, appears to be altered. Published by Elsevier Ireland Ltd.

  15. Discovering governing equations from data by sparse identification of nonlinear dynamical systems

    PubMed Central

    Brunton, Steven L.; Proctor, Joshua L.; Kutz, J. Nathan

    2016-01-01

    Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing. PMID:27035946

  16. Discovering governing equations from data by sparse identification of nonlinear dynamical systems.

    PubMed

    Brunton, Steven L; Proctor, Joshua L; Kutz, J Nathan

    2016-04-12

    Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.
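
    The core of the sparse-identification idea, sequentially thresholded least squares over a library of candidate functions, can be sketched as follows for simulated Lorenz data. The small polynomial library, the threshold value and the use of exact derivatives are illustrative simplifications, not the paper's full algorithm.

```python
# Sketch: sequentially thresholded least squares (the heart of sparse regression
# for dynamics) applied to simulated Lorenz data with a small polynomial library.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t = np.linspace(0, 20, 4000)
sol = solve_ivp(lorenz, (0, 20), [1.0, 1.0, 1.0], t_eval=t)
X = sol.y.T                                    # states, shape (n_samples, 3)
dX = np.array([lorenz(0, s) for s in X])       # exact derivatives for simplicity

# Candidate function library: [1, x, y, z, x*y, x*z, y*z]
x, y, z = X.T
Theta = np.column_stack([np.ones_like(x), x, y, z, x * y, x * z, y * z])

def stls(Theta, dX, threshold=0.1, iters=10):
    Xi, *_ = np.linalg.lstsq(Theta, dX, rcond=None)
    for _ in range(iters):
        Xi[np.abs(Xi) < threshold] = 0.0       # prune small coefficients
        for k in range(dX.shape[1]):           # refit the surviving terms
            big = np.abs(Xi[:, k]) >= threshold
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)[0]
    return Xi

print(np.round(stls(Theta, dX), 2))            # sparse coefficient matrix
```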

  17. Using Social Network Analysis to Better Understand Compulsive Exercise Behavior Among a Sample of Sorority Members.

    PubMed

    Patterson, Megan S; Goodson, Patricia

    2017-05-01

    Compulsive exercise, a form of unhealthy exercise often associated with prioritizing exercise and feeling guilty when exercise is missed, is a common precursor to and symptom of eating disorders. College-aged women are at high risk of exercising compulsively compared with other groups. Social network analysis (SNA) is a theoretical perspective and methodology allowing researchers to observe the effects of relational dynamics on the behaviors of people. SNA was used to assess the relationship between compulsive exercise and body dissatisfaction, physical activity, and network variables. Descriptive statistics were conducted using SPSS, and quadratic assignment procedure (QAP) analyses were conducted using UCINET. QAP regression analysis revealed a statistically significant model (R² = .375, P < .0001) predicting compulsive exercise behavior. Physical activity, body dissatisfaction, and network variables were statistically significant predictor variables in the QAP regression model. In our sample, women who are connected to "important" or "powerful" people in their network are likely to have higher compulsive exercise scores. This result provides healthcare practitioners key target points for intervention within similar groups of women. For scholars researching eating disorders and associated behaviors, this study supports looking into group dynamics and network structure in conjunction with body dissatisfaction and exercise frequency.

  18. Artificial neural networks and multiple linear regression model using principal components to estimate rainfall over South America

    NASA Astrophysics Data System (ADS)

    dos Santos, T. S.; Mendes, D.; Torres, R. R.

    2015-08-01

    Several studies have been devoted to dynamic and statistical downscaling for analysis of both climate variability and climate change. This paper introduces an application of artificial neural networks (ANN) and multiple linear regression (MLR) by principal components to estimate rainfall in South America. This method is proposed for downscaling monthly precipitation time series over South America for three regions: the Amazon, Northeastern Brazil and the La Plata Basin, which is one of the regions of the planet that will be most affected by the climate change projected for the end of the 21st century. The downscaling models were developed and validated using CMIP5 model output and observed monthly precipitation. We used GCM experiments for the 20th century (RCP Historical; 1970-1999) and two scenarios (RCP 2.6 and 8.5; 2070-2100). The model test results indicate that the ANN significantly outperforms the MLR downscaling of monthly precipitation variability.
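
    A bare-bones version of the principal-component MLR downscaling step could look like the sketch below, where synthetic "GCM" fields stand in for the CMIP5 predictors and a held-out R² is reported. The number of components and the synthetic data are illustrative choices only.

```python
# Sketch: statistical downscaling with principal components as predictors.
# GCM fields and station rainfall below are synthetic stand-ins; the study's
# CMIP5 output and observed series would take their place.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_months, n_gridpoints = 360, 500
gcm_fields = rng.normal(size=(n_months, n_gridpoints))     # large-scale predictors
rainfall = gcm_fields[:, :5].sum(axis=1) + rng.normal(0, 0.5, n_months)

X_tr, X_te, y_tr, y_te = train_test_split(gcm_fields, rainfall, test_size=0.25,
                                          random_state=0)
pca = PCA(n_components=10).fit(X_tr)                        # reduce collinearity
mlr = LinearRegression().fit(pca.transform(X_tr), y_tr)
print("R^2 on held-out months:", round(mlr.score(pca.transform(X_te), y_te), 3))
```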

  19. Integrating Growth Variability of the Ilium, Fifth Lumbar Vertebra, and Clavicle with Multivariate Adaptive Regression Splines Models for Subadult Age Estimation.

    PubMed

    Corron, Louise; Marchal, François; Condemi, Silvana; Telmon, Norbert; Chaumoitre, Kathia; Adalian, Pascal

    2018-05-31

    Subadult age estimation should rely on sampling and statistical protocols capturing development variability for more accurate age estimates. In this perspective, measurements were taken on the fifth lumbar vertebrae and/or clavicles of 534 French males and females aged 0-19 years and the ilia of 244 males and females aged 0-12 years. These variables were fitted in nonparametric multivariate adaptive regression splines (MARS) models with 95% prediction intervals (PIs) of age. The models were tested on two independent samples from Marseille and the Luis Lopes reference collection from Lisbon. Models using ilium width and module, maximum clavicle length, and lateral vertebral body heights were more than 92% accurate. Precision was lower for postpubertal individuals. Integrating punctual nonlinearities of the relationship between age and the variables and dynamic prediction intervals incorporated the normal increase in interindividual growth variability (heteroscedasticity of variance) with age for more biologically accurate predictions. © 2018 American Academy of Forensic Sciences.

  20. Modeling the prediction of business intelligence system effectiveness.

    PubMed

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises running in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing the critical attributes that determine BISE and developing prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions is necessary to improve BI implementation and ensure its success. The main study findings identified the critical prediction indicators of BISE that are important for forecasting BI performance, and highlighted five classification and prediction rules of BISE derived from decision tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis. These results can enable enterprises to improve BISE while effectively managing BI solution implementation, and also offer theoretical insights for academics.

  1. Decoding of finger trajectory from ECoG using deep learning.

    PubMed

    Xie, Ziqian; Schwartz, Odelia; Prasad, Abhishek

    2018-06-01

    The conventional decoding pipeline for brain-machine interfaces (BMIs) consists of different chained stages of feature extraction, time-frequency analysis and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes it difficult to make the whole system adaptive. The goal was to create an adaptive online system with a single objective function and a single learning algorithm so that the whole system can be trained in parallel to increase the decoding performance. Here, we used deep neural networks consisting of convolutional neural networks (CNN) and a special kind of recurrent neural network (RNN) called long short-term memory (LSTM) to address these needs. We used electrocorticography (ECoG) data collected by Kubanek et al. The task consisted of individual finger flexions upon a visual cue. Our model combined a hierarchical feature extractor CNN and an RNN that was able to process sequential data and recognize temporal dynamics in the neural data. The CNN was used as the feature extractor and the LSTM was used as the regression algorithm to capture the temporal dynamics of the signal. We predicted the finger trajectory using ECoG signals and compared results for the least angle regression (LARS), CNN-LSTM, random forest, LSTM model (LSTM_HC, for using hard-coded features) and a decoding pipeline consisting of band-pass filtering, energy extraction, feature selection and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transition between movement and rest state. This study demonstrated a decoding network for BMI that involved a convolutional and recurrent neural network model. It integrated the feature extraction pipeline into the convolution and pooling layers and used an LSTM layer to capture the state transitions. The discussed network eliminated the need to separately train the model at each step in the decoding pipeline. The whole system can be jointly optimized using stochastic gradient descent and is capable of online learning.

  2. Decoding of finger trajectory from ECoG using deep learning

    NASA Astrophysics Data System (ADS)

    Xie, Ziqian; Schwartz, Odelia; Prasad, Abhishek

    2018-06-01

    Objective. The conventional decoding pipeline for brain-machine interfaces (BMIs) consists of different chained stages of feature extraction, time-frequency analysis and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes it difficult to make the whole system adaptive. The goal was to create an adaptive online system with a single objective function and a single learning algorithm so that the whole system can be trained in parallel to increase the decoding performance. Here, we used deep neural networks consisting of convolutional neural networks (CNN) and a special kind of recurrent neural network (RNN) called long short-term memory (LSTM) to address these needs. Approach. We used electrocorticography (ECoG) data collected by Kubanek et al. The task consisted of individual finger flexions upon a visual cue. Our model combined a hierarchical feature extractor CNN and an RNN that was able to process sequential data and recognize temporal dynamics in the neural data. The CNN was used as the feature extractor and the LSTM was used as the regression algorithm to capture the temporal dynamics of the signal. Main results. We predicted the finger trajectory using ECoG signals and compared results for the least angle regression (LARS), CNN-LSTM, random forest, LSTM model (LSTM_HC, for using hard-coded features) and a decoding pipeline consisting of band-pass filtering, energy extraction, feature selection and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transition between movement and rest state. Significance. This study demonstrated a decoding network for BMI that involved a convolutional and recurrent neural network model. It integrated the feature extraction pipeline into the convolution and pooling layers and used an LSTM layer to capture the state transitions. The discussed network eliminated the need to separately train the model at each step in the decoding pipeline. The whole system can be jointly optimized using stochastic gradient descent and is capable of online learning.
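
    A compact PyTorch sketch of a CNN + LSTM decoder of the kind described above is given below. Channel counts, kernel sizes, the number of ECoG channels and output targets are assumptions for illustration; preprocessing, the training loop and the authors' exact architecture are omitted.

```python
# Sketch: a small CNN + LSTM decoder in the spirit of the architecture described
# above. All sizes are illustrative, not the paper's settings.
import torch
import torch.nn as nn

class CNNLSTMDecoder(nn.Module):
    def __init__(self, n_channels=48, n_outputs=5):
        super().__init__()
        # 1-D convolutions act as the learned feature extractor over time
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        # LSTM captures temporal dynamics / state transitions
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        self.readout = nn.Linear(128, n_outputs)   # finger trajectories

    def forward(self, x):                # x: (batch, channels, time)
        h = self.features(x)             # (batch, 64, time')
        h = h.transpose(1, 2)            # (batch, time', 64) for the LSTM
        out, _ = self.lstm(h)
        return self.readout(out[:, -1])  # predict trajectory at the last step

model = CNNLSTMDecoder()
dummy = torch.randn(8, 48, 1000)         # 8 trials, 48 ECoG channels, 1000 samples
print(model(dummy).shape)                # torch.Size([8, 5])
```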

  3. Prediction of Muscle Performance During Dynamic Repetitive Exercise

    NASA Technical Reports Server (NTRS)

    Byerly, D. L.; Byerly, K. A.; Sognier, M. A.; Squires, W. G.

    2002-01-01

    A method for predicting human muscle performance was developed. Eight test subjects performed a repetitive dynamic exercise to failure using a Lordex spinal machine. Electromyography (EMG) data was collected from the erector spinae. Evaluation of the EMG data using a 5th order Autoregressive (AR) model and statistical regression analysis revealed that an AR parameter, the mean average magnitude of AR poles, can predict performance to failure as early as the second repetition of the exercise. Potential applications to the space program include evaluating on-orbit countermeasure effectiveness, maximizing post-flight recovery, and future real-time monitoring capability during Extravehicular Activity.
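
    The AR-pole summary described above can be illustrated with a short sketch: a 5th-order autoregressive model is fitted to a synthetic EMG-like signal by least squares, and the mean magnitude of the AR poles is computed from the characteristic polynomial. The signal and the simple least-squares fit are stand-ins for the study's EMG data and estimation procedure.

```python
# Sketch: fitting a 5th-order autoregressive model and summarizing it by the mean
# magnitude of its poles. The signal here is synthetic, not recorded EMG.
import numpy as np

rng = np.random.default_rng(4)
emg = rng.normal(size=2000)
# colour the noise slightly so the AR fit has structure to find
emg = np.convolve(emg, [1.0, 0.6, 0.3], mode="same")

p = 5                                            # AR model order
# regression form: emg[k] = sum_i a_i * emg[k-i] + e[k]
Phi = np.column_stack([emg[p - i - 1:-i - 1] for i in range(p)])
target = emg[p:]
a, *_ = np.linalg.lstsq(Phi, target, rcond=None)

# poles are the roots of the AR characteristic polynomial z^p - a1*z^(p-1) - ... - ap
poles = np.roots(np.concatenate([[1.0], -a]))
print("mean pole magnitude:", round(np.abs(poles).mean(), 3))
```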

  4. Retrieval of total suspended matter concentrations from high resolution WorldView-2 imagery: a case study of inland rivers

    NASA Astrophysics Data System (ADS)

    Shi, Liangliang; Mao, Zhihua; Wang, Zheng

    2018-02-01

    Satellite imagery has played an important role in monitoring the water quality of lakes and coastal waters, but has scarcely been applied to inland rivers. This paper assesses the feasibility of applying regression models to quantify and map the concentration of total suspended matter (CTSM) in inland rivers, which span a large spatial scale and a wide CTSM dynamic range, using high resolution WorldView-2 satellite remote sensing data. An empirical approach was used to quantify CTSM through the integrated use of high resolution WorldView-2 multispectral data and 21 in situ CTSM measurements. Radiometric, geometric and atmospheric corrections were carried out in the image processing procedure to derive the surface reflectance, which was correlated with CTSM using single-variable and multivariable regression techniques. Results of the regression models show that the single near-infrared (NIR) band 8 of WorldView-2 has a relatively strong relationship (R²=0.93) with CTSM. Different prediction models were developed on various combinations of WorldView-2 bands, and the Akaike Information Criterion approach was used to choose the best model. The model involving bands 1, 3, 5 and 8 of WorldView-2 had the best performance, with an R² of 0.92 and an SEE of 53.30 g/m3. Spatial distribution maps were produced using the best multiple regression model. The results of this paper indicate that it is feasible to apply the empirical model with high resolution satellite imagery to retrieve the CTSM of inland rivers in routine monitoring of water quality.
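
    The band-combination search with AIC-based model selection could be sketched as below, where synthetic reflectances and TSM concentrations replace the WorldView-2 and in situ data. The Gaussian-likelihood form of AIC and the limit of four-band combinations are illustrative choices.

```python
# Sketch: comparing band-combination regression models for total suspended matter
# with the Akaike Information Criterion (AIC). Reflectance values and TSM
# concentrations are synthetic placeholders.
import itertools
import numpy as np

rng = np.random.default_rng(5)
n = 21
bands = rng.uniform(0.01, 0.3, size=(n, 8))          # 8 WorldView-2 bands
tsm = 400 * bands[:, 7] + 50 * bands[:, 0] + rng.normal(0, 5, n)

def fit_aic(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])        # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    k = X1.shape[1]
    return len(y) * np.log(rss / len(y)) + 2 * k      # Gaussian-likelihood AIC

results = []
for r in (1, 2, 3, 4):
    for combo in itertools.combinations(range(8), r):
        results.append((fit_aic(bands[:, combo], tsm), combo))

best_aic, best_combo = min(results)
print("best band combination (0-indexed):", best_combo, "AIC =", round(best_aic, 1))
```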

  5. Seasonally-Dynamic Presence-Only Species Distribution Models for a Cryptic Migratory Bat Impacted by Wind Energy Development

    PubMed Central

    Hayes, Mark A.; Cryan, Paul M.; Wunder, Michael B.

    2015-01-01

    Understanding seasonal distribution and movement patterns of animals that migrate long distances is an essential part of monitoring and conserving their populations. Compared to migratory birds and other more conspicuous migrants, we know very little about the movement patterns of many migratory bats. Hoary bats (Lasiurus cinereus), a cryptic, wide-ranging, long-distance migrant, comprise a substantial proportion of the tens to hundreds of thousands of bat fatalities estimated to occur each year at wind turbines in North America. We created seasonally-dynamic species distribution models (SDMs) from 2,753 museum occurrence records collected over five decades in North America to better understand the seasonal geographic distributions of hoary bats. We used 5 SDM approaches: logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy and consolidated outputs to generate ensemble maps. These maps represent the first formal hypotheses for sex- and season-specific hoary bat distributions. Our results suggest that North American hoary bats winter in regions with relatively long growing seasons where temperatures are moderated by proximity to oceans, and then move to the continental interior for the summer. SDMs suggested that hoary bats are most broadly distributed in autumn—the season when they are most susceptible to mortality from wind turbines; this season contains the greatest overlap between potentially suitable habitat and wind energy facilities. Comparing wind-turbine fatality data to model outputs could test many predictions, such as ‘risk from turbines is highest in habitats between hoary bat summering and wintering grounds’. Although future field studies are needed to validate the SDMs, this study generated well-justified and testable hypotheses of hoary bat migration patterns and seasonal distribution. PMID:26208098

  6. The Cancer Stem Cell Fraction in Hierarchically Organized Tumors Can Be Estimated Using Mathematical Modeling and Patient-Specific Treatment Trajectories.

    PubMed

    Werner, Benjamin; Scott, Jacob G; Sottoriva, Andrea; Anderson, Alexander R A; Traulsen, Arne; Altrock, Philipp M

    2016-04-01

    Many tumors are hierarchically organized and driven by a subpopulation of tumor-initiating cells (TIC), or cancer stem cells. TICs are uniquely capable of recapitulating the tumor and are thought to be highly resistant to radio- and chemotherapy. Macroscopic patterns of tumor expansion before treatment and tumor regression during treatment are tied to the dynamics of TICs. Until now, the quantitative information about the fraction of TICs from macroscopic tumor burden trajectories could not be inferred. In this study, we generated a quantitative method based on a mathematical model that describes hierarchically organized tumor dynamics and patient-derived tumor burden information. The method identifies two characteristic equilibrium TIC regimes during expansion and regression. We show that tumor expansion and regression curves can be leveraged to infer estimates of the TIC fraction in individual patients at detection and after continued therapy. Furthermore, our method is parameter-free; it solely requires the knowledge of a patient's tumor burden over multiple time points to reveal microscopic properties of the malignancy. We demonstrate proof of concept in the case of chronic myeloid leukemia (CML), wherein our model recapitulated the clinical history of the disease in two independent patient cohorts. On the basis of patient-specific treatment responses in CML, we predict that after one year of targeted treatment, the fraction of TICs increases 100-fold and continues to increase up to 1,000-fold after 5 years of treatment. Our novel framework may significantly influence the implementation of personalized treatment strategies and has the potential for rapid translation into the clinic. Cancer Res; 76(7); 1705-13. ©2016 AACR. ©2016 American Association for Cancer Research.

  7. Seasonally-dynamic presence-only species distribution models for a cryptic migratory bat impacted by wind energy development

    USGS Publications Warehouse

    Hayes, Mark A.; Cryan, Paul M.; Wunder, Michael B.

    2015-01-01

    Understanding seasonal distribution and movement patterns of animals that migrate long distances is an essential part of monitoring and conserving their populations. Compared to migratory birds and other more conspicuous migrants, we know very little about the movement patterns of many migratory bats. Hoary bats (Lasiurus cinereus), a cryptic, wide-ranging, long-distance migrant, comprise a substantial proportion of the tens to hundreds of thousands of bat fatalities estimated to occur each year at wind turbines in North America. We created seasonally-dynamic species distribution models (SDMs) from 2,753 museum occurrence records collected over five decades in North America to better understand the seasonal geographic distributions of hoary bats. We used 5 SDM approaches: logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy and consolidated outputs to generate ensemble maps. These maps represent the first formal hypotheses for sex- and season-specific hoary bat distributions. Our results suggest that North American hoary bats winter in regions with relatively long growing seasons where temperatures are moderated by proximity to oceans, and then move to the continental interior for the summer. SDMs suggested that hoary bats are most broadly distributed in autumn—the season when they are most susceptible to mortality from wind turbines; this season contains the greatest overlap between potentially suitable habitat and wind energy facilities. Comparing wind-turbine fatality data to model outputs could test many predictions, such as ‘risk from turbines is highest in habitats between hoary bat summering and wintering grounds’. Although future field studies are needed to validate the SDMs, this study generated well-justified and testable hypotheses of hoary bat migration patterns and seasonal distribution.

  8. Seasonally-Dynamic Presence-Only Species Distribution Models for a Cryptic Migratory Bat Impacted by Wind Energy Development.

    PubMed

    Hayes, Mark A; Cryan, Paul M; Wunder, Michael B

    2015-01-01

    Understanding seasonal distribution and movement patterns of animals that migrate long distances is an essential part of monitoring and conserving their populations. Compared to migratory birds and other more conspicuous migrants, we know very little about the movement patterns of many migratory bats. Hoary bats (Lasiurus cinereus), a cryptic, wide-ranging, long-distance migrant, comprise a substantial proportion of the tens to hundreds of thousands of bat fatalities estimated to occur each year at wind turbines in North America. We created seasonally-dynamic species distribution models (SDMs) from 2,753 museum occurrence records collected over five decades in North America to better understand the seasonal geographic distributions of hoary bats. We used 5 SDM approaches: logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy and consolidated outputs to generate ensemble maps. These maps represent the first formal hypotheses for sex- and season-specific hoary bat distributions. Our results suggest that North American hoary bats winter in regions with relatively long growing seasons where temperatures are moderated by proximity to oceans, and then move to the continental interior for the summer. SDMs suggested that hoary bats are most broadly distributed in autumn-the season when they are most susceptible to mortality from wind turbines; this season contains the greatest overlap between potentially suitable habitat and wind energy facilities. Comparing wind-turbine fatality data to model outputs could test many predictions, such as 'risk from turbines is highest in habitats between hoary bat summering and wintering grounds'. Although future field studies are needed to validate the SDMs, this study generated well-justified and testable hypotheses of hoary bat migration patterns and seasonal distribution.
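
    An ensemble of presence/background classifiers in the spirit of the SDM approaches listed above can be sketched with scikit-learn as follows. The simulated occurrence data, the three-model ensemble and simple probability averaging are placeholders for the five algorithms and consolidation procedure actually used.

```python
# Sketch: a small ensemble of presence/background classifiers averaged into a
# single habitat-suitability score. Occurrence data here are simulated.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 1000
env = rng.normal(size=(n, 4))                 # e.g. temperature, growing season, etc.
presence = (env[:, 0] + 0.5 * env[:, 1] + rng.normal(0, 1, n) > 0).astype(int)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    GradientBoostingClassifier(random_state=0),
]
suitability = np.mean(
    [m.fit(env, presence).predict_proba(env)[:, 1] for m in models], axis=0
)
print("ensemble suitability for first 5 sites:", np.round(suitability[:5], 2))
```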

  9. PREDICTION OF MALIGNANT BREAST LESIONS FROM MRI FEATURES: A COMPARISON OF ARTIFICIAL NEURAL NETWORK AND LOGISTIC REGRESSION TECHNIQUES

    PubMed Central

    McLaren, Christine E.; Chen, Wen-Pin; Nie, Ke; Su, Min-Ying

    2009-01-01

    Rationale and Objectives Dynamic contrast enhanced MRI (DCE-MRI) is a clinical imaging modality for detection and diagnosis of breast lesions. Analytical methods were compared for diagnostic feature selection and performance of lesion classification to differentiate between malignant and benign lesions in patients. Materials and Methods The study included 43 malignant and 28 benign histologically-proven lesions. Eight morphological parameters, ten gray level co-occurrence matrices (GLCM) texture features, and fourteen Laws’ texture features were obtained using automated lesion segmentation and quantitative feature extraction. Artificial neural network (ANN) and logistic regression analysis were compared for selection of the best predictors of malignant lesions among the normalized features. Results Using ANN, the final four selected features were compactness, energy, homogeneity, and Law_LS, with area under the receiver operating characteristic curve (AUC) = 0.82, and accuracy = 0.76. The diagnostic performance of these 4-features computed on the basis of logistic regression yielded AUC = 0.80 (95% CI, 0.688 to 0.905), similar to that of ANN. The analysis also shows that the odds of a malignant lesion decreased by 48% (95% CI, 25% to 92%) for every increase of 1 SD in the Law_LS feature, adjusted for differences in compactness, energy, and homogeneity. Using logistic regression with z-score transformation, a model comprised of compactness, NRL entropy, and gray level sum average was selected, and it had the highest overall accuracy of 0.75 among all models, with AUC = 0.77 (95% CI, 0.660 to 0.880). When logistic modeling of transformations using the Box-Cox method was performed, the most parsimonious model with predictors, compactness and Law_LS, had an AUC of 0.79 (95% CI, 0.672 to 0.898). Conclusion The diagnostic performance of models selected by ANN and logistic regression was similar. The analytic methods were found to be roughly equivalent in terms of predictive ability when a small number of variables were chosen. The robust ANN methodology utilizes a sophisticated non-linear model, while logistic regression analysis provides insightful information to enhance interpretation of the model features. PMID:19409817

  10. REVEAL: An Extensible Reduced Order Model Builder for Simulation and Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Khushbu; Sharma, Poorva; Ma, Jinliang

    2013-04-30

    Many science domains need to build computationally efficient and accurate representations of high fidelity, computationally expensive simulations. These computationally efficient versions are known as reduced-order models. This paper presents the design and implementation of a novel reduced-order model (ROM) builder, the REVEAL toolset. This toolset generates ROMs based on science- and engineering-domain specific simulations executed on high performance computing (HPC) platforms. The toolset encompasses a range of sampling and regression methods that can be used to generate a ROM, automatically quantifies the ROM accuracy, and provides support for an iterative approach to improve ROM accuracy. REVEAL is designed to be extensible in order to utilize the core functionality with any simulator that has published input and output formats. It also defines programmatic interfaces to include new sampling and regression techniques so that users can ‘mix and match’ mathematical techniques to best suit the characteristics of their model. In this paper, we describe the architecture of REVEAL and demonstrate its usage with a computational fluid dynamics model used in carbon capture.

  11. A generalized partially linear mean-covariance regression model for longitudinal proportional data, with applications to the analysis of quality of life data from cancer clinical trials.

    PubMed

    Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng

    2017-05-30

    Motivated by the analysis of quality of life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimation equations are used to estimate unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses and the probability function of the boundary values and can capture dynamic changes of time or other interested variables on both mean and covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Predictability and Quantification of Complex Groundwater Table Dynamics Driven by Irregular Surface Water Fluctuations

    NASA Astrophysics Data System (ADS)

    Xin, Pei; Wang, Shen S. J.; Shen, Chengji; Zhang, Zeyu; Lu, Chunhui; Li, Ling

    2018-03-01

    Shallow groundwater interacts strongly with surface water across a quarter of global land area, affecting significantly the terrestrial eco-hydrology and biogeochemistry. We examined groundwater behavior subjected to unimodal impulse and irregular surface water fluctuations, combining physical experiments, numerical simulations, and functional data analysis. Both the experiments and numerical simulations demonstrated a damped and delayed response of groundwater table to surface water fluctuations. To quantify this hysteretic shallow groundwater behavior, we developed a regression model with the Gamma distribution functions adopted to account for the dependence of groundwater behavior on antecedent surface water conditions. The regression model fits and predicts well the groundwater table oscillations resulting from propagation of irregular surface water fluctuations in both laboratory and large-scale aquifers. The coefficients of the Gamma distribution function vary spatially, reflecting the hysteresis effect associated with increased amplitude damping and delay as the fluctuation propagates. The regression model, in a relatively simple functional form, has demonstrated its capacity of reproducing high-order nonlinear effects that underpin the surface water and groundwater interactions. The finding has important implications for understanding and predicting shallow groundwater behavior and associated biogeochemical processes, and will contribute broadly to studies of groundwater-dependent ecology and biogeochemistry.
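
    One way to implement a Gamma-weighted dependence on antecedent surface-water levels is sketched below: the surface-water series is convolved with a Gamma-shaped kernel whose amplitude, shape and scale are fitted by nonlinear least squares. The synthetic series, the 60-step memory window and the curve-fitting shortcut are assumptions for illustration, not the authors' regression procedure.

```python
# Sketch: regressing groundwater head on antecedent surface-water levels through a
# Gamma-shaped weighting kernel (damped, delayed response). All data are synthetic.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

rng = np.random.default_rng(7)
t = np.arange(500)
surface = np.cumsum(rng.normal(0, 0.2, t.size))          # irregular surface-water level

def gamma_response(surface_level, amp, shape, scale):
    lags = np.arange(60)                                  # 60-step memory window
    weights = gamma.pdf(lags, a=shape, scale=scale)
    weights /= weights.sum()
    padded = np.concatenate([np.full(len(lags) - 1, surface_level[0]), surface_level])
    # weighted sum over antecedent surface-water levels
    return amp * np.convolve(padded, weights, mode="valid")

true = gamma_response(surface, 0.8, 3.0, 5.0)
head = true + rng.normal(0, 0.02, t.size)                 # noisy "observed" head

popt, _ = curve_fit(gamma_response, surface, head, p0=[1.0, 2.0, 2.0],
                    bounds=([0.01, 1.0, 0.5], [10.0, 20.0, 30.0]))
print("fitted [amplitude, shape, scale]:", np.round(popt, 2))
```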

  13. The Intergenerational Transmission of Generosity

    PubMed Central

    Wilhelm, Mark O.; Brown, Eleanor; Rooney, Patrick M.; Steinberg, Richard

    2008-01-01

    This paper estimates the correlation between the generosity of parents and the generosity of their adult children using regression models of adult children’s charitable giving. New charitable giving data are collected in the Panel Study of Income Dynamics and used to estimate the regression models. The regression models are estimated using a wide variety of techniques and specification tests, and the strength of the intergenerational giving correlations are compared with intergenerational correlations in income, wealth, and consumption expenditure from the same sample using the same set of controls. We find the religious giving of parents and children to be strongly correlated, as strongly correlated as are their income and wealth. The correlation in the secular giving (e.g., giving to the United Way, educational institutions, for poverty relief) of parents and children is smaller, similar in magnitude to the intergenerational correlation in consumption. Parents’ religious giving is positively associated with children’s secular giving, but in a more limited sense. Overall, the results are consistent with generosity emerging at least in part from the influence of parental charitable behavior. In contrast to intergenerational models in which parental generosity towards their children can undo government transfer policy (Ricardian equivalence), these results suggest that parental generosity towards charitable organizations might reinforce government policies, such as tax incentives aimed at encouraging voluntary transfers. PMID:19802345

  14. Dynamic decomposition of spatiotemporal neural signals

    PubMed Central

    2017-01-01

    Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states in order to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals. PMID:28558039

  15. Model parameter estimation approach based on incremental analysis for lithium-ion batteries without using open circuit voltage

    NASA Astrophysics Data System (ADS)

    Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui

    2015-08-01

    To improve the suitability of lithium-ion battery models under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model whose parameters are updated in real time is needed. In this paper, an incremental analysis-based auto-regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Then, its numerical stability, modeling error, and parametric sensitivity are analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, the pseudo random binary sequence (PRBS) and urban dynamic driving sequences (UDDSs) profiles are applied to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model achieves high accuracy and is well suited to parameter identification without using the open circuit voltage.
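
    The following sketch illustrates only the general incremental-analysis idea (not the paper's exact I-ARX or CRLS formulation): differencing the measured voltage and current removes the slowly varying OCV term, after which an ARX model can be fitted by ordinary least squares. The toy cell signal and all coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
current = rng.normal(0.0, 1.0, n)             # excitation current (A), synthetic
ocv_drift = 3.7 + 1e-4 * np.arange(n)         # slowly drifting open-circuit voltage
voltage = np.empty(n)
voltage[0] = ocv_drift[0]
for k in range(1, n):                         # toy first-order cell dynamics plus OCV drift
    voltage[k] = (ocv_drift[k] + 0.3 * (voltage[k - 1] - ocv_drift[k - 1])
                  - 0.05 * current[k] + 0.02 * current[k - 1] + rng.normal(0, 1e-3))

# Incremental (differenced) signals: the slowly varying OCV largely cancels,
# so the ARX regression needs no explicit OCV model.
dv, di = np.diff(voltage), np.diff(current)

# First-order ARX on increments: dv[k] ~ a*dv[k-1] + b0*di[k] + b1*di[k-1]
Phi = np.column_stack([dv[:-1], di[1:], di[:-1]])
theta, *_ = np.linalg.lstsq(Phi, dv[1:], rcond=None)
print("estimated [a, b0, b1]:", theta)
```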

  16. Competitive assessment of aerospace systems using system dynamics

    NASA Astrophysics Data System (ADS)

    Pfaender, Jens Holger

    Aircraft design has recently experienced a trend away from performance-centric design towards a more balanced approach with increased emphasis on engineering an economically successful system. This approach focuses on bringing forward a comprehensive economic and life-cycle cost analysis. Since the success of any system also depends on many external factors outside of the control of the designer, these factors have traditionally been modeled as noise affecting the uncertainty of the design. However, this approach currently lacks a strategic treatment of necessary early decisions affecting the probability of success of a given concept in a dynamic environment. This suggests that the introduction of a dynamic method into a life-cycle cost analysis should allow the analysis of the future attractiveness of such a concept in the presence of uncertainty. One way of addressing this is through the use of a competitive market model. However, existing market models do not focus on the dynamics of the market. Instead, they focus on modeling and predicting market share through logit regression models, and the resulting models exhibit relatively poor predictive capabilities. The method proposed here focuses on a top-down approach that integrates a competitive model based on work in the field of system dynamics into the aircraft design process. Such integration, which had not previously been demonstrated, is one of the primary contributions of this work. The integration is achieved through the use of surrogate models, in this case neural networks. This enabled not only the practical integration of analysis techniques, but also reduced the computational requirements so that interactive exploration as envisioned was actually possible. The example demonstration of this integration is built on the competition in the 250-seat large commercial aircraft market exemplified by the Boeing 767-400ER and the Airbus A330-200. Both aircraft models were calibrated to existing performance and certification data and then integrated into the system dynamics market model. The market model was then calibrated with historical market data. This calibration showed a much improved predictive capability compared to the conventional logit regression models. An additional advantage of this dynamic model is that, to realize this improved capability, no additional explanatory variables were required. Furthermore, the resulting market model was then integrated into a prediction profiler environment with a time-variant Monte-Carlo analysis, resulting in a unique trade-off environment. This environment was shown to allow interactive trade-offs between aircraft design decisions and economic considerations while allowing the exploration of potential market success in light of varying external market conditions and scenarios. The resulting method reduces decision-support uncertainty and identifies robust design decisions in future scenarios with a high likelihood of occurrence, with special focus on the path-dependent nature of the future implications of those decisions. Furthermore, it was possible to demonstrate the increased importance of design and technology choices for competitiveness in scenarios with drastic increases in commodity prices during the time period modeled. Another use of the existing outputs of the Monte-Carlo analysis was then realized by showing them on a multivariate scatter plot. This plot was shown, by appropriate grouping of variables, to enable the top-down definition of an aircraft design, also known as inverse design. In other words, this enables the designer to define strategic market and return-on-investment goals for a number of scenarios, for example the development of fuel prices, and then directly see which specific aircraft designs meet these goals.

  17. Design an optimum safety policy for personnel safety management - A system dynamic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balaji, P.

    2014-10-06

    Personnel safety management (PSM) ensures that employees' work conditions are healthy and safe through various proactive and reactive approaches. Nowadays it is a complex task because of the increasingly dynamic nature of organisations, which results in more accidents. An important part of accident prevention is to understand the existing system properly and to devise safety strategies for that system. System dynamics modelling appears to be an appropriate methodology to explore PSM and develop strategies for it. Many system dynamics models of industrial systems have been built entirely for specific host firms. This thesis illustrates an alternative approach. A generic system dynamics model of personnel safety management was developed and tested in a host firm. The model underwent various structural, behavioural and policy tests. The utility and effectiveness of the model were further explored by modelling a safety scenario. In order to create an effective safety policy under resource constraints, design of experiments (DOE) was used. DOE uses classic designs, namely fractional factorials and central composite designs, to build a second-order regression equation that serves as an objective function. That function was optimized under a budget constraint, and the optimum was used to select the safety policy showing the greatest improvement in overall PSM. The outcome of this research indicates that the personnel safety management model can act both as an instruction tool to improve understanding of safety management and as an aid to policy making.
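
    A hedged sketch of the DOE-to-policy step described above: a second-order (quadratic) regression surface is fitted to hypothetical design-of-experiments data for two safety-investment levers, and the fitted surface is then maximized under a budget constraint. The variable names, data values and the cost coefficients `c1`, `c2`, `B` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical DOE data: two safety-investment levers (training, equipment) and a
# simulated safety-performance response from the system dynamics model.
X = np.array([[x1, x2] for x1 in (0.0, 0.5, 1.0) for x2 in (0.0, 0.5, 1.0)])
y = np.array([0.40, 0.55, 0.60, 0.52, 0.70, 0.78, 0.58, 0.79, 0.83])  # illustrative

# Second-order (quadratic) regression surface: 1, x1, x2, x1^2, x2^2, x1*x2
def design(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)
predict = lambda x: design(np.atleast_2d(x)) @ beta

# Maximize the fitted response subject to a budget constraint c1*x1 + c2*x2 <= B
c1, c2, B = 1.0, 1.5, 1.8
res = minimize(lambda x: -predict(x)[0], x0=[0.5, 0.5],
               bounds=[(0, 1), (0, 1)],
               constraints=[{"type": "ineq", "fun": lambda x: B - (c1 * x[0] + c2 * x[1])}])
print("optimal investment levels:", res.x, "predicted performance:", -res.fun)
```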

  18. Vulnerability of carbon storage in North American boreal forests to wildfires during the 21st century

    Treesearch

    M.S. Balshi; A.D. McGuire; P. Duffy; M. Flannigan; D.W. Kicklighter; J. Melillo

    2009-01-01

    We use a gridded data set developed with a multivariate adaptive regression spline approach to determine how area burned varies each year with changing climatic and fuel moisture conditions. We apply the process-based Terrestrial Ecosystem Model to evaluate the role of future fire on the carbon dynamics of boreal North America in the context of changing atmospheric...

  19. Basal area increment and growth efficiency as functions of canopy dynamics and stem mechanics

    Treesearch

    Thomas J. Dean

    2004-01-01

    Crown and canopy structure correlate with growth efficiency and also determine stem size and taper as described by the uniform stress principle of stem formation. A regression model was derived from this principle that expresses basal area increment in terms of the amount and vertical distribution of leaf area and change in these variables during a growth period. This...

  20. Enhancement of seasonal prediction of East Asian summer rainfall related to western tropical Pacific convection

    NASA Astrophysics Data System (ADS)

    Lee, Doo Young; Ahn, Joong-Bae; Yoo, Jin-Ho

    2015-08-01

    The prediction skills of climate model simulations in the western tropical Pacific (WTP) and East Asian region are assessed using the retrospective forecasts of seven state-of-the-art coupled models and their multi-model ensemble (MME) for boreal summers (June-August) during the period 1983-2005, along with corresponding observed and reanalyzed data. The prediction of summer rainfall anomalies in East Asia is difficult, whereas in the WTP the correlation between model prediction and observation is strong. We focus on developing a new approach to further enhance the seasonal prediction skill for summer rainfall in East Asia and investigate the influence of convective activity in the WTP on East Asian summer rainfall. By analyzing the characteristics of the WTP convection, two distinct patterns associated with El Niño-Southern Oscillation developing and decaying modes are identified. Based on the multiple linear regression method, the East Asia Rainfall Index (EARI) is developed using the interannual variability of the normalized Maritime continent-WTP Indices (MPIs), obtained from the above two main patterns, as potentially useful predictors for rainfall over East Asia. For East Asian summer rainfall, the EARI has superior performance to the East Asia summer monsoon index or each MPI. Therefore, the rainfall regressed from the EARI also shows a strong relationship with the observed East Asian summer rainfall pattern. In addition, we evaluate the prediction skill of the East Asia reconstructed rainfall obtained by a hybrid dynamical-statistical approach using the cross-validated EARI from the individual models and their MME. The results show that the rainfall reconstructed from the simulations captures the general features of observed precipitation in East Asia quite well. This study convincingly demonstrates that rainfall prediction skill is considerably improved by using a hybrid dynamical-statistical approach compared to the dynamical forecast alone.
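
    A minimal illustration of building an EARI-style index by multiple linear regression, assuming two synthetic predictor indices (stand-ins for the normalized MPIs) and a synthetic observed rainfall index; the leave-one-year-out loop mimics the cross-validated use of the index mentioned above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(12)
years = 23                                     # e.g., 1983-2005 hindcast summers
mpi1 = rng.normal(0, 1, years)                 # stand-in for the ENSO-developing pattern index
mpi2 = rng.normal(0, 1, years)                 # stand-in for the ENSO-decaying pattern index
rain_obs = 0.6 * mpi1 - 0.4 * mpi2 + rng.normal(0, 0.5, years)   # synthetic observed rainfall index

# East Asia rainfall index as a multiple linear regression on the two predictors,
# fitted in leave-one-year-out (cross-validated) fashion
X = np.column_stack([mpi1, mpi2])
eari = np.empty(years)
for t in range(years):
    train = np.delete(np.arange(years), t)
    eari[t] = LinearRegression().fit(X[train], rain_obs[train]).predict(X[t:t + 1])[0]

print("cross-validated correlation with observations:",
      round(np.corrcoef(eari, rain_obs)[0, 1], 2))
```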

  1. Predicting Culex pipiens/restuans population dynamics by interval lagged weather data

    PubMed Central

    2013-01-01

    Background Culex pipiens/restuans mosquitoes are important vectors for a variety of arthropod borne viral infections. In this study, the associations between 20 years of mosquito capture data and the time-lagged environmental quantities daytime length, temperature, precipitation, relative humidity and wind speed were used to generate a predictive model for the population dynamics of this vector species. Methods The mosquito population in the study area was represented by an averaged time series of mosquito counts captured at six sites in Cook County (Illinois, USA). Cross-correlation maps (CCMs) were compiled to investigate the association between mosquito abundances and environmental quantities. The results obtained from the CCMs were incorporated into a Poisson regression to generate a predictive model. To optimize the predictive model, the time lags obtained from the CCMs were adjusted using a genetic algorithm. Results CCMs for weekly data showed a highly positive correlation of mosquito abundances with daytime length 4 to 5 weeks prior to capture (quantified by a Spearman rank order correlation of rS = 0.898) and with temperature during the 2 weeks prior to capture (rS = 0.870). Maximal negative correlations were found for wind speed averaged over the 3 weeks prior to capture (rS = −0.621). Cx. pipiens/restuans population dynamics were predicted by integrating the CCM results into Poisson regression models, which were used to simulate the average seasonal cycle of mosquito abundance. Verification with observations resulted in a correlation of rS = 0.899 for daily and rS = 0.917 for weekly data. Applying the optimized models to the entire 20-year time series also resulted in a suitable fit, with rS = 0.876 for daily and rS = 0.899 for weekly data. Conclusions The study demonstrates that interval-lagged weather data can be used to predict mosquito abundances with acceptable accuracy, especially for weekly Cx. pipiens/restuans populations. PMID:23634763
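
    The sketch below shows the core modelling step in schematic form: interval-lagged weather averages are used as covariates in a Poisson regression for weekly counts. The lag windows loosely follow those quoted above, but the weather series, counts and coefficients are synthetic, and the genetic-algorithm lag optimization is omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
weeks = 200
daylength = 12 + 3 * np.sin(2 * np.pi * np.arange(weeks) / 52)                    # hours, synthetic
temp = 15 + 10 * np.sin(2 * np.pi * (np.arange(weeks) - 4) / 52) + rng.normal(0, 2, weeks)
wind = rng.gamma(2.0, 2.0, weeks)

def lag_mean(x, lag_from, lag_to):
    """Average of a predictor over the interval [lag_from, lag_to] weeks before capture."""
    out = np.full(x.size, np.nan)
    for t in range(lag_to, x.size):
        out[t] = x[t - lag_to:t - lag_from + 1].mean()
    return out

# Interval lags roughly as suggested by the cross-correlation maps in the abstract
X = np.column_stack([lag_mean(daylength, 4, 5), lag_mean(temp, 1, 2), lag_mean(wind, 1, 3)])
ok = ~np.isnan(X).any(axis=1)
lam = np.exp(0.2 * X[ok, 0] + 0.08 * X[ok, 1] - 0.1 * X[ok, 2] - 2.0)
counts = rng.poisson(lam)                                                         # synthetic weekly counts

model = sm.GLM(counts, sm.add_constant(X[ok]), family=sm.families.Poisson()).fit()
print(model.params)
```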

  2. Modeling the role of the close-range effect and environmental variables in the occurrence and spread of Phragmites australis in four sites on the Finnish coast of the Gulf of Finland and the Archipelago Sea

    PubMed Central

    Altartouri, Anas; Nurminen, Leena; Jolma, Ari

    2014-01-01

    Phragmites australis, a native helophyte in coastal areas of the Baltic Sea, has spread significantly along the Finnish coast in recent decades, raising ecological questions as well as social interest and concern due to the important role it plays in the ecosystem dynamics of shallow coastal areas. Despite its important implications for the planning and management of the area, predictive modeling of Phragmites distribution is not well studied. We examined the prevalence and progression of Phragmites in four sites along the southern Finnish coast in multiple time frames in relation to a number of predictors. We also analyzed patterns of neighborhood effect on the expansion and disappearance of Phragmites in a cellular data model. We developed boosted regression tree models to predict Phragmites occurrences and produce maps of habitat suitability. Various Phragmites spread rates were observed in different areas and time periods, with a minimum annual expansion rate of 1% and a maximum of 8%. The water depth, shore openness, and proximity to river mouths were found influential in Phragmites distribution. The neighborhood configuration partially explained the dynamics of Phragmites colonies. The boosted regression trees method was successfully used to interpolate and extrapolate Phragmites distributions in the study sites, highlighting its potential for assessing habitat suitability for Phragmites along the Finnish coast. Our findings are useful for a number of applications. With variables easily available, delineation of areas susceptible to Phragmites colonization allows early management plans to be made. Given the influence of reed beds on littoral species and the ecosystem, these results can be useful for ecological studies of coastal areas. We provide estimates of habitat suitability and quantification of Phragmites expansion in a form suitable for dynamic modeling, which would be useful for predicting future Phragmites distribution under different scenarios of land cover change and Phragmites spatial configuration. PMID:24772277
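
    A small, self-contained example of the boosted-regression-trees step, using scikit-learn's gradient-boosted trees on hypothetical cell-level predictors (water depth, shore openness, distance to river mouth, neighbourhood occupancy); the data and effect sizes are invented and do not come from the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n = 1000
depth = rng.uniform(0, 3, n)              # water depth (m), hypothetical
openness = rng.uniform(0, 1, n)           # shore openness index, hypothetical
river_dist = rng.exponential(2.0, n)      # distance to river mouth (km), hypothetical
neigh = rng.uniform(0, 1, n)              # fraction of occupied neighbour cells, hypothetical
logit = 2.0 - 2.5 * depth - 1.5 * openness - 0.4 * river_dist + 2.0 * neigh
occurrence = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([depth, openness, river_dist, neigh])
Xtr, Xte, ytr, yte = train_test_split(X, occurrence, test_size=0.25, random_state=0)

# Boosted regression trees = gradient-boosted trees with a logistic loss
brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3).fit(Xtr, ytr)
print("holdout accuracy:", brt.score(Xte, yte))
print("relative influence:", dict(zip(["depth", "openness", "river_dist", "neigh"],
                                      brt.feature_importances_.round(2))))
```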

  3. Analytical Modelling and Optimization of the Temperature-Dependent Dynamic Mechanical Properties of Fused Deposition Fabricated Parts Made of PC-ABS.

    PubMed

    Mohamed, Omar Ahmed; Masood, Syed Hasan; Bhowmik, Jahar Lal

    2016-11-04

    Fused deposition modeling (FDM) additive manufacturing has been intensively used for many industrial applications due to its attractive advantages over traditional manufacturing processes. The process parameters used in FDM have significant influence on the part quality and its properties. This process produces the plastic part through complex mechanisms and it involves complex relationships between the manufacturing conditions and the quality of the processed part. In the present study, the influence of multi-level manufacturing parameters on the temperature-dependent dynamic mechanical properties of FDM processed parts was investigated using IV-optimality response surface methodology (RSM) and multilayer feed-forward neural networks (MFNNs). The process parameters considered for optimization and investigation are slice thickness, raster to raster air gap, deposition angle, part print direction, bead width, and number of perimeters. Storage compliance and loss compliance were considered as response variables. The effect of each process parameter was investigated using the developed regression models and multiple regression analysis. The surface characteristics were studied using a scanning electron microscope (SEM). Furthermore, the performance of the optimum conditions was determined and validated by conducting a confirmation experiment. The comparison between the experimental values and the values predicted by IV-Optimal RSM and MFNN was conducted for each experimental run, and the results indicate that the MFNN provides better predictions than IV-Optimal RSM.

  4. Analytical Modelling and Optimization of the Temperature-Dependent Dynamic Mechanical Properties of Fused Deposition Fabricated Parts Made of PC-ABS

    PubMed Central

    Mohamed, Omar Ahmed; Masood, Syed Hasan; Bhowmik, Jahar Lal

    2016-01-01

    Fused deposition modeling (FDM) additive manufacturing has been intensively used for many industrial applications due to its attractive advantages over traditional manufacturing processes. The process parameters used in FDM have significant influence on the part quality and its properties. This process produces the plastic part through complex mechanisms and it involves complex relationships between the manufacturing conditions and the quality of the processed part. In the present study, the influence of multi-level manufacturing parameters on the temperature-dependent dynamic mechanical properties of FDM processed parts was investigated using IV-optimality response surface methodology (RSM) and multilayer feed-forward neural networks (MFNNs). The process parameters considered for optimization and investigation are slice thickness, raster to raster air gap, deposition angle, part print direction, bead width, and number of perimeters. Storage compliance and loss compliance were considered as response variables. The effect of each process parameter was investigated using the developed regression models and multiple regression analysis. The surface characteristics were studied using a scanning electron microscope (SEM). Furthermore, the performance of the optimum conditions was determined and validated by conducting a confirmation experiment. The comparison between the experimental values and the values predicted by IV-Optimal RSM and MFNN was conducted for each experimental run, and the results indicate that the MFNN provides better predictions than IV-Optimal RSM. PMID:28774019

  5. Voxel-wise prostate cell density prediction using multiparametric magnetic resonance imaging and machine learning.

    PubMed

    Sun, Yu; Reynolds, Hayley M; Wraith, Darren; Williams, Scott; Finnegan, Mary E; Mitchell, Catherine; Murphy, Declan; Haworth, Annette

    2018-04-26

    There are currently no methods to estimate cell density in the prostate. This study aimed to develop predictive models to estimate prostate cell density from multiparametric magnetic resonance imaging (mpMRI) data at a voxel level using machine learning techniques. In vivo mpMRI data were collected from 30 patients before radical prostatectomy. Sequences included T2-weighted imaging, diffusion-weighted imaging and dynamic contrast-enhanced imaging. Ground truth cell density maps were computed from histology and co-registered with mpMRI. Feature extraction and selection were performed on mpMRI data. Final models were fitted using three regression algorithms including multivariate adaptive regression spline (MARS), polynomial regression (PR) and generalised additive model (GAM). Model parameters were optimised using leave-one-out cross-validation on the training data and model performance was evaluated on test data using root mean square error (RMSE) measurements. Predictive models to estimate voxel-wise prostate cell density were successfully trained and tested using the three algorithms. The best model (GAM) achieved an RMSE of 1.06 (± 0.06) × 10³ cells/mm² and a relative deviation of 13.3 ± 0.8%. Prostate cell density can be quantitatively estimated non-invasively from mpMRI data using high-quality co-registered data at a voxel level. These cell density predictions could be used for tissue classification, treatment response evaluation and personalised radiotherapy.
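
    As a sketch of the voxel-level workflow, the example below fits one of the three named algorithm families (polynomial regression) to synthetic mpMRI-like features and evaluates it with leave-one-out cross-validated RMSE; the feature names, sample size and noise levels are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(3)
n_voxels = 300
# Hypothetical voxel-level mpMRI features (e.g., T2 intensity, ADC, DCE slope)
X = rng.normal(size=(n_voxels, 3))
cell_density = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + rng.normal(0, 0.3, n_voxels)

# Polynomial regression evaluated with leave-one-out cross-validated RMSE
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
neg_mse = cross_val_score(model, X, cell_density,
                          scoring="neg_mean_squared_error", cv=LeaveOneOut())
print("LOO RMSE:", np.sqrt(-neg_mse.mean()))
```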

  6. Groundwater depth prediction in a shallow aquifer in north China by a quantile regression model

    NASA Astrophysics Data System (ADS)

    Li, Fawen; Wei, Wan; Zhao, Yong; Qiao, Jiale

    2017-01-01

    There is a close relationship between the groundwater level in a shallow aquifer and the surface ecological environment; hence, it is important to accurately simulate and predict the groundwater level in eco-environmental construction projects. The multiple linear regression (MLR) model is one of the most useful methods to predict groundwater level (depth); however, the values predicted by this model only reflect the mean distribution of the observations and cannot effectively fit data in the extremes of the distribution (outliers). The study reported here builds a prediction model of groundwater-depth dynamics in a shallow aquifer using the quantile regression (QR) method on the basis of observed data on groundwater depth and related factors. The proposed approach was applied to five sites in Tianjin city, north China, and the groundwater depth was calculated at different quantiles, from which the optimal quantile was selected using the box plot method and compared with the values predicted by the MLR model. The results showed that the related factors at the five sites did not follow the standard normal distribution and that there were outliers in the precipitation and last-month (initial state) groundwater-depth factors; consequently, the basic assumptions of the MLR model could not be satisfied, causing errors. These conditions had no effect on the QR model, however, as it could describe the distribution of the original data more effectively and fitted the outliers with higher precision.
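
    A brief illustration of why quantile regression can be preferable here: the same formula is fitted by OLS and by quantile regression at several quantiles on synthetic, heavy-tailed groundwater-depth data. The variable names (`precip`, `depth_prev`) and the data-generating model are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 240  # e.g., 20 years of monthly records, synthetic
df = pd.DataFrame({
    "precip": rng.gamma(2.0, 30.0, n),            # monthly precipitation (mm)
    "depth_prev": 4.0 + rng.normal(0, 0.8, n),    # last-month groundwater depth (m)
})
df["depth"] = (0.9 * df["depth_prev"] - 0.004 * df["precip"]
               + rng.standard_t(3, n) * 0.2)      # heavy-tailed noise produces outliers

ols = smf.ols("depth ~ precip + depth_prev", df).fit()
for q in (0.1, 0.5, 0.9):                         # screen several quantiles
    qfit = smf.quantreg("depth ~ precip + depth_prev", df).fit(q=q)
    print("q =", q, qfit.params.values)
print("OLS:", ols.params.values)
```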

  7. Observing Consistency in Online Communication Patterns for User Re-Identification.

    PubMed

    Adeyemi, Ikuesan Richard; Razak, Shukor Abd; Salleh, Mazleena; Venter, Hein S

    2016-01-01

    Comprehension of the statistical and structural mechanisms governing human dynamics in online interaction plays a pivotal role in online user identification, online profile development, and recommender systems. However, building a characteristic model of human dynamics on the Internet involves a complete analysis of the variations in human activity patterns, which is a complex process. This complexity is inherent in human dynamics and has not been extensively studied to reveal the structural composition of human behavior. A typical way of dissecting such a complex system is to examine all of the independent interconnections that constitute the complexity. An examination of the various dimensions of human communication patterns in online interactions is presented in this paper. The study employed reliable server-side web data from 31 known users to explore characteristics of human-driven communications. Various machine-learning techniques were explored. The results revealed that each individual exhibited a relatively consistent, unique behavioral signature and that the logistic regression model and model tree can be used to accurately distinguish online users. These results are applicable to one-to-one online user identification processes, insider misuse investigation processes, and online profiling in various areas.

  8. Monthly Rainfall Erosivity Assessment for Switzerland

    NASA Astrophysics Data System (ADS)

    Schmidt, Simon; Meusburger, Katrin; Alewell, Christine

    2016-04-01

    Water erosion is crucially controlled by rainfall erosivity, which is quantified out of the kinetic energy of raindrop impact and associated surface runoff. Rainfall erosivity is often expressed as the R-factor in soil erosion risk models like the Universal Soil Loss Equation (USLE) and its revised version (RUSLE). Just like precipitation, the rainfall erosivity of Switzerland has a characteristic seasonal dynamic throughout the year. This inter-annual variability is to be assessed by a monthly and seasonal modelling approach. We used a network of 86 precipitation gauging stations with a 10-minute temporal resolution to calculate long-term average monthly R-factors. Stepwise regression and Monte Carlo Cross Validation (MCCV) was used to select spatial covariates to explain the spatial pattern of R-factor for each month across Switzerland. The regionalized monthly R-factor is mapped by its individual regression equation and the ordinary kriging interpolation of its residuals (Regression-Kriging). As covariates, a variety of precipitation indicator data has been included like snow height, a combination of hourly gauging measurements and radar observations (CombiPrecip), mean monthly alpine precipitation (EURO4M-APGD) and monthly precipitation sums (Rhires). Topographic parameters were also significant explanatory variables for single months. The comparison of all 12 monthly rainfall erosivity maps showed seasonality with highest rainfall erosivity in summer (June, July, and August) and lowest rainfall erosivity in winter months. Besides the inter-annual temporal regime, a seasonal spatial variability was detectable. Spatial maps of monthly rainfall erosivity are presented for the first time for Switzerland. The assessment of the spatial and temporal dynamic behaviour of the R-factor is valuable for the identification of more susceptible seasons and regions as well as for the application of selective erosion control measures. A combination with monthly vegetation cover (C-factor) maps would enable the assessment of seasonal dynamics of erosion processes in Switzerland.

  9. The Virtual Brain: Modeling Biological Correlates of Recovery after Chronic Stroke

    PubMed Central

    Falcon, Maria Inez; Riley, Jeffrey D.; Jirsa, Viktor; McIntosh, Anthony R.; Shereen, Ahmed D.; Chen, E. Elinor; Solodkin, Ana

    2015-01-01

    There currently remains considerable variability in stroke survivor recovery. To address this, developing individualized treatment has become an important goal in stroke treatment. As a first step, it is necessary to determine brain dynamics associated with stroke and recovery. While recent methods have made strides in this direction, we still lack physiological biomarkers. The Virtual Brain (TVB) is a novel application for modeling brain dynamics that simulates an individual’s brain activity by integrating their own neuroimaging data with local biophysical models. Here, we give a detailed description of the TVB modeling process and explore model parameters associated with stroke. In order to establish a parallel between this new type of modeling and those currently in use, in this work we establish an association between a specific TVB parameter (long-range coupling) that increases after stroke with metrics derived from graph analysis. We used TVB to simulate the individual BOLD signals for 20 patients with stroke and 10 healthy controls. We performed graph analysis on their structural connectivity matrices calculating degree centrality, betweenness centrality, and global efficiency. Linear regression analysis demonstrated that long-range coupling is negatively correlated with global efficiency (P = 0.038), but is not correlated with degree centrality or betweenness centrality. Our results suggest that the larger influence of local dynamics seen through the long-range coupling parameter is closely associated with a decreased efficiency of the system. We thus propose that the increase in the long-range parameter in TVB (indicating a bias toward local over global dynamics) is deleterious because it reduces communication as suggested by the decrease in efficiency. The new model platform TVB hence provides a novel perspective to understanding biophysical parameters responsible for global brain dynamics after stroke, allowing the design of focused therapeutic interventions. PMID:26579071

  10. Supervised Learning for Dynamical System Learning.

    PubMed

    Hefny, Ahmed; Downey, Carlton; Gordon, Geoffrey J

    2015-01-01

    Recently there has been substantial interest in spectral methods for learning dynamical systems. These methods are popular since they often offer a good tradeoff between computational and statistical efficiency. Unfortunately, they can be difficult to use and extend in practice: e.g., they can make it difficult to incorporate prior information such as sparsity or structure. To address this problem, we present a new view of dynamical system learning: we show how to learn dynamical systems by solving a sequence of ordinary supervised learning problems, thereby allowing users to incorporate prior knowledge via standard techniques such as L1 regularization. Many existing spectral methods are special cases of this new framework, using linear regression as the supervised learner. We demonstrate the effectiveness of our framework by showing examples where nonlinear regression or lasso let us learn better state representations than plain linear regression does; the correctness of these instances follows directly from our general analysis.
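
    A toy version of the "dynamical system learning as supervised learning" view: future observations are regressed on a window of past observations, and swapping the ridge learner for a lasso adds the kind of L1/sparsity prior mentioned above. The simulated system, history length and regularization strengths are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(5)
# Simulate a simple 2-state linear dynamical system observed with noise
A = np.array([[0.95, 0.10], [-0.10, 0.95]])
x = np.zeros(2)
obs = []
for _ in range(1000):
    x = A @ x + rng.normal(0, 0.2, 2)
    obs.append(x + rng.normal(0, 0.1, 2))
obs = np.array(obs)

# Regress future observations on a window of past observations;
# Lasso adds an L1/sparsity prior to the same supervised problem.
k = 5                                    # history length (an arbitrary design choice)
past = np.hstack([obs[i:len(obs) - k + i] for i in range(k)])
future = obs[k:]
for Learner in (Ridge, Lasso):
    model = Learner(alpha=1e-3, max_iter=5000).fit(past, future)
    print(Learner.__name__, "one-step R^2:", round(model.score(past, future), 3))
```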

  11. Modeling workplace bullying using catastrophe theory.

    PubMed

    Escartin, J; Ceja, L; Navarro, J; Zapf, D

    2013-10-01

    Workplace bullying is defined as negative behaviors directed at organizational members or their work context that occur regularly and repeatedly over a period of time. Employees' perceptions of psychosocial safety climate, workplace bullying victimization, and workplace bullying perpetration were assessed within a sample of nearly 5,000 workers. Linear and nonlinear approaches were applied in order to model both continuous and sudden changes in workplace bullying. More specifically, the present study examines whether a nonlinear dynamical systems model (i.e., a cusp catastrophe model) is superior to the linear combination of variables for predicting the effect of psychosocial safety climate and workplace bullying victimization on workplace bullying perpetration. According to the AICc and BIC indices, the linear regression model fits the data better than the cusp catastrophe model. The study concludes that some phenomena, especially unhealthy behaviors at work (like workplace bullying), may be better studied using linear approaches as opposed to nonlinear dynamical systems models. This can be explained through the healthy variability hypothesis, which argues that positive organizational behavior is likely to present nonlinear behavior, while a decrease in such variability may indicate the occurrence of negative behaviors at work.

  12. The Lateral Tracking Control for the Intelligent Vehicle Based on Adaptive PID Neural Network.

    PubMed

    Han, Gaining; Fu, Weiping; Wang, Wen; Wu, Zongsheng

    2017-05-30

    The intelligent vehicle is a complicated nonlinear system, and the design of a path tracking controller is one of the key technologies in intelligent vehicle research. This paper mainly designs a lateral control dynamic model of the intelligent vehicle, which is used for lateral tracking control. Firstly, the vehicle dynamics model (i.e., transfer function) is established according to the vehicle parameters. Secondly, according to the vehicle steering control system and the CARMA (Controlled Auto-Regression and Moving-Average) model, a second-order control system model is built. Using forgetting factor recursive least squares estimation (FFRLS), the system parameters are identified. Finally, a neural network PID (Proportion Integral Derivative) controller is established for lateral path tracking control based on the vehicle model and the steering system model. Experimental simulation results show that the proposed model and algorithm offer high real-time performance and robustness in path tracking control. This provides a theoretical basis for intelligent vehicle autonomous navigation tracking control and lays the foundation for vertical-lateral coupling control.
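
    The following sketch shows a generic forgetting-factor recursive least squares (FFRLS) identification of a second-order ARX model, the same family of estimator named above; the simulated input/output data, true coefficients and forgetting factor are illustrative, not the steering system of the paper.

```python
import numpy as np

def ffrls(phi, y, n_params, lam=0.98):
    """Forgetting-factor recursive least squares for theta in y[k] = phi[k] . theta."""
    theta = np.zeros(n_params)
    P = np.eye(n_params) * 1e3
    for p, yk in zip(phi, y):
        K = P @ p / (lam + p @ P @ p)            # gain
        theta = theta + K * (yk - p @ theta)     # parameter update
        P = (P - np.outer(K, p) @ P) / lam       # covariance update with forgetting
    return theta

# Toy second-order ARX data: y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
rng = np.random.default_rng(6)
u = rng.normal(size=1500)
y = np.zeros(1500)
for k in range(2, 1500):
    y[k] = 1.2 * y[k-1] - 0.5 * y[k-2] + 0.3 * u[k-1] + 0.1 * u[k-2] + rng.normal(0, 0.01)

phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta = ffrls(phi, y[2:], n_params=4)
print("estimated [a1, a2, b1, b2]:", theta.round(3))
```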

  13. The Lateral Tracking Control for the Intelligent Vehicle Based on Adaptive PID Neural Network

    PubMed Central

    Han, Gaining; Fu, Weiping; Wang, Wen; Wu, Zongsheng

    2017-01-01

    The intelligent vehicle is a complicated nonlinear system, and the design of a path tracking controller is one of the key technologies in intelligent vehicle research. This paper mainly designs a lateral control dynamic model of the intelligent vehicle, which is used for lateral tracking control. Firstly, the vehicle dynamics model (i.e., transfer function) is established according to the vehicle parameters. Secondly, according to the vehicle steering control system and the CARMA (Controlled Auto-Regression and Moving-Average) model, a second-order control system model is built. Using forgetting factor recursive least squares estimation (FFRLS), the system parameters are identified. Finally, a neural network PID (Proportion Integral Derivative) controller is established for lateral path tracking control based on the vehicle model and the steering system model. Experimental simulation results show that the proposed model and algorithm offer high real-time performance and robustness in path tracking control. This provides a theoretical basis for intelligent vehicle autonomous navigation tracking control and lays the foundation for vertical-lateral coupling control. PMID:28556817

  14. CO2 flux determination by closed-chamber methods can be seriously biased by inappropriate application of linear regression

    NASA Astrophysics Data System (ADS)

    Kutzbach, L.; Schneider, J.; Sachs, T.; Giebels, M.; Nykänen, H.; Shurpali, N. J.; Martikainen, P. J.; Alm, J.; Wilmking, M.

    2007-11-01

    Closed (non-steady state) chambers are widely used for quantifying carbon dioxide (CO2) fluxes between soils or low-stature canopies and the atmosphere. It is well recognised that covering a soil or vegetation by a closed chamber inherently disturbs the natural CO2 fluxes by altering the concentration gradients between the soil, the vegetation and the overlying air. Thus, the driving factors of CO2 fluxes are not constant during the closed chamber experiment, and no linear increase or decrease of CO2 concentration over time within the chamber headspace can be expected. Nevertheless, linear regression has been applied for calculating CO2 fluxes in many recent, partly influential, studies. This approach has been justified by keeping the closure time short and assuming the concentration change over time to be in the linear range. Here, we test if the application of linear regression is really appropriate for estimating CO2 fluxes using closed chambers over short closure times and if the application of nonlinear regression is necessary. We developed a nonlinear exponential regression model from diffusion and photosynthesis theory. This exponential model was tested with four different datasets of CO2 flux measurements (total number: 1764) conducted at three peatlands sites in Finland and a tundra site in Siberia. Thorough analyses of residuals demonstrated that linear regression was frequently not appropriate for the determination of CO2 fluxes by closed-chamber methods, even if closure times were kept short. The developed exponential model was well suited for nonlinear regression of the concentration over time c(t) evolution in the chamber headspace and estimation of the initial CO2 fluxes at closure time for the majority of experiments. However, a rather large percentage of the exponential regression functions showed curvatures not consistent with the theoretical model which is considered to be caused by violations of the underlying model assumptions. Especially the effects of turbulence and pressure disturbances by the chamber deployment are suspected to have caused unexplainable curvatures. CO2 flux estimates by linear regression can be as low as 40% of the flux estimates of exponential regression for closure times of only two minutes. The degree of underestimation increased with increasing CO2 flux strength and was dependent on soil and vegetation conditions which can disturb not only the quantitative but also the qualitative evaluation of CO2 flux dynamics. The underestimation effect by linear regression was observed to be different for CO2 uptake and release situations which can lead to stronger bias in the daily, seasonal and annual CO2 balances than in the individual fluxes. To avoid serious bias of CO2 flux estimates based on closed chamber experiments, we suggest further tests using published datasets and recommend the use of nonlinear regression models for future closed chamber studies.
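
    A compact illustration of the linear-versus-exponential comparison: a saturating chamber concentration curve is generated, the exponential model c(t) = c_sat + (c0 - c_sat) * exp(-k t) is fitted by nonlinear least squares, and its initial slope is compared with the ordinary linear-regression slope. The closure time, noise level and rate constant are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic closed-chamber CO2 record (2-minute closure, 5-s sampling)
rng = np.random.default_rng(7)
t = np.arange(0, 121, 5, dtype=float)                            # seconds
c_true = lambda t: 600.0 - (600.0 - 400.0) * np.exp(-0.01 * t)   # ppm, saturating rise
c_obs = c_true(t) + rng.normal(0, 1.0, t.size)

# Exponential regression model; the initial flux is proportional to dc/dt at t = 0,
# i.e. k * (c_sat - c0).
def c_model(t, c0, c_sat, k):
    return c_sat + (c0 - c_sat) * np.exp(-k * t)

(c0, c_sat, k), _ = curve_fit(c_model, t, c_obs, p0=[c_obs[0], c_obs[-1] + 50, 0.005])
slope_linear = np.polyfit(t, c_obs, 1)[0]                        # linear-regression estimate
slope_exp_t0 = k * (c_sat - c0)                                  # exponential estimate at closure
print(f"linear dC/dt = {slope_linear:.3f} ppm/s, exponential dC/dt(0) = {slope_exp_t0:.3f} ppm/s")
```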

  15. Geodesic regression for image time-series.

    PubMed

    Niethammer, Marc; Huang, Yang; Vialard, François-Xavier

    2011-01-01

    Registration of image-time series has so far been accomplished (i) by concatenating registrations between image pairs, (ii) by solving a joint estimation problem resulting in piecewise geodesic paths between image pairs, (iii) by kernel based local averaging or (iv) by augmenting the joint estimation with additional temporal irregularity penalties. Here, we propose a generative model extending least squares linear regression to the space of images by using a second-order dynamic formulation for image registration. Unlike previous approaches, the formulation allows for a compact representation of an approximation to the full spatio-temporal trajectory through its initial values. The method also opens up possibilities to design image-based approximation algorithms. The resulting optimization problem is solved using an adjoint method.

  16. Impacts of land use change on watershed streamflow and sediment yield: An assessment using hydrologic modelling and partial least squares regression

    NASA Astrophysics Data System (ADS)

    Yan, B.; Fang, N. F.; Zhang, P. C.; Shi, Z. H.

    2013-03-01

    Understanding how changes in individual land use types influence the dynamics of streamflow and sediment yield would greatly improve the predictability of the hydrological consequences of land use changes and could thus help stakeholders to make better decisions. Multivariate statistics are commonly used to compare how individual land use types control the dynamics of streamflow or sediment yield. However, one issue with the use of conventional statistical methods to address relationships between land use types and streamflow or sediment yield is multicollinearity. In this study, an integrated approach involving hydrological modelling and partial least squares regression (PLSR) was used to quantify the contributions of changes in individual land use types to changes in streamflow and sediment yield. In a case study, hydrological modelling was conducted using land use maps from four time periods (1978, 1987, 1999, and 2007) for the Upper Du watershed (8973 km2) in China using the Soil and Water Assessment Tool (SWAT). Changes in streamflow and sediment yield between the two simulations conducted using the 1978 and 2007 land use maps were found to be related to land use changes according to a PLSR, which was used to quantify the effect of this influence at the sub-basin scale. The major land use changes that affected streamflow in the studied catchment areas were related to changes in the farmland, forest and urban areas between 1978 and 2007; the corresponding regression coefficients were 0.232, -0.147 and 1.256, respectively, and the Variable Influence on Projection (VIP) was greater than 1. The dominant first-order factors affecting the changes in sediment yield in our study were farmland (the VIP and regression coefficient were 1.762 and 14.343, respectively) and forest (the VIP and regression coefficient were 1.517 and -7.746, respectively). The PLSR methodology presented in this paper is beneficial and novel, as it partially eliminates the co-dependency of the variables and facilitates a more unbiased view of the contribution of the changes in individual land use types to changes in streamflow and sediment yield. This practicable and simple approach could be applied to a variety of other watersheds for which time-sequenced digital land use maps are available.
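
    A minimal PLSR sketch in the spirit of the analysis above: collinear, hypothetical land-use-change predictors are regressed on a synthetic streamflow change, and Variable Influence on Projection (VIP) scores are computed from the fitted components. The VIP helper follows the standard formula, but the data and coefficients are made up.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(8)
n = 60
# Hypothetical sub-basin land-use changes with built-in collinearity
farmland = rng.normal(0, 1, n)
forest = -0.7 * farmland + rng.normal(0, 0.5, n)          # collinear with farmland
urban = rng.normal(0, 1, n)
X = np.column_stack([farmland, forest, urban])
dQ = 0.25 * farmland - 0.15 * forest + 1.2 * urban + rng.normal(0, 0.3, n)   # streamflow change

pls = PLSRegression(n_components=2).fit(X, dQ)

def vip(pls):
    """Variable Influence on Projection scores for a fitted PLSRegression model."""
    W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_
    p, a = W.shape
    ssy = np.array([(T[:, i] @ T[:, i]) * (Q[0, i] ** 2) for i in range(a)])
    wnorm = W / np.linalg.norm(W, axis=0)
    return np.sqrt(p * (wnorm ** 2 @ ssy) / ssy.sum())

print("coefficients:", pls.coef_.ravel().round(3))
print("VIP:", vip(pls).round(2))          # VIP > 1 flags influential predictors
```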

  17. Optimal Control Method of Robot End Position and Orientation Based on Dynamic Tracking Measurement

    NASA Astrophysics Data System (ADS)

    Liu, Dalong; Xu, Lijuan

    2018-01-01

    In order to improve the accuracy of robot pose positioning and control, this paper proposes an optimal control method for the robot end position and orientation based on dynamic tracking measurement, using actually measured D-H parameters of the robot with feedback compensation. According to the geometrical parameters obtained by robot pose tracking measurement, an improved multi-sensor information fusion extended Kalman filter method with continuous self-optimal regression uses the geometric relationships between the joint axes to estimate the kinematic parameters of the model; the link model parameters obtained can then be fed back to the robot in a timely manner to implement parameter correction and compensation. Finally, the optimal attitude angle is obtained and optimal control of the robot pose is realized, and experiments were performed. A self-developed 6R joint robot under dynamic tracking control was taken as the experimental subject. The simulation results show that the control method improves robot positioning accuracy and has the advantages of versatility, simplicity and ease of operation.

  18. Measuring the Impact of Financial Intermediation: Linking Contract Theory to Econometric Policy Evaluation *

    PubMed Central

    Townsend, Robert M.; Urzua, Sergio S.

    2010-01-01

    We study the impact that financial intermediation can have on productivity through the alleviation of credit constraints in occupation choice and/or an improved allocation of risk, using both static and dynamic structural models as well as reduced form OLS and IV regressions. Our goal in this paper is to bring these two strands of the literature together. Even though, under certain assumptions, IV regressions can recover accurately the true model-generated local average treatment effect, these are quantitatively different, in order of magnitude and even sign, from other policy impact parameters (e.g., ATE and TT). We also show that laying out clearly alternative models can guide the search for instruments. On the other hand adding more margins of decision, i.e., occupation choice and intermediation jointly, or adding more periods with promised utilities as key state variables, as in optimal multi-period contracts, can cause the misinterpretation of IV as the causal effect of interest. PMID:20436953

  19. Mobile Phone-Based Unobtrusive Ecological Momentary Assessment of Day-to-Day Mood: An Explorative Study.

    PubMed

    Asselbergs, Joost; Ruwaard, Jeroen; Ejdys, Michal; Schrader, Niels; Sijbrandij, Marit; Riper, Heleen

    2016-03-29

    Ecological momentary assessment (EMA) is a useful method to tap the dynamics of psychological and behavioral phenomena in real-world contexts. However, the response burden of (self-report) EMA limits its clinical utility. The aim was to explore mobile phone-based unobtrusive EMA, in which mobile phone usage logs are considered as proxy measures of clinically relevant user states and contexts. This was an uncontrolled explorative pilot study. Our study consisted of 6 weeks of EMA/unobtrusive EMA data collection in a Dutch student population (N=33), followed by a regression modeling analysis. Participants self-monitored their mood on their mobile phone (EMA) with a one-dimensional mood measure (1 to 10) and a two-dimensional circumplex measure (arousal/valence, -2 to 2). Meanwhile, with participants' consent, a mobile phone app unobtrusively collected (meta) data from six smartphone sensor logs (unobtrusive EMA: calls/short message service (SMS) text messages, screen time, application usage, accelerometer, and phone camera events). Through forward stepwise regression (FSR), we built personalized regression models from the unobtrusive EMA variables to predict day-to-day variation in EMA mood ratings. The predictive performance of these models (ie, cross-validated mean squared error and percentage of correct predictions) was compared to naive benchmark regression models (the mean model and a lag-2 history model). A total of 27 participants (81%) provided a mean 35.5 days (SD 3.8) of valid EMA/unobtrusive EMA data. The FSR models accurately predicted 55% to 76% of EMA mood scores. However, the predictive performance of these models was significantly inferior to that of naive benchmark models. Mobile phone-based unobtrusive EMA is a technically feasible and potentially powerful EMA variant. The method is young and positive findings may not replicate. At present, we do not recommend the application of FSR-based mood prediction in real-world clinical settings. Further psychometric studies and more advanced data mining techniques are needed to unlock unobtrusive EMA's true potential.

  20. A financial network perspective of financial institutions' systemic risk contributions

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Qiang; Zhuang, Xin-Tian; Yao, Shuang; Uryasev, Stan

    2016-08-01

    This study considers the effects of financial institutions' local topology structure in the financial network on their systemic risk contribution, using data from the Chinese stock market. We first measure the systemic risk contribution with the Conditional Value-at-Risk (CoVaR), which is estimated by applying a dynamic conditional correlation multivariate GARCH model (DCC-MVGARCH). Financial networks are constructed from the dynamic conditional correlations (DCC) with the graph-filtering method of minimum spanning trees (MSTs). We then investigate the dynamics of the systemic risk contributions of financial institutions, as well as the dynamics of each financial institution's local topology structure in the financial network. Finally, we analyze the quantitative relationships between the local topology structure and the systemic risk contribution with panel data regression analysis. We find that financial institutions with greater node strength, larger node betweenness centrality, larger node closeness centrality and larger node clustering coefficient tend to be associated with larger systemic risk contributions.
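
    A simplified sketch of the network-filtering step: a correlation matrix (here a static Pearson correlation standing in for the paper's DCC estimates) is converted to distances and filtered with a minimum spanning tree, from which centralities can be computed. The returns are simulated and the common-factor structure is an assumption.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(9)
n_inst, n_days = 8, 500
returns = rng.normal(0, 0.01, (n_days, n_inst)) + 0.005 * rng.normal(0, 1, (n_days, 1))

# Correlation -> distance d_ij = sqrt(2 * (1 - rho_ij)), then filter with an MST,
# the graph-filtering step used before computing node strength and centralities.
rho = np.corrcoef(returns.T)
dist = np.sqrt(2.0 * (1.0 - rho))

G = nx.Graph()
for i in range(n_inst):
    for j in range(i + 1, n_inst):
        G.add_edge(i, j, weight=dist[i, j])
mst = nx.minimum_spanning_tree(G)

print("MST edges:", sorted(mst.edges(data="weight")))
print("betweenness centrality:", nx.betweenness_centrality(mst))
```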

  1. A dynamic multi-level optimal design method with embedded finite-element modeling for power transformers

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by Shifted Hammersley Method (SHM) and calculated by finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated through changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated through a classic three-phase power TDO problem.

  2. Active Learning to Understand Infectious Disease Models and Improve Policy Making

    PubMed Central

    Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel

    2014-01-01

    Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings. PMID:24743387

  3. Active learning to understand infectious disease models and improve policy making.

    PubMed

    Willem, Lander; Stijven, Sean; Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel

    2014-04-01

    Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings.

  4. Design optimization of tailor-rolled blank thin-walled structures based on ɛ-support vector regression technique and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Duan, Libin; Xiao, Ning-cong; Li, Guangyao; Cheng, Aiguo; Chen, Tao

    2017-07-01

    Tailor-rolled blank thin-walled (TRB-TH) structures have become important vehicle components owing to their advantages of light weight and crashworthiness. The purpose of this article is to provide an efficient lightweight design for improving the energy-absorbing capability of TRB-TH structures under dynamic loading. A finite element (FE) model for TRB-TH structures is established and validated by performing a dynamic axial crash test. Different material properties for individual parts with different thicknesses are considered in the FE model. Then, a multi-objective crashworthiness design of the TRB-TH structure is constructed based on the ɛ-support vector regression (ɛ-SVR) technique and non-dominated sorting genetic algorithm-II. The key parameters (C, ɛ and σ) are optimized to further improve the predictive accuracy of ɛ-SVR under limited sample points. Finally, the technique for order preference by similarity to the ideal solution method is used to rank the solutions in Pareto-optimal frontiers and find the best compromise optima. The results demonstrate that the light weight and crashworthiness performance of the optimized TRB-TH structures are superior to their uniform thickness counterparts. The proposed approach provides useful guidance for designing TRB-TH energy absorbers for vehicle bodies.
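
    A small example of an ɛ-SVR surrogate with its key parameters (C, ɛ, γ) tuned by cross-validated grid search, echoing the parameter-tuning step described above; the design variables, response and sample size are hypothetical stand-ins for the FE crash simulations.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(10)
# Hypothetical design variables of a tailor-rolled tube (thicknesses t1, t2, rolled-zone length)
X = rng.uniform([1.0, 1.0, 20.0], [3.0, 3.0, 80.0], size=(40, 3))
energy = 5.0 * X[:, 0] + 3.0 * X[:, 1] + 0.05 * X[:, 2] ** 1.2 + rng.normal(0, 0.5, 40)

# epsilon-SVR surrogate with C, epsilon and gamma tuned by grid search
grid = {"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1, 0.5], "svr__gamma": [0.1, 1.0, "scale"]}
pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
search = GridSearchCV(pipe, grid, cv=5, scoring="neg_root_mean_squared_error").fit(X, energy)
print("best params:", search.best_params_, "CV RMSE:", -search.best_score_)
```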

  5. An ensemble Kalman filter for statistical estimation of physics constrained nonlinear regression models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Mahdi, Adam, E-mail: amahdi@ncsu.edu; Majda, Andrew J., E-mail: jonjon@cims.nyu.edu

    2014-01-15

    A central issue in contemporary science is the development of nonlinear data driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad-hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics constrained nonlinear regression models were developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, the model and the observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skew non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.

  6. Multiscale asymmetric orthogonal wavelet kernel for linear programming support vector learning and nonlinear dynamic systems identification.

    PubMed

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2014-05-01

    Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.

  7. Dynamic genome-scale metabolic modeling of the yeast Pichia pastoris.

    PubMed

    Saitua, Francisco; Torres, Paulina; Pérez-Correa, José Ricardo; Agosin, Eduardo

    2017-02-21

    Pichia pastoris shows physiological advantages in producing recombinant proteins, compared to other commonly used cell factories. This yeast is mostly grown in dynamic cultivation systems, where the cell's environment is continuously changing and many variables influence process productivity. In this context, a model capable of explaining and predicting cell behavior for the rational design of bioprocesses is highly desirable. Currently, there are five genome-scale metabolic reconstructions of P. pastoris which have been used to predict extracellular cell behavior in stationary conditions. In this work, we assembled a dynamic genome-scale metabolic model for glucose-limited, aerobic cultivations of Pichia pastoris. Starting from an initial model structure for batch and fed-batch cultures, we performed pre/post regression diagnostics to ensure that model parameters were identifiable, significant and sensitive. Once identified, the non-relevant ones were iteratively fixed until a priori robust modeling structures were found for each type of cultivation. Next, the robustness of these reduced structures was confirmed by calibrating the model with new datasets, where no sensitivity, identifiability or significance problems appeared in their parameters. Afterwards, the model was validated for the prediction of batch and fed-batch dynamics in the studied conditions. Lastly, the model was employed as a case study to analyze the metabolic flux distribution of a fed-batch culture and to unravel genetic and process engineering strategies to improve the production of recombinant Human Serum Albumin (HSA). Simulation of single knock-outs indicated that deviation of carbon towards cysteine and tryptophan formation improves HSA production. The deletion of methylene tetrahydrofolate dehydrogenase could increase the HSA volumetric productivity by 630%. Moreover, given specific bioprocess limitations and strain characteristics, the model suggests that implementation of a decreasing specific growth rate during the feed phase of a fed-batch culture results in a 25% increase of the volumetric productivity of the protein. In this work, we formulated a dynamic genome scale metabolic model of Pichia pastoris that yields realistic metabolic flux distributions throughout dynamic cultivations. The model can be calibrated with experimental data to rationally propose genetic and process engineering strategies to improve the performance of a P. pastoris strain of interest.

  8. Eutrophication risk assessment in coastal embayments using simple statistical models.

    PubMed

    Arhonditsis, G; Eleftheriadou, M; Karydis, M; Tsirtsis, G

    2003-09-01

    A statistical methodology is proposed for assessing the risk of eutrophication in marine coastal embayments. The procedure followed was the development of regression models relating the levels of chlorophyll a (Chl) with the concentration of the limiting nutrient (usually nitrogen) and the renewal rate of the systems. The method was applied in the Gulf of Gera, Island of Lesvos, Aegean Sea, and a surrogate for renewal rate was created using the Canberra metric as a measure of the resemblance between the Gulf and the oligotrophic waters of the open sea in terms of their physical, chemical and biological properties. The Chl-total dissolved nitrogen-renewal rate regression model was the most significant, accounting for 60% of the variation observed in Chl. Predicted distributions of Chl for various combinations of the independent variables, based on Bayesian analysis of the models, enabled comparison of the outcomes of specific scenarios of interest as well as further analysis of the system dynamics. The present statistical approach can be used as a methodological tool for testing the resilience of coastal ecosystems under alternative managerial schemes and levels of exogenous nutrient loading.

  9. Creep analysis of silicone for podiatry applications.

    PubMed

    Janeiro-Arocas, Julia; Tarrío-Saavedra, Javier; López-Beceiro, Jorge; Naya, Salvador; López-Canosa, Adrián; Heredia-García, Nicolás; Artiaga, Ramón

    2016-10-01

    This work shows an effective methodology to characterize the creep-recovery behavior of silicones before their application in podiatry. The aim is to characterize, model and compare the creep-recovery properties of different types of silicone used in podiatry orthotics. Creep-recovery phenomena of silicones used in podiatry orthotics are characterized by dynamic mechanical analysis (DMA). Silicones provided by Herbitas are compared by observing their viscoelastic properties through Functional Data Analysis (FDA) and nonlinear regression. The relationship between strain and time is modeled by fixed and mixed effects nonlinear regression to compare podiatry silicones easily and intuitively. Functional ANOVA and the Kohlrausch-Williams-Watts (KWW) model with fixed and mixed effects allow us to compare different silicones by observing the values of the fitting parameters and their physical meaning. The differences between silicones are related to variations in the breadth of the creep-recovery time distribution and in the instantaneous deformation-permanent strain. Nevertheless, the mean creep-relaxation time is the same for all the studied silicones. Silicones used in palliative orthoses have higher instantaneous deformation-permanent strain and a narrower creep-recovery distribution. The proposed methodology based on DMA, FDA and nonlinear regression is a useful tool to characterize and choose the proper silicone for each podiatry application according to its viscoelastic properties.
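
    Since the record above fits a Kohlrausch-Williams-Watts form by nonlinear regression, a minimal fixed-effects sketch of that step is given below using scipy's curve_fit on a synthetic strain-time curve; the parameter names, starting values and data are illustrative assumptions rather than the paper's mixed-effects procedure.

    ```python
    # Sketch: fixed-effects fit of a KWW (stretched-exponential) creep model to
    # strain-versus-time data, as a simplified stand-in for the paper's fixed/mixed
    # effects nonlinear regression. Parameter names and the toy data are illustrative.
    import numpy as np
    from scipy.optimize import curve_fit

    def kww_creep(t, eps0, deps, tau, beta):
        """Instantaneous strain eps0 plus a stretched-exponential creep term."""
        return eps0 + deps * (1.0 - np.exp(-(t / tau) ** beta))

    # Synthetic creep curve (time in s, strain dimensionless)
    t = np.linspace(0.1, 600.0, 200)
    rng = np.random.default_rng(1)
    strain = kww_creep(t, 0.02, 0.05, 120.0, 0.6) + 0.001 * rng.standard_normal(t.size)

    p0 = [0.01, 0.04, 100.0, 0.5]                      # initial guesses
    popt, pcov = curve_fit(kww_creep, t, strain, p0=p0,
                           bounds=([0, 0, 1e-3, 0.1], [1, 1, 1e4, 1.0]))
    eps0, deps, tau, beta = popt
    print(f"eps0={eps0:.4f}, delta_eps={deps:.4f}, tau={tau:.1f} s, beta={beta:.2f}")
    ```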

  10. Analysis of a Rocket Based Combined Cycle Engine during Rocket Only Operation

    NASA Technical Reports Server (NTRS)

    Smith, T. D.; Steffen, C. J., Jr.; Yungster, S.; Keller, D. J.

    1998-01-01

    The all rocket mode of operation is a critical factor in the overall performance of a rocket based combined cycle (RBCC) vehicle. However, outside of performing experiments or a full three dimensional analysis, there are no first order parametric models to estimate performance. As a result, an axisymmetric RBCC engine was used to analytically determine specific impulse efficiency values based upon both full flow and gas generator configurations. Design of experiments methodology was used to construct a test matrix and statistical regression analysis was used to build parametric models. The main parameters investigated in this study were: rocket chamber pressure, rocket exit area ratio, percent of injected secondary flow, mixer-ejector inlet area, mixer-ejector area ratio, and mixer-ejector length-to-inject diameter ratio. A perfect gas computational fluid dynamics analysis was performed to obtain values of vacuum specific impulse. Statistical regression analysis was performed based on both full flow and gas generator engine cycles. Results were also found to be dependent upon the entire cycle assumptions. The statistical regression analysis determined that there were five significant linear effects, six interactions, and one second-order effect. Two parametric models were created to provide performance assessments of an RBCC engine in the all rocket mode of operation.

  11. Predictions of Control Inputs, Periodic Responses and Damping Levels of an Isolated Experimental Rotor in Trimmed Flight

    NASA Technical Reports Server (NTRS)

    Gaonkar, G. H.; Subramanian, S.

    1996-01-01

    Since the early 1990s the Aeroflightdynamics Directorate at the Ames Research Center has been conducting tests on isolated hingeless rotors in hover and forward flight. The primary objective is to generate a database on aeroelastic stability in trimmed flight for torsionally soft rotors at realistic tip speeds. The rotor test model has four soft inplane blades of NACA 0012 airfoil section with low torsional stiffness. The collective pitch and shaft tilt are set prior to each test run, and then the rotor is trimmed in the following sense: the longitudinal and lateral cyclic pitch controls are adjusted through a swashplate to minimize the 1/rev flapping moment at the 12 percent radial station. In hover, the database comprises lag regressive-mode damping with pitch variations. In forward flight the database comprises cyclic pitch controls, root flap moment and lag regressive-mode damping with advance ratio, shaft angle and pitch variations. This report presents the predictions and their correlation with the database. A modal analysis is used, in which nonrotating modes in flap bending, lag bending and torsion are computed from the measured blade mass and stiffness distributions. The airfoil aerodynamics is represented by the ONERA dynamic stall models of lift, drag and pitching moment, and the wake dynamics is represented by a state-space wake model. The trim analysis of finding the cyclic controls and the corresponding periodic responses is based on periodic shooting with damped Newton iteration; the Floquet transition matrix (FTM) comes out as a byproduct. The stability analysis of finding the frequencies and damping levels is based on the eigenvalue-eigenvector analysis of the FTM. All the structural and aerodynamic states are included from modeling to trim analysis. A major finding is that dynamic wake dramatically improves the correlation for the lateral cyclic pitch control. Overall, the correlation is fairly good.

  12. Time Series Expression Analyses Using RNA-seq: A Statistical Approach

    PubMed Central

    Oh, Sunghee; Song, Seongho; Grabowski, Gregory; Zhao, Hongyu; Noonan, James P.

    2013-01-01

    RNA-seq is becoming the de facto standard approach for transcriptome analysis with ever-reducing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulations of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model timecourse RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis. PMID:23586021

  13. Time series expression analyses using RNA-seq: a statistical approach.

    PubMed

    Oh, Sunghee; Song, Seongho; Grabowski, Gregory; Zhao, Hongyu; Noonan, James P

    2013-01-01

    RNA-seq is becoming the de facto standard approach for transcriptome analysis with ever-reducing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulations of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model timecourse RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis.
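
    As a concrete illustration of the AR(1) time-lagged regression mentioned in the two records above, the following sketch regresses a single simulated gene's expression at time t on its value at t-1; the simulated series and coefficient values are assumptions, and real RNA-seq counts would require normalization and count-appropriate error models first.

    ```python
    # Sketch: an AR(1) time-lagged regression for one gene's (log-transformed)
    # time-course expression, illustrating the autoregressive approach mentioned
    # above. Data are simulated; real RNA-seq counts would need normalization first.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    T = 12
    expr = np.zeros(T)
    expr[0] = 5.0
    for t in range(1, T):                             # true dynamics: AR(1) with drift
        expr[t] = 1.0 + 0.8 * expr[t - 1] + 0.2 * rng.standard_normal()

    y = expr[1:]                                      # expression at time t
    X = sm.add_constant(expr[:-1])                    # intercept + expression at t-1
    fit = sm.OLS(y, X).fit()
    print(fit.params)                                 # [intercept, AR(1) coefficient]
    print("lag-1 coefficient p-value:", fit.pvalues[1])
    ```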

  14. Functional Data Analysis for Dynamical System Identification of Behavioral Processes

    PubMed Central

    Trail, Jessica B.; Collins, Linda M.; Rivera, Daniel E.; Li, Runze; Piper, Megan E.; Baker, Timothy B.

    2014-01-01

    Efficient new technology has made it straightforward for behavioral scientists to collect anywhere from several dozen to several thousand dense, repeated measurements on one or more time-varying variables. These intensive longitudinal data (ILD) are ideal for examining complex change over time, but present new challenges that illustrate the need for more advanced analytic methods. For example, in ILD the temporal spacing of observations may be irregular, and individuals may be sampled at different times. Also, it is important to assess both how the outcome changes over time and the variation between participants' time-varying processes to make inferences about a particular intervention's effectiveness within the population of interest. The methods presented in this article integrate two innovative ILD analytic techniques: functional data analysis and dynamical systems modeling. An empirical application is presented using data from a smoking cessation clinical trial. Study participants provided 42 daily assessments of pre-quit and post-quit withdrawal symptoms. Regression splines were used to approximate smooth functions of craving and negative affect and to estimate the variables' derivatives for each participant. We then modeled the dynamics of nicotine craving using standard input-output dynamical systems models. These models provide a more detailed characterization of the post-quit craving process than do traditional longitudinal models, including information regarding the type, magnitude, and speed of the response to an input. The results, in conjunction with standard engineering control theory techniques, could potentially be used by tobacco researchers to develop a more effective smoking intervention. PMID:24079929
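
    The spline-plus-derivative step described above can be sketched as follows with scipy's UnivariateSpline; the 42-day craving series, the smoothing factor and the variable names are illustrative assumptions, not the trial's data or the authors' exact estimator.

    ```python
    # Sketch: smoothing a daily craving series with a regression spline and taking
    # its derivative, the first step of the functional-data/dynamical-systems
    # pipeline described above. The 42-day design and variable names are illustrative.
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(3)
    day = np.arange(42, dtype=float)                   # 42 daily assessments
    craving = 5.0 * np.exp(-day / 10.0) + 1.0 + 0.3 * rng.standard_normal(42)

    spline = UnivariateSpline(day, craving, k=3, s=42 * 0.3**2)  # cubic smoothing spline
    craving_smooth = spline(day)
    dcraving_dt = spline.derivative()(day)             # estimated rate of change

    # A first-order input-output model would then regress dcraving_dt on
    # (craving_smooth, negative_affect) to recover gain and time-constant parameters.
    print(dcraving_dt[:5])
    ```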

  15. Inference of Gene Regulatory Networks Incorporating Multi-Source Biological Knowledge via a State Space Model with L1 Regularization

    PubMed Central

    Hasegawa, Takanori; Yamaguchi, Rui; Nagasaki, Masao; Miyano, Satoru; Imoto, Seiya

    2014-01-01

    Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in the field of systems biology. Currently, there are two main approaches in GRN analysis using time-course observation data, namely an ordinary differential equation (ODE)-based approach and a statistical model-based approach. The ODE-based approach can generate complex dynamics of GRNs according to biologically validated nonlinear models. However, it cannot be applied to ten or more genes to simultaneously estimate system dynamics and regulatory relationships due to the computational difficulties. The statistical model-based approach uses highly abstract models to simply describe biological systems and to infer relationships among several hundreds of genes from the data. However, the high abstraction generates false regulations that are not permitted biologically. Thus, when dealing with several tens of genes of which the relationships are partially known, a method that can infer regulatory relationships based on a model with low abstraction and that can emulate the dynamics of ODE-based models while incorporating prior knowledge is urgently required. To accomplish this, we propose a method for inference of GRNs using a state space representation of a vector auto-regressive (VAR) model with L1 regularization. This method can estimate the dynamic behavior of genes based on linear time-series modeling constructed from an ODE-based model and can infer the regulatory structure among several tens of genes maximizing prediction ability for the observational data. Furthermore, the method is capable of incorporating various types of existing biological knowledge, e.g., drug kinetics and literature-recorded pathways. The effectiveness of the proposed method is shown through a comparison of simulation studies with several previous methods. For an application example, we evaluated mRNA expression profiles over time upon corticosteroid stimulation in rats, thus incorporating corticosteroid kinetics/dynamics, literature-recorded pathways and transcription factor (TF) information. PMID:25162401
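
    A stripped-down version of the sparse vector autoregression component described above can be written as one lasso regression per gene; the sketch below does only that, leaving out the state-space representation, the ODE-informed structure and the prior biological knowledge that the method actually incorporates. Network size, penalty strength and the simulated dynamics are assumptions.

    ```python
    # Sketch: a first-order vector autoregression with L1 (lasso) penalties, fitted
    # gene-by-gene with scikit-learn. This mimics the sparse VAR component of the
    # method above but omits the state-space/ODE-informed parts and prior knowledge.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(4)
    G, T = 20, 30                                      # genes, time points
    A_true = np.zeros((G, G))
    A_true[np.arange(G), np.arange(G)] = 0.7           # sparse truth: self-regulation
    A_true[0, 1] = 0.5                                 # plus one cross-regulation
    X = np.zeros((T, G))
    X[0] = rng.standard_normal(G)
    for t in range(1, T):
        X[t] = X[t - 1] @ A_true.T + 0.1 * rng.standard_normal(G)

    A_hat = np.zeros((G, G))
    for g in range(G):                                 # one lasso regression per gene
        lasso = Lasso(alpha=0.05, fit_intercept=False)
        lasso.fit(X[:-1], X[1:, g])                    # predict x_g(t) from x(t-1)
        A_hat[g] = lasso.coef_

    print("nonzero estimated regulations:", np.sum(np.abs(A_hat) > 1e-6))
    ```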

  16. Digression and Value Concatenation to Enable Privacy-Preserving Regression.

    PubMed

    Li, Xiao-Bai; Sarkar, Sumit

    2014-09-01

    Regression techniques can be used not only for legitimate data analysis, but also to infer private information about individuals. In this paper, we demonstrate that regression trees, a popular data-analysis and data-mining technique, can be used to effectively reveal individuals' sensitive data. This problem, which we call a "regression attack," has not been addressed in the data privacy literature, and existing privacy-preserving techniques are not appropriate in coping with this problem. We propose a new approach to counter regression attacks. To protect against privacy disclosure, our approach introduces a novel measure, called digression , which assesses the sensitive value disclosure risk in the process of building a regression tree model. Specifically, we develop an algorithm that uses the measure for pruning the tree to limit disclosure of sensitive data. We also propose a dynamic value-concatenation method for anonymizing data, which better preserves data utility than a user-defined generalization scheme commonly used in existing approaches. Our approach can be used for anonymizing both numeric and categorical data. An experimental study is conducted using real-world financial, economic and healthcare data. The results of the experiments demonstrate that the proposed approach is very effective in protecting data privacy while preserving data quality for research and analysis.

  17. 4D-LQTA-QSAR and docking study on potent Gram-negative specific LpxC inhibitors: a comparison to CoMFA modeling.

    PubMed

    Ghasemi, Jahan B; Safavi-Sohi, Reihaneh; Barbosa, Euzébio G

    2012-02-01

    A quasi 4D-QSAR has been carried out on a series of potent Gram-negative LpxC inhibitors. This approach makes use of the molecular dynamics (MD) trajectories and topology information retrieved from the GROMACS package. This new methodology is based on the generation of a conformational ensemble profile, CEP, for each compound instead of only one conformation, followed by the calculation of intermolecular interaction energies at each grid point considering probes and all aligned conformations resulting from MD simulations. These interaction energies are the independent variables employed in the QSAR analysis. The proposed methodology was compared with the comparative molecular field analysis (CoMFA) formalism. This methodology jointly explores the main features of CoMFA and 4D-QSAR models. Step-wise multiple linear regression was used for the selection of the most informative variables. After variable selection, multiple linear regression (MLR) and partial least squares (PLS) methods were used for building the regression models. Leave-N-out cross-validation (LNO) and Y-randomization were performed in order to confirm the robustness of the model, in addition to analysis of the independent test set. The best models provided the following statistics: [Formula in text] (PLS) and [Formula in text] (MLR). A docking study was applied to investigate the major interactions in the protein-ligand complex with the CDOCKER algorithm. Visualization of the descriptors of the best model helps us to interpret the model from the chemical point of view, supporting the applicability of this new approach in rational drug design.

  18. A Novel Continuous Blood Pressure Estimation Approach Based on Data Mining Techniques.

    PubMed

    Miao, Fen; Fu, Nan; Zhang, Yuan-Ting; Ding, Xiao-Rong; Hong, Xi; He, Qingyun; Li, Ye

    2017-11-01

    Continuous blood pressure (BP) estimation using pulse transit time (PTT) is a promising method for unobtrusive BP measurement. However, the accuracy of this approach must be improved for it to be viable for a wide range of applications. This study proposes a novel continuous BP estimation approach that combines data mining techniques with a traditional mechanism-driven model. First, 14 features derived from simultaneous electrocardiogram and photoplethysmogram signals were extracted for beat-to-beat BP estimation. A genetic algorithm-based feature selection method was then used to select BP indicators for each subject. Multivariate linear regression and support vector regression were employed to develop the BP model. The accuracy and robustness of the proposed approach were validated for static, dynamic, and follow-up performance. Experimental results based on 73 subjects showed that the proposed approach exhibited excellent accuracy in static BP estimation, with a correlation coefficient and mean error of 0.852 and -0.001 ± 3.102 mmHg for systolic BP, and 0.790 and -0.004 ± 2.199 mmHg for diastolic BP. Similar performance was observed for dynamic BP estimation. The robustness results indicated that the estimation accuracy was somewhat lower one day after model construction but was relatively stable from one day to six months after construction. The proposed approach is superior to the state-of-the-art PTT-based model, with an approximately 2-mmHg reduction in the standard deviation at different time intervals, thus providing potentially novel insights for cuffless BP estimation.
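
    To make the regression step above concrete, the sketch below fits both a multivariate linear regression and a support vector regression to simulated PTT-style features and reports the mean ± SD error; the two features, their relationship to systolic BP and all numbers are invented stand-ins for the paper's 14 ECG/PPG features.

    ```python
    # Sketch: beat-to-beat systolic BP estimation from PTT-style features using the
    # two regressors mentioned above (multivariate linear regression and SVR).
    # Features and data are simulated; the paper's 14 ECG/PPG features are not shown.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    n = 500
    ptt = rng.uniform(0.15, 0.35, n)                   # pulse transit time (s)
    hr = rng.uniform(55, 110, n)                       # heart rate (bpm)
    sbp = 180.0 - 220.0 * ptt + 0.2 * hr + 3.0 * rng.standard_normal(n)

    X = np.column_stack([ptt, hr])
    X_tr, X_te, y_tr, y_te = train_test_split(X, sbp, test_size=0.3, random_state=0)

    for name, model in [("linear", LinearRegression()),
                        ("svr", SVR(kernel="rbf", C=50.0, epsilon=1.0))]:
        model.fit(X_tr, y_tr)
        err = model.predict(X_te) - y_te
        print(f"{name}: mean error {err.mean():+.2f} +/- {err.std():.2f} mmHg")
    ```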

  19. Interdependency of the maximum range of flexion-extension of hand metacarpophalangeal joints.

    PubMed

    Gracia-Ibáñez, V; Vergara, M; Sancho-Bru, J-L

    2016-12-01

    Mobility of the fingers' metacarpophalangeal (MCP) joints depends on the posture of the adjacent ones. Current biomechanical hand models consider fixed ranges of movement at joints, regardless of posture, thus allowing non-realistic postures and generating wrong results in reach studies and forward dynamic analyses. This study provides data for more realistic hand models. The maximum voluntary extension (MVE) and flexion (MVF) of different combinations of MCP joints were measured covering their range of motion. Dependency of the MVF and MVE on the posture of the adjacent MCP joints was confirmed, and mathematical models were obtained through regression analyses (RMSE 7.7°).

  20. Are There Long-Run Effects of the Minimum Wage?

    PubMed Central

    Sorkin, Isaac

    2014-01-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices. PMID:25937790

  1. Are There Long-Run Effects of the Minimum Wage?

    PubMed

    Sorkin, Isaac

    2015-04-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices.

  2. Influence of crop type specification and spatial resolution on empirical modeling of field-scale Maize and Soybean carbon fluxes in the US Great Plains

    NASA Astrophysics Data System (ADS)

    McCombs, A. G.; Hiscox, A.; Wang, C.; Desai, A. R.

    2016-12-01

    A challenge in satellite land surface remote-sensing models of ecosystem carbon dynamics in agricultural systems is the lack of differentiation by crop type and management. This generalization can lead to large discrepancies between model predictions and eddy covariance flux tower observations of net ecosystem exchange of CO2 (NEE). The literature confirms that NEE varies remarkably among different crop types, making the generalization of agriculture in remote sensing based models inaccurate. Here, we address this inaccuracy by identifying and mapping net ecosystem exchange (NEE) in agricultural fields by comparing bulk modeling and modeling by crop type, and using this information to develop empirical models for future use. We focus on mapping NEE in maize and soybean fields in the US Great Plains at higher spatial resolution using the fusion of MODIS and Landsat surface reflectance. MODIS observed reflectance was downscaled using the ESTARFM downscaling methodology to match spatial scales to those found in Landsat data, which are more appropriate for carbon dynamics in agricultural fields. A multiple regression model was developed from surface reflectance of the downscaled MODIS and Landsat remote sensing values, calibrated against five FLUXNET/AMERIFLUX flux towers located on soybean and/or maize agricultural fields in the US Great Plains with multi-year NEE observations. Our new methodology improves upon bulk approximations to map and model carbon dynamics in maize and soybean fields, which have significantly different photosynthetic capacities.

  3. Empirical Behavioral Models to Support Alternative Tools for the Analysis of Mixed-Priority Pedestrian-Vehicle Interaction in a Highway Capacity Context

    PubMed Central

    Rouphail, Nagui M.

    2011-01-01

    This paper presents behavioral-based models for describing pedestrian gap acceptance at unsignalized crosswalks in a mixed-priority environment, where some drivers yield and some pedestrians cross in gaps. Logistic regression models are developed to predict the probability of pedestrian crossings as a function of vehicle dynamics, pedestrian assertiveness, and other factors. In combination with prior work on probabilistic yielding models, the results can be incorporated in a simulation environment, where they can more fully describe the interaction of these two modes. The approach is intended to supplement the HCM analytical procedure for locations where significant interaction occurs between drivers and pedestrians, including modern roundabouts. PMID:21643488
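
    A minimal sketch of the kind of logistic gap-acceptance model described above is given below; the predictors (gap, speed, an assertiveness flag), their coefficients and the simulated crossing decisions are assumptions for illustration only.

    ```python
    # Sketch: a logistic regression for pedestrian crossing probability as a function
    # of vehicle gap/speed and an assertiveness indicator, in the spirit of the
    # gap-acceptance models above. Predictors and coefficients are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(14)
    n = 1000
    gap = rng.uniform(1.0, 12.0, n)                     # available time gap (s)
    speed = rng.uniform(15.0, 50.0, n)                  # vehicle speed (km/h)
    assertive = rng.integers(0, 2, n)                   # 1 = assertive pedestrian
    logit = -4.0 + 0.9 * gap - 0.05 * speed + 1.0 * assertive
    crossed = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([gap, speed, assertive])
    model = LogisticRegression().fit(X, crossed)
    print("coefficients (gap, speed, assertive):", np.round(model.coef_[0], 2))
    print("P(cross | 5 s gap, 30 km/h, assertive):",
          round(model.predict_proba([[5.0, 30.0, 1]])[0, 1], 2))
    ```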

  4. MpTheory Java library: a multi-platform Java library for systems biology based on the Metabolic P theory.

    PubMed

    Marchetti, Luca; Manca, Vincenzo

    2015-04-15

    MpTheory Java library is an open-source project collecting a set of objects and algorithms for modeling observed dynamics by means of the Metabolic P (MP) theory, a mathematical theory introduced in 2004 for modeling biological dynamics. By means of the library, it is possible to model biological systems both at continuous and at discrete time. Moreover, the library comprises a set of regression algorithms for inferring MP models starting from time series of observations. To enhance the modeling experience, besides pure Java usage, the library can be used directly within the most popular computing environments, such as MATLAB, GNU Octave, Mathematica and R. The library is open-source and licensed under the GNU Lesser General Public License (LGPL) Version 3.0. Source code, binaries and complete documentation are available at http://mptheory.scienze.univr.it. Contact: luca.marchetti@univr.it, marchetti@cosbi.eu. Supplementary data are available at Bioinformatics online.

  5. Approximate Dynamic Programming Algorithms for United States Air Force Officer Sustainment

    DTIC Science & Technology

    2015-03-26

    level of correction needed. While paying bonuses has an easily calculable cost, RIFs have more subtle costs. Mone (1994) discovered that in a steady...a regression is performed utilizing instrumental variables to minimize Bellman error. This algorithm uses a set of basis functions to approximate the...transitioned to an all-volunteer force. Charnes et al. (1972) utilize a goal programming model for General Schedule civilian manpower management in the

  6. A Bayesian model averaging method for the derivation of reservoir operating rules

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai

    2015-09-01

    Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, functional forms of reservoir operating rules are always determined subjectively. As a result, the uncertainty of selecting the form and/or model involved in reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov Chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, superior to any of the reservoir operating rules, is used as the sample set from which to derive the rules; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual model of operating rules based on the optimal trajectories. It is revealed that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
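
    The BMA combination described above can be illustrated, in simplified form, by weighting a few candidate operating-rule regressions with BIC-based weights instead of the paper's EM/Markov Chain Monte Carlo treatment; the candidate models, the rough parameter counts and the synthetic storage-inflow-release data below are all assumptions.

    ```python
    # Sketch: combining three candidate operating-rule models with BIC-based
    # Bayesian model averaging weights, a simplified stand-in for the EM/MCMC
    # treatment in the paper. The candidate models and data are illustrative.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(6)
    n = 200
    storage = rng.uniform(0.3, 1.0, n)                 # normalized reservoir storage
    inflow = rng.uniform(0.0, 1.0, n)                  # normalized inflow
    release = 0.4 * storage + 0.5 * inflow**2 + 0.03 * rng.standard_normal(n)
    X = np.column_stack([storage, inflow])

    models = {
        "linear": LinearRegression(),
        "quadratic": make_pipeline(PolynomialFeatures(2), LinearRegression()),
        "svr": SVR(kernel="rbf", C=10.0, epsilon=0.01),
    }

    bic = {}
    for name, m in models.items():
        m.fit(X, release)
        rss = np.sum((m.predict(X) - release) ** 2)
        k = 3 if name == "linear" else (6 if name == "quadratic" else 10)  # rough parameter counts
        bic[name] = n * np.log(rss / n) + k * np.log(n)

    b = np.array(list(bic.values()))
    weights = np.exp(-0.5 * (b - b.min()))
    weights /= weights.sum()                            # BMA weights ~ posterior model probabilities
    print(dict(zip(bic.keys(), np.round(weights, 3))))

    # BMA point prediction = weighted average of the member predictions
    x_new = np.array([[0.7, 0.4]])
    bma_pred = sum(w * m.predict(x_new)[0] for w, m in zip(weights, models.values()))
    print("BMA release estimate:", round(bma_pred, 3))
    ```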

  7. The use of modelling to evaluate and adapt strategies for animal disease control.

    PubMed

    Saegerman, C; Porter, S R; Humblet, M F

    2011-08-01

    Disease is often associated with debilitating clinical signs, disorders or production losses in animals and/or humans, leading to severe socio-economic repercussions. This explains the high priority that national health authorities and international organisations give to selecting control strategies for and the eradication of specific diseases. When a control strategy is selected and implemented, an effective method of evaluating its efficacy is through modelling. To illustrate the usefulness of models in evaluating control strategies, the authors describe several examples in detail, including three examples of classification and regression tree modelling to evaluate and improve the early detection of disease: West Nile fever in equids, bovine spongiform encephalopathy (BSE) and multifactorial diseases, such as colony collapse disorder (CCD) in the United States. Also examined are regression modelling to evaluate skin test practices and the efficacy of an awareness campaign for bovine tuberculosis (bTB); mechanistic modelling to monitor the progress of a control strategy for BSE; and statistical nationwide modelling to analyse the spatio-temporal dynamics of bTB and search for potential risk factors that could be used to target surveillance measures more effectively. In the accurate application of models, an interdisciplinary rather than a multidisciplinary approach is required, with the fewest assumptions possible.

  8. Dynamics and rate-dependence of the spatial angle between ventricular depolarization and repolarization wave fronts during exercise ECG.

    PubMed

    Kenttä, Tuomas; Karsikas, Mari; Kiviniemi, Antti; Tulppo, Mikko; Seppänen, Tapio; Huikuri, Heikki V

    2010-07-01

    QRS/T angle and the cosine of the angle between QRS and T-wave vectors (TCRT), measured from the standard 12-lead electrocardiogram (ECG), have been used in risk stratification of patients. This study assessed the possible rate dependence of these variables during exercise ECG in healthy subjects. Forty healthy volunteers, 20 men and 20 women, aged 34.6 +/- 3.4, underwent exercise ECG testing. Twelve-lead ECG was recorded from each test subject and the spatial QRS/T angle and TCRT were automatically analyzed in a beat-to-beat manner with custom-made software. The individual TCRT/RR and QRST/RR patterns were fitted with seven different regression models, including a linear model and six nonlinear models. TCRT and QRS/T angle showed a significant rate dependence, with decreased values at higher heart rates (HR). In individual subjects, the second-degree polynomial model was the best regression model for the TCRT/RR and QRST/RR slopes. It provided the best fit for both exercise and recovery. The overall TCRT/RR and QRST/RR slopes were similar between men and women during exercise and recovery. However, women had predominantly higher TCRT and QRS/T values. With respect to time, the dynamics of TCRT differed significantly between men and women, with a steeper exercise slope in women (women, -0.04/min vs -0.02/min in men, P < 0.0001). In addition, evident hysteresis was observed in the TCRT/RR slopes, with higher TCRT values during exercise. The individual patterns of TCRT and QRS/T angle are affected by HR and gender. Delayed rate adaptation creates hysteresis in the TCRT/RR slopes.
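
    The second-degree polynomial TCRT/RR fit reported above reduces to a one-line polyfit; the sketch below shows that fit on simulated TCRT and RR values, which are illustrative only.

    ```python
    # Sketch: fitting the second-degree polynomial TCRT/RR relationship reported
    # above with numpy.polyfit. TCRT values and RR intervals here are simulated.
    import numpy as np

    rng = np.random.default_rng(7)
    rr = np.linspace(0.35, 1.0, 120)                   # RR interval (s), exercise to rest
    tcrt = -0.8 + 2.4 * rr - 1.2 * rr**2 + 0.05 * rng.standard_normal(rr.size)

    coeffs = np.polyfit(rr, tcrt, deg=2)               # [a2, a1, a0]
    tcrt_hat = np.polyval(coeffs, rr)
    r2 = 1.0 - np.sum((tcrt - tcrt_hat) ** 2) / np.sum((tcrt - tcrt.mean()) ** 2)
    print("quadratic coefficients:", np.round(coeffs, 3), " R^2:", round(r2, 3))
    ```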

  9. Poster — Thur Eve — 44: Linearization of Compartmental Models for More Robust Estimates of Regional Hemodynamic, Metabolic and Functional Parameters using DCE-CT/PET Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blais, AR; Dekaban, M; Lee, T-Y

    2014-08-15

    Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k3, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k3. Furthermore, these linearized models are solved with a non-negative least squares algorithm and together they provide other advantages including: 1) only one possible solution and they do not require a choice of starting parameter values, 2) parameter estimates are comparable in accuracy to those from nonlinear models, 3) significantly reduced computational time. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k3 estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k3 from noisy dynamic PET data.

  10. Observing Consistency in Online Communication Patterns for User Re-Identification

    PubMed Central

    Venter, Hein S.

    2016-01-01

    Comprehension of the statistical and structural mechanisms governing human dynamics in online interaction plays a pivotal role in online user identification, online profile development, and recommender systems. However, building a characteristic model of human dynamics on the Internet involves a complete analysis of the variations in human activity patterns, which is a complex process. This complexity is inherent in human dynamics and has not been extensively studied to reveal the structural composition of human behavior. A typical method of anatomizing such a complex system is viewing all independent interconnectivity that constitutes the complexity. An examination of the various dimensions of human communication pattern in online interactions is presented in this paper. The study employed reliable server-side web data from 31 known users to explore characteristics of human-driven communications. Various machine-learning techniques were explored. The results revealed that each individual exhibited a relatively consistent, unique behavioral signature and that the logistic regression model and model tree can be used to accurately distinguish online users. These results are applicable to one-to-one online user identification processes, insider misuse investigation processes, and online profiling in various areas. PMID:27918593

  11. Uncertainty of streamwater solute fluxes in five contrasting headwater catchments including model uncertainty and natural variability (Invited)

    NASA Astrophysics Data System (ADS)

    Aulenbach, B. T.; Burns, D. A.; Shanley, J. B.; Yanai, R. D.; Bae, K.; Wild, A.; Yang, Y.; Dong, Y.

    2013-12-01

    There are many sources of uncertainty in estimates of streamwater solute flux. Flux is the product of discharge and concentration (summed over time), each of which has measurement uncertainty of its own. Discharge can be measured almost continuously, but concentrations are usually determined from discrete samples, which increases uncertainty dependent on sampling frequency and how concentrations are assigned for the periods between samples. Gaps between samples can be estimated by linear interpolation or by models that use the relations between concentration and continuously measured or known variables such as discharge, season, temperature, and time. For this project, developed in cooperation with QUEST (Quantifying Uncertainty in Ecosystem Studies), we evaluated uncertainty for three flux estimation methods and three different sampling frequencies (monthly, weekly, and weekly plus event). The constituents investigated were dissolved NO3, Si, SO4, and dissolved organic carbon (DOC), solutes whose concentration dynamics exhibit strongly contrasting behavior. The evaluation was completed for a 10-year period at five small, forested watersheds in Georgia, New Hampshire, New York, Puerto Rico, and Vermont. Concentration regression models were developed for each solute at each of the three sampling frequencies for all five watersheds. Fluxes were then calculated using (1) a linear interpolation approach, (2) a regression-model method, and (3) the composite method, which combines the regression-model method for estimating concentrations and the linear interpolation method for correcting model residuals to the observed sample concentrations. We considered the best estimates of flux to be derived using the composite method at the highest sampling frequencies. We also evaluated the importance of sampling frequency and estimation method on flux estimate uncertainty; flux uncertainty was dependent on the variability characteristics of each solute and varied for different reporting periods (e.g. 10-year study period vs. annually vs. monthly). The usefulness of the two regression model based flux estimation approaches was dependent upon the amount of variance in concentrations the regression models could explain. Our results can guide the development of optimal sampling strategies by weighing sampling frequency with improvements in uncertainty in stream flux estimates for solutes with particular characteristics of variability. The appropriate flux estimation method is dependent on a combination of sampling frequency and the strength of concentration regression models. Sites: Biscuit Brook (Frost Valley, NY), Hubbard Brook Experimental Forest and LTER (West Thornton, NH), Luquillo Experimental Forest and LTER (Luquillo, Puerto Rico), Panola Mountain (Stockbridge, GA), Sleepers River Research Watershed (Danville, VT)
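
    The three estimation methods compared above (linear interpolation, a concentration regression model, and the composite method that adds interpolated model residuals to the regression prediction) can be sketched for one solute as follows; the log-discharge regression form, the weekly sampling scheme and the synthetic data are assumptions.

    ```python
    # Sketch: the three flux-estimation ideas compared above, reduced to daily
    # concentration estimates for one solute. The regression model here is a simple
    # log-discharge relation; the composite method adds interpolated model residuals.
    import numpy as np

    rng = np.random.default_rng(8)
    days = np.arange(365)
    discharge = 1.0 + 0.5 * np.sin(2 * np.pi * days / 365) + 0.1 * rng.standard_normal(365)
    discharge = np.clip(discharge, 0.1, None)          # m^3/s
    conc_true = 2.0 - 0.8 * np.log(discharge) + 0.1 * rng.standard_normal(365)  # mg/L

    sample_days = days[::7]                            # weekly grab samples
    c_obs = conc_true[sample_days]

    # (1) linear interpolation between samples
    c_interp = np.interp(days, sample_days, c_obs)

    # (2) regression model: concentration vs log(discharge), fit on sample days
    X = np.column_stack([np.ones(sample_days.size), np.log(discharge[sample_days])])
    beta, *_ = np.linalg.lstsq(X, c_obs, rcond=None)
    c_model = beta[0] + beta[1] * np.log(discharge)

    # (3) composite: model prediction plus linearly interpolated residuals
    resid = c_obs - c_model[sample_days]
    c_comp = c_model + np.interp(days, sample_days, resid)

    seconds_per_day = 86400.0
    for name, c in [("interpolation", c_interp), ("regression", c_model), ("composite", c_comp)]:
        flux = np.sum(c * discharge) * seconds_per_day / 1e6   # tonnes (mg/L * m^3/s integrated)
        print(name, round(flux, 1))
    ```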

  12. Hierarchical Bayesian Markov switching models with application to predicting spawning success of shovelnose sturgeon

    USGS Publications Warehouse

    Holan, S.H.; Davis, G.M.; Wildhaber, M.L.; DeLonay, A.J.; Papoulias, D.M.

    2009-01-01

    The timing of spawning in fish is tightly linked to environmental factors; however, these factors are not very well understood for many species. Specifically, little information is available to guide recruitment efforts for endangered species such as the sturgeon. Therefore, we propose a Bayesian hierarchical model for predicting the success of spawning of the shovelnose sturgeon which uses both biological and behavioural (longitudinal) data. In particular, we use data that were produced from a tracking study that was conducted in the Lower Missouri River. The data that were produced from this study consist of biological variables associated with readiness to spawn along with longitudinal behavioural data collected by using telemetry and archival data storage tags. These high frequency data are complex both biologically and in the underlying behavioural process. To accommodate such complexity we developed a hierarchical linear regression model that uses an eigenvalue predictor, derived from the transition probability matrix of a two-state Markov switching model with generalized auto-regressive conditional heteroscedastic dynamics. Finally, to minimize the computational burden that is associated with estimation of this model, a parallel computing approach is proposed.

  13. Occupational injuries in Italy: risk factors and long term trend (1951-98)

    PubMed Central

    Fabiano, B; Curro, F; Pastorino, R

    2001-01-01

    OBJECTIVES: Trends in the rates of total injuries and fatal accidents in the different sectors of Italian industries were explored during the period 1951-98. Causes and dynamics of injury were also studied for setting priorities for improving safety standards.

    METHODS: Data on occupational injuries from the National Organisation for Labour Injury Insurance were combined with data from the State Statistics Institute to highlight the interaction between the injury frequency index trend and the production cycle, that is, the evolution of industrial production throughout the years. Multiple regression with log-transformed rates was adopted to model the trends of occupational fatalities for each industrial group.

    RESULTS: The ratios between the linked indices of injury frequency and industrial production showed a good correlation over the whole period. A general decline in injuries was found across all sectors, with values ranging from 79.86% in the energy group to 23.32% in the textile group. In analysing fatalities, the trend seemed to be more clearly decreasing than the trend of total injuries, including temporary and permanent disabilities; the fatalities showed an exponential decrease according to multiple regression, with an annual decline equal to 4.42%.

    CONCLUSIONS: The overall probability of industrial fatal accidents in Italy tended to decrease exponentially by year. The most effective actions in preventing injuries were directed towards fatal accidents. By analysing the rates of fatal accident in the different sectors, appropriate targets and priorities for increased strategies to prevent injuries can be suggested. The analysis of the dynamics and the material causes of injuries showed that still more consideration should be given to human and organisational factors.

    Keywords: labour injuries; severity; regression model. PMID:11303083

  14. Photonic single nonlinear-delay dynamical node for information processing

    NASA Astrophysics Data System (ADS)

    Ortín, Silvia; San-Martín, Daniel; Pesquera, Luis; Gutiérrez, José Manuel

    2012-06-01

    An electro-optical system with a delay loop based on semiconductor lasers is investigated for information processing by performing numerical simulations. This system can replace a complex network of many nonlinear elements for the implementation of Reservoir Computing. We show that a single nonlinear-delay dynamical system has the basic properties to perform as reservoir: short-term memory and separation property. The computing performance of this system is evaluated for two prediction tasks: Lorenz chaotic time series and nonlinear auto-regressive moving average (NARMA) model. We sweep the parameters of the system to find the best performance. The results achieved for the Lorenz and the NARMA-10 tasks are comparable to those obtained by other machine learning methods.
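
    To make the NARMA-10 benchmark mentioned above concrete, the sketch below generates the standard NARMA-10 series and evaluates a plain ridge-regression readout on delayed inputs, i.e. without any photonic reservoir; the lag count, ridge penalty and series length are arbitrary choices.

    ```python
    # Sketch: the NARMA-10 benchmark task used above, with a plain ridge-regression
    # readout on delayed inputs in place of the photonic reservoir, just to make the
    # task and its evaluation concrete.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(9)
    T = 2000
    u = rng.uniform(0.0, 0.5, T)                       # input sequence
    y = np.zeros(T)
    for t in range(9, T - 1):                          # NARMA-10 recursion
        y[t + 1] = (0.3 * y[t] + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t] + 0.1)

    lags = 30
    X = np.column_stack([np.roll(u, k) for k in range(lags)])[lags:-1]
    target = y[lags + 1:]

    split = 1500
    ridge = Ridge(alpha=1e-4).fit(X[:split], target[:split])
    pred = ridge.predict(X[split:])
    nrmse = np.sqrt(np.mean((pred - target[split:]) ** 2) / np.var(target[split:]))
    print("NARMA-10 test NRMSE (linear readout, no reservoir):", round(nrmse, 3))
    ```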

  15. Spatio-temporal variations of nitric acid total columns from 9 years of IASI measurements - a driver study

    NASA Astrophysics Data System (ADS)

    Ronsmans, Gaétane; Wespes, Catherine; Hurtmans, Daniel; Clerbaux, Cathy; Coheur, Pierre-François

    2018-04-01

    This study aims to understand the spatial and temporal variability of HNO3 total columns in terms of explanatory variables. To achieve this, multiple linear regressions are used to fit satellite-derived time series of HNO3 daily averaged total columns. First, an analysis of the IASI 9-year time series (2008-2016) is conducted based on various equivalent latitude bands. The strong and systematic denitrification of the southern polar stratosphere is observed very clearly. It is also possible to distinguish, within the polar vortex, three regions which are differently affected by the denitrification. Three exceptional denitrification episodes in 2011, 2014 and 2016 are also observed in the Northern Hemisphere, due to unusually low Arctic temperatures. The time series are then fitted by multivariate regressions to identify which variables are responsible for the HNO3 variability in global distributions and time series, and to quantify their respective influence. Out of an ensemble of proxies (annual cycle, solar flux, quasi-biennial oscillation, multivariate ENSO index, Arctic and Antarctic oscillations and volume of polar stratospheric clouds), only those defined as significant (p value < 0.05) by a selection algorithm are retained for each equivalent latitude band. Overall, the regression gives a good representation of HNO3 variability, with especially good results at high latitudes (60-80 % of the observed variability explained by the model). The regressions show the dominance of annual variability in all latitudinal bands, which is related to specific chemistry and dynamics depending on the latitudes. We find that polar stratospheric clouds (PSCs) also have a major influence in the polar regions, and that their inclusion in the model improves the correlation coefficients and the residuals. However, there is still a relatively large portion of HNO3 variability that remains unexplained by the model, especially in the intertropical regions, where factors not included in the regression model (such as vegetation fires or lightning) may be at play.
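
    A simplified version of the proxy-regression-with-significance-selection procedure described above is sketched below as backward elimination on p-values with statsmodels; the synthetic proxy series, the chosen proxies and the elimination rule are illustrative assumptions, not the study's actual selection algorithm.

    ```python
    # Sketch: a backward-elimination multiple linear regression of a (simulated)
    # HNO3 time series onto a few of the proxies named above, dropping predictors
    # with p >= 0.05 as in the paper's selection step. Proxy series are synthetic.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(10)
    t = np.arange(9 * 365)                              # 9 years of daily values
    proxies = pd.DataFrame({
        "annual_cos": np.cos(2 * np.pi * t / 365.25),
        "annual_sin": np.sin(2 * np.pi * t / 365.25),
        "solar_flux": 80 + 40 * np.sin(2 * np.pi * t / (11 * 365.25)) + rng.standard_normal(t.size),
        "enso": rng.standard_normal(t.size),            # deliberately irrelevant here
    })
    hno3 = 10 + 3 * proxies["annual_cos"] + 0.01 * proxies["solar_flux"] \
           + 0.5 * rng.standard_normal(t.size)

    X = sm.add_constant(proxies)
    while True:                                         # backward elimination on p-values
        fit = sm.OLS(hno3, X).fit()
        pvals = fit.pvalues.drop("const")
        if pvals.empty or pvals.max() < 0.05:
            break
        X = X.drop(columns=pvals.idxmax())

    print(fit.params.round(3))                          # retained proxies and coefficients
    ```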

  16. Computer-aided molecular modeling techniques for predicting the stability of drug cyclodextrin inclusion complexes in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Faucci, Maria Teresa; Melani, Fabrizio; Mura, Paola

    2002-06-01

    Molecular modeling was used to investigate factors influencing complex formation between cyclodextrins and guest molecules and predict their stability through a theoretical model based on the search for a correlation between experimental stability constants (Ks) and some theoretical parameters describing complexation (docking energy, host-guest contact surfaces, intermolecular interaction fields) calculated from complex structures at a minimum conformational energy, obtained through stochastic methods based on molecular dynamic simulations. Naproxen, ibuprofen, ketoprofen and ibuproxam were used as model drug molecules. Multiple Regression Analysis allowed identification of the significant factors for the complex stability. A mathematical model (r=0.897) related log Ks with complex docking energy and lipophilic molecular fields of cyclodextrin and drug.

  17. Combustion Processes in Hybrid Rocket Engines

    NASA Technical Reports Server (NTRS)

    Venkateswaran,S.; Merkle, C. L.

    1996-01-01

    In recent years, there has been a resurgence of interest in the development of hybrid rocket engines for advanced launch vehicle applications. Hybrid propulsion systems use a solid fuel such as hydroxyl-terminated polybutadiene (HTPB) along with a gaseous/liquid oxidizer. The performance of hybrid combustors depends on the convective and radiative heat fluxes to the fuel surface, the rate of pyrolysis in the solid phase, and the turbulent combustion processes in the gaseous phases. These processes in combination specify the regression rates of the fuel surface and thereby the utilization efficiency of the fuel. In this paper, we employ computational fluid dynamics (CFD) techniques in order to gain a quantitative understanding of the physical trends in hybrid rocket combustors. The computational modeling is tailored to ongoing experiments at Penn State that employ a two-dimensional slab burner configuration. The coordinated computational/experimental effort enables model validation while providing an understanding of the experimental observations. Computations to date have included the full length geometry both with and without the aft nozzle section, as well as shorter length domains for extensive parametric characterization. HTPB is used as the fuel, with 1,3-butadiene being taken as the gaseous product of the pyrolysis. Pure gaseous oxygen is taken as the oxidizer. The fuel regression rate is specified using an Arrhenius rate expression, in which the fuel surface temperature is given by an energy balance involving gas-phase convection and radiation as well as thermal conduction in the solid phase. For the gas-phase combustion, a two-step global reaction is used. The standard kappa-epsilon model is used for turbulence closure. Radiation is presently treated using a simple diffusion approximation which is valid for large optical path lengths, representative of radiation from soot particles. Computational results are obtained to determine the trends in the fuel burning or regression rates as a function of the head-end oxidizer mass flux, G = rho(e)U(e), and the chamber pressure. Furthermore, computation of the full slab burner configuration has also been obtained for various stages of the burn. Comparisons with available experimental data from small scale tests conducted by General Dynamics-Thiokol-Rocketdyne suggest reasonable agreement in the predicted regression rates. Future work will include: (1) a model for soot generation in the flame for more quantitative radiative transfer modelling, (2) a parametric study of combustion efficiency, and (3) transient calculations to help determine the possible mechanisms responsible for combustion instability in hybrid rocket motors.

  18. CO2 flux determination by closed-chamber methods can be seriously biased by inappropriate application of linear regression

    NASA Astrophysics Data System (ADS)

    Kutzbach, L.; Schneider, J.; Sachs, T.; Giebels, M.; Nykänen, H.; Shurpali, N. J.; Martikainen, P. J.; Alm, J.; Wilmking, M.

    2007-07-01

    Closed (non-steady state) chambers are widely used for quantifying carbon dioxide (CO2) fluxes between soils or low-stature canopies and the atmosphere. It is well recognised that covering a soil or vegetation by a closed chamber inherently disturbs the natural CO2 fluxes by altering the concentration gradients between the soil, the vegetation and the overlying air. Thus, the driving factors of CO2 fluxes are not constant during the closed chamber experiment, and no linear increase or decrease of CO2 concentration over time within the chamber headspace can be expected. Nevertheless, linear regression has been applied for calculating CO2 fluxes in many recent, partly influential, studies. This approach was justified by keeping the closure time short and assuming the concentration change over time to be in the linear range. Here, we test whether the application of linear regression is really appropriate for estimating CO2 fluxes using closed chambers over short closure times and whether the application of nonlinear regression is necessary. We developed a nonlinear exponential regression model from diffusion and photosynthesis theory. This exponential model was tested with four different datasets of CO2 flux measurements (total number: 1764) conducted at three peatland sites in Finland and a tundra site in Siberia. The flux measurements were performed using transparent chambers on vegetated surfaces and opaque chambers on bare peat surfaces. Thorough analyses of residuals demonstrated that linear regression was frequently not appropriate for the determination of CO2 fluxes by closed-chamber methods, even if closure times were kept short. The developed exponential model was well suited for nonlinear regression of the concentration evolution over time, c(t), in the chamber headspace and for estimation of the initial CO2 fluxes at closure time for the majority of experiments. CO2 flux estimates by linear regression can be as low as 40% of the flux estimates of exponential regression for closure times of only two minutes, and even lower for longer closure times. The degree of underestimation increased with increasing CO2 flux strength and depended on soil and vegetation conditions, which can disturb not only the quantitative but also the qualitative evaluation of CO2 flux dynamics. The underestimation effect by linear regression was observed to be different for CO2 uptake and release situations, which can lead to stronger bias in the daily, seasonal and annual CO2 balances than in the individual fluxes. To avoid serious bias of CO2 flux estimates based on closed chamber experiments, we suggest further tests using published datasets and recommend the use of nonlinear regression models for future closed chamber studies.
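
    The contrast drawn above between linear and exponential flux estimates can be reproduced on synthetic chamber data; the sketch below uses a generic saturating exponential, not the paper's diffusion- and photosynthesis-based model, and the concentration values and closure time are invented.

    ```python
    # Sketch: comparing linear and exponential (saturating) fits of chamber
    # headspace CO2 concentration, and the initial-slope flux each implies. The
    # exponential form below is a generic saturating curve, not the paper's exact model.
    import numpy as np
    from scipy.optimize import curve_fit

    def chamber_exp(t, c_inf, dc, k):
        """c(t) = c_inf - dc*exp(-k*t); the initial slope (flux proxy) is dc*k."""
        return c_inf - dc * np.exp(-k * t)

    rng = np.random.default_rng(11)
    t = np.linspace(0, 120, 25)                        # 2-minute closure, seconds
    c = chamber_exp(t, 520.0, 140.0, 0.01) + 1.0 * rng.standard_normal(t.size)  # ppm

    # Linear fit over the whole closure
    slope_lin = np.polyfit(t, c, 1)[0]

    # Exponential fit and its slope at t = 0
    (c_inf, dc, k), _ = curve_fit(chamber_exp, t, c, p0=[500.0, 100.0, 0.005])
    slope_exp = dc * k

    print(f"linear slope: {slope_lin:.3f} ppm/s, exponential initial slope: {slope_exp:.3f} ppm/s")
    print(f"linear/exponential ratio: {slope_lin / slope_exp:.2f}")
    ```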

  19. Multimodel inference to quantify the relative importance of abiotic factors in the population dynamics of marine zooplankton

    NASA Astrophysics Data System (ADS)

    Everaert, Gert; Deschutter, Yana; De Troch, Marleen; Janssen, Colin R.; De Schamphelaere, Karel

    2018-05-01

    The effect of multiple stressors on marine ecosystems remains poorly understood, and most of the available knowledge relates to phytoplankton. To partly address this knowledge gap, we tested whether combining multimodel inference with generalized additive modelling could quantify the relative contribution of environmental variables to the population dynamics of a zooplankton species in the Belgian part of the North Sea. Hence, we have quantified the relative contribution of oceanographic variables (e.g. water temperature, salinity, nutrient concentrations, and chlorophyll a concentrations) and anthropogenic chemicals (i.e. polychlorinated biphenyls) to the density of Acartia clausi. We found that models with water temperature and chlorophyll a concentration explained ca. 73% of the population density of the marine copepod. Multimodel inference in combination with regression-based models is a generic way to disentangle and quantify multiple stressor-induced changes in marine ecosystems. Future-oriented simulations of copepod densities suggested increased copepod densities under predicted environmental changes.
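
    As a simplified stand-in for the multimodel-inference step described above, the sketch below computes Akaike weights over all candidate ordinary linear regressions of copepod density on a few environmental drivers and sums them into variable-importance scores; plain linear models replace the paper's generalized additive models, and the data are synthetic.

    ```python
    # Sketch: multimodel inference via Akaike weights over candidate regressions of
    # copepod density on environmental drivers, using plain linear models as a
    # simplified stand-in for the paper's generalized additive models.
    import itertools
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(15)
    n = 150
    env = pd.DataFrame({
        "temperature": rng.normal(12, 4, n),
        "chlorophyll": rng.lognormal(0.5, 0.5, n),
        "salinity": rng.normal(33, 1, n),
    })
    density = 2.0 + 0.3 * env["temperature"] + 1.2 * np.log(env["chlorophyll"]) \
              + 0.5 * rng.standard_normal(n)

    results = []
    for k in range(1, 4):                               # all non-empty predictor subsets
        for combo in itertools.combinations(env.columns, k):
            X = sm.add_constant(env[list(combo)])
            fit = sm.OLS(density, X).fit()
            results.append((combo, fit.aic))

    aics = np.array([aic for _, aic in results])
    w = np.exp(-0.5 * (aics - aics.min()))
    w /= w.sum()                                        # Akaike weights

    # Relative importance of each variable = sum of weights of models containing it
    for var in env.columns:
        importance = sum(wi for (combo, _), wi in zip(results, w) if var in combo)
        print(f"{var}: {importance:.2f}")
    ```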

  20. A multilinear regression methodology to analyze the effect of atmospheric and surface forcing on Arctic clouds

    NASA Astrophysics Data System (ADS)

    Boeke, R.; Taylor, P. C.; Li, Y.

    2017-12-01

    Arctic cloud amount as simulated in CMIP5 models displays large intermodel spread; models disagree on the processes important for cloud formation as well as on the radiative impact of clouds. The radiative response to cloud forcing can be better assessed when the drivers of Arctic cloud formation are known. Arctic cloud amount (CA) is a function of both atmospheric and surface conditions, and it is crucial to separate the influences of distinct processes to understand why the models differ. This study uses a multilinear regression methodology to determine cloud changes using three variables as predictors: lower tropospheric stability (LTS), 500-hPa vertical velocity (ω500), and sea ice concentration (SIC). These three explanatory variables were chosen because their effects on clouds can be attributed to distinct climate processes: LTS is a thermodynamic indicator of the relationship between clouds and atmospheric stability, SIC determines the interaction between clouds and the surface, and ω500 is a metric for dynamical change. Vertical, seasonal profiles of the necessary variables are obtained from the Coupled Model Intercomparison Project 5 (CMIP5) historical simulation, an ocean-atmosphere coupled model experiment forced with the best-estimate natural and anthropogenic radiative forcing from 1850-2005, and statistical significance tests are used to confirm the regression equation. A unique heuristic model will be constructed for each climate model and for observations, and models will be tested by their ability to capture the observed cloud amount and behavior. Lastly, the intermodel spread in Arctic cloud amount will be attributed to individual processes, ranking the relative contributions of each factor to shed light on emergent constraints in the Arctic cloud radiative effect.
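
    The three-predictor multilinear regression proposed above can be sketched directly; the monthly LTS, ω500 and SIC series below are synthetic, so the fitted and standardized coefficients only illustrate how the attribution would be computed.

    ```python
    # Sketch: the three-predictor multilinear regression described above, fitted to
    # simulated monthly Arctic data. LTS, omega500 and SIC values are synthetic and
    # only illustrate the attribution of cloud-amount variance to each predictor.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(12)
    n = 240                                             # 20 years of monthly means
    lts = rng.normal(18.0, 3.0, n)                      # lower tropospheric stability (K)
    omega500 = rng.normal(0.0, 0.03, n)                 # 500-hPa vertical velocity (Pa/s)
    sic = np.clip(rng.normal(0.7, 0.2, n), 0.0, 1.0)    # sea ice concentration (fraction)
    cloud = 70.0 - 1.2 * lts - 50.0 * omega500 - 10.0 * sic + 3.0 * rng.standard_normal(n)

    X = sm.add_constant(np.column_stack([lts, omega500, sic]))
    fit = sm.OLS(cloud, X).fit()
    print(fit.params)                                   # [intercept, LTS, omega500, SIC]
    print("standardized coefficients:",
          fit.params[1:] * np.array([lts.std(), omega500.std(), sic.std()]) / cloud.std())
    ```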

  1. Limb-darkening and the structure of the Jovian atmosphere

    NASA Technical Reports Server (NTRS)

    Newman, W. I.; Sagan, C.

    1978-01-01

    By observing the transit of various cloud features across the Jovian disk, limb-darkening curves were constructed for three regions in the 4.6 to 5.1 μm band. Several models currently employed in describing the radiative or dynamical properties of planetary atmospheres are here examined to understand their implications for limb-darkening. The statistical problem of fitting these models to the observed data is reviewed and methods for applying multiple regression analysis are discussed. Analysis of variance techniques are introduced to test the viability of a given physical process as a cause of the observed limb-darkening.

  2. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    USGS Publications Warehouse

    Balshi, M. S.; McGuire, A.D.; Duffy, P.; Flannigan, M.; Walsh, J.; Melillo, J.

    2009-01-01

    Fire is a common disturbance in the North American boreal forest that influences ecosystem structure and function. The temporal and spatial dynamics of fire are likely to be altered as climate continues to change. In this study, we ask the question: how will area burned in boreal North America by wildfire respond to future changes in climate? To evaluate this question, we developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was substantially more predictable in the western portion of boreal North America than in eastern Canada. Burned area was also not very predictable in areas of substantial topographic relief and in areas along the transition between boreal forest and tundra. At the scale of Alaska and western Canada, the empirical fire models explain on the order of 82% of the variation in annual area burned for the period 1960-2002. July temperature was the most frequently occurring predictor across all models, but the fuel moisture codes for the months June through August (as a group) entered the models as the most important predictors of annual area burned. To predict changes in the temporal and spatial dynamics of fire under future climate, the empirical fire models used output from the Canadian Climate Center CGCM2 global climate model to predict annual area burned through the year 2100 across Alaska and western Canada. Relative to 1991-2000, the results suggest that average area burned per decade will double by 2041-2050 and will increase on the order of 3.5-5.5 times by the last decade of the 21st century. To improve the ability to better predict wildfire across Alaska and Canada, future research should focus on incorporating additional effects of long-term and successional vegetation changes on area burned to account more fully for interactions among fire, climate, and vegetation dynamics. © 2009 The Authors. Journal compilation © 2009 Blackwell Publishing Ltd.

  3. Complex regression Doppler optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Elahi, Sahar; Gu, Shi; Thrane, Lars; Rollins, Andrew M.; Jenkins, Michael W.

    2018-04-01

    We introduce a new method to measure Doppler shifts more accurately and extend the dynamic range of Doppler optical coherence tomography (OCT). The two-point estimate of the conventional Doppler method is replaced with a regression that is applied to high-density B-scans in polar coordinates. We built a high-speed OCT system using a 1.68-MHz Fourier domain mode locked laser to acquire high-density B-scans (16,000 A-lines) at high enough frame rates (˜100 fps) to accurately capture the dynamics of the beating embryonic heart. Flow phantom experiments confirm that the complex regression lowers the minimum detectable velocity from 12.25 mm / s to 374 μm / s, whereas the maximum velocity of 400 mm / s is measured without phase wrapping. Complex regression Doppler OCT also demonstrates higher accuracy and precision compared with the conventional method, particularly when signal-to-noise ratio is low. The extended dynamic range allows monitoring of blood flow over several stages of development in embryos without adjusting the imaging parameters. In addition, applying complex averaging recovers hidden features in structural images.
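
    A simplified sketch of the underlying idea follows: the conventional two-point phase-difference estimate is replaced by a least-squares regression of the unwrapped phase over a dense block of A-lines, which suppresses phase noise. This is not the authors' exact complex-regression algorithm; the A-line rate matches the system described above, and all other values are illustrative.

```python
# A simplified sketch (assumptions noted above): regression of unwrapped phase versus
# time over many A-lines instead of a single adjacent-pair phase difference.
import numpy as np

rng = np.random.default_rng(2)
fs = 1.68e6                   # A-line rate (Hz), as in the described system
n_alines = 64                 # A-lines in the regression window (assumed)
f_true = 5.0e3                # synthetic Doppler shift (Hz)

t = np.arange(n_alines) / fs
signal = np.exp(1j * 2 * np.pi * f_true * t)                 # complex OCT signal at one pixel
signal = signal + 0.3 * (rng.standard_normal(n_alines) + 1j * rng.standard_normal(n_alines))

# conventional two-point estimate: mean phase difference between adjacent A-lines
dphi = np.angle(signal[1:] * np.conj(signal[:-1]))
f_two_point = dphi.mean() * fs / (2 * np.pi)

# regression estimate: slope of the unwrapped phase versus time
phase = np.unwrap(np.angle(signal))
f_regression = np.polyfit(t, phase, 1)[0] / (2 * np.pi)

print(f"two-point: {f_two_point:.0f} Hz, regression: {f_regression:.0f} Hz")
```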

  4. Nonlinear empirical model of gas humidity-related voltage dynamics of a polymer-electrolyte-membrane fuel cell stack

    NASA Astrophysics Data System (ADS)

    Meiler, M.; Andre, D.; Schmid, O.; Hofer, E. P.

    Intelligent energy management is a cost-effective key path to realize efficient automotive drive trains [R. O'Hayre, S.W. Cha, W. Colella, F.B. Prinz. Fuel Cell Fundamentals, John Wiley & Sons, Hoboken, 2006]. To develop operating strategies for fuel cell drive trains, precise and computationally efficient models of all system components, especially the fuel cell stack, are needed. If these models are further to be used in diagnostic or control applications, some major requirements must be fulfilled. First, the model must predict the mean fuel cell voltage very precisely in all possible operating conditions, even during transients. Second, the model output should be as smooth as possible to best support efficient optimization strategies for the complete system. Lastly, the model must be computationally efficient. For most applications, a difference between the real fuel cell voltage and the model output of less than 10 mV and 1000 calculations per second will be sufficient. In general, empirical models based on system identification offer better accuracy and consume fewer calculation resources than detailed models derived from theoretical considerations [J. Larminie, A. Dicks. Fuel Cell Systems Explained, John Wiley & Sons, West Sussex, 2003]. In this contribution, the dynamic behaviour of the mean cell voltage of a polymer-electrolyte-membrane fuel cell (PEMFC) stack due to variations in the humidity of the cell's reactant gases is investigated. The overall model structure, a so-called general Hammerstein model (or Uryson model), was introduced recently in [M. Meiler, O. Schmid, M. Schudy, E.P. Hofer. Dynamic fuel cell stack model for real-time simulation based on system identification, J. Power Sources 176 (2007) 523-528]. The fuel cell mean voltage is calculated as the sum of a stationary and a dynamic voltage component. The stationary component of the cell voltage is represented by a lookup table and the dynamic voltage by a nonlinear transfer function placed in parallel. A suitable experimental setup to apply fast variations of gas humidity is introduced and used to investigate a 10-cell PEMFC stack under various operating conditions. Using methods such as stepwise multiple regression, a good mathematical description with a reduced number of free parameters is achieved.
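
    A minimal sketch of a Hammerstein-type voltage model in the spirit described above: a stationary lookup table over current density and relative humidity, plus a dynamic correction from a static input nonlinearity followed by a first-order discrete transfer function. All table values, time constants and gains are illustrative assumptions, not the published parameters.

```python
# A minimal Hammerstein-type sketch under stated assumptions (not the published model).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# stationary lookup table: mean cell voltage (V) vs current density (A cm-2) and RH (%)
i_grid = np.array([0.0, 0.2, 0.4, 0.8, 1.2])
rh_grid = np.array([20.0, 50.0, 80.0, 100.0])
v_table = np.array([[0.95, 0.97, 0.98, 0.98],
                    [0.82, 0.85, 0.87, 0.87],
                    [0.75, 0.79, 0.81, 0.82],
                    [0.63, 0.70, 0.73, 0.74],
                    [0.50, 0.60, 0.65, 0.66]])
v_static = RegularGridInterpolator((i_grid, rh_grid), v_table)

def hammerstein_voltage(i_seq, rh_seq, dt=0.1, tau=8.0, gain=0.04):
    """Stationary lookup plus first-order dynamics on a static nonlinearity of RH."""
    a = np.exp(-dt / tau)                      # discrete first-order pole (assumed tau)
    x_dyn, v_out = 0.0, []
    for i_k, rh_k in zip(i_seq, rh_seq):
        u = np.tanh((rh_k - 60.0) / 20.0)      # assumed static input nonlinearity
        x_dyn = a * x_dyn + (1 - a) * u        # linear dynamic block
        v_out.append(v_static([i_k, rh_k]).item() + gain * x_dyn)
    return np.array(v_out)

# usage: step in cathode humidity at constant current density
t = np.arange(0, 60, 0.1)
rh = np.where(t < 20, 40.0, 90.0)
v = hammerstein_voltage(np.full_like(t, 0.6), rh)
```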

  5. Response of macroinvertebrate communities to temporal dynamics of pesticide mixtures: A case study from the Sacramento River watershed, California.

    PubMed

    Chiu, Ming-Chih; Hunt, Lisa; Resh, Vincent H

    2016-12-01

    Pesticide pollution from agricultural field run-off or spray drift has been documented to impact river ecosystems worldwide. However, there is limited data on short- and long-term effects of repeated pulses of pesticide mixtures on biotic assemblages in natural systems. We used reported pesticide application data as input to a hydrological fate and transport model (Soil and Water Assessment Tool) to simulate spatiotemporal dynamics of pesticide mixtures in streams on a daily time-step. We then applied regression models to explore the relationship between macroinvertebrate communities and pesticide dynamics in the Sacramento River watershed of California during 2002-2013. We found that both maximum and average pesticide toxic units were important in determining impacts on macroinvertebrates, and that the compositions of macroinvertebrates trended toward taxa having higher resilience and resistance to pesticide exposure, based on the SPEcies At Risk (SPEARpesticides) index. Results indicate that risk-assessment efforts can be improved by considering both short- and long-term effects of pesticide mixtures on macroinvertebrate community composition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Dynamic calibration and analysis of crack tip propagation in energetic materials using real-time radiography

    NASA Astrophysics Data System (ADS)

    Butt, Ali

    Crack propagation in a solid rocket motor environment is difficult to measure directly. This experimental and analytical study evaluated the viability of real-time radiography for detecting bore regression and propellant crack propagation speed. The scope included the quantitative interpretation of crack tip velocity from simulated radiographic images of a burning, center-perforated grain and actual real-time radiographs taken on a rapid-prototyped model that dynamically produced the surface movements modeled in the simulation. The simplified motor simulation portrayed a bore crack that propagated radially at a speed that was 10 times the burning rate of the bore. Comparing the experimental image interpretation with the calibrated surface inputs, measurement accuracies were quantified. The average measurements of the bore radius were within 3% of the calibrated values with a maximum error of 7%. The crack tip speed could be characterized with image processing algorithms, but not with the dynamic calibration data. The laboratory data revealed that noise in the transmitted X-Ray intensity makes sensing the crack tip propagation using changes in the centerline transmitted intensity level impractical using the algorithms employed.

  7. Forecasting financial asset processes: stochastic dynamics via learning neural networks.

    PubMed

    Giebel, S; Rainer, M

    2010-01-01

    Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component into their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations, due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, performed often without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The back propagation in training the previous weights is limited to a certain memory length (in the examples we consider 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.

  8. Estimating the Mechanical Behavior of the Knee Joint during Crouch Gait: Implications for Real-Time Motor Control of Robotic Knee Orthoses

    PubMed Central

    Damiano, Diane L.; Bulea, Thomas C.

    2016-01-01

    Individuals with cerebral palsy frequently exhibit crouch gait, a pathological walking pattern characterized by excessive knee flexion. Knowledge of the knee joint moment during crouch gait is necessary for the design and control of assistive devices used for treatment. Our goal was to 1) develop statistical models to estimate knee joint moment extrema and dynamic stiffness during crouch gait, and 2) use the models to estimate the instantaneous joint moment during weight-acceptance. We retrospectively computed knee moments from 10 children with crouch gait and used stepwise linear regression to develop statistical models describing the knee moment features. The models explained at least 90% of the response value variability: peak moment in early (99%) and late (90%) stance, and dynamic stiffness of weight-acceptance flexion (94%) and extension (98%). We estimated knee extensor moment profiles from the predicted dynamic stiffness and instantaneous knee angle. This approach captured the timing and shape of the computed moment (root-mean-squared error: 2.64 Nm); including the predicted early-stance peak moment as a correction factor improved model performance (root-mean-squared error: 1.37 Nm). Our strategy provides a practical, accurate method to estimate the knee moment during crouch gait, and could be used for real-time, adaptive control of robotic orthoses. PMID:27101612

  9. Functional data analysis for dynamical system identification of behavioral processes.

    PubMed

    Trail, Jessica B; Collins, Linda M; Rivera, Daniel E; Li, Runze; Piper, Megan E; Baker, Timothy B

    2014-06-01

    Efficient new technology has made it straightforward for behavioral scientists to collect anywhere from several dozen to several thousand dense, repeated measurements on one or more time-varying variables. These intensive longitudinal data (ILD) are ideal for examining complex change over time but present new challenges that illustrate the need for more advanced analytic methods. For example, in ILD the temporal spacing of observations may be irregular, and individuals may be sampled at different times. Also, it is important to assess both how the outcome changes over time and the variation between participants' time-varying processes to make inferences about a particular intervention's effectiveness within the population of interest. The methods presented in this article integrate 2 innovative ILD analytic techniques: functional data analysis and dynamical systems modeling. An empirical application is presented using data from a smoking cessation clinical trial. Study participants provided 42 daily assessments of pre-quit and post-quit withdrawal symptoms. Regression splines were used to approximate smooth functions of craving and negative affect and to estimate the variables' derivatives for each participant. We then modeled the dynamics of nicotine craving using standard input-output dynamical systems models. These models provide a more detailed characterization of the post-quit craving process than do traditional longitudinal models, including information regarding the type, magnitude, and speed of the response to an input. The results, in conjunction with standard engineering control theory techniques, could potentially be used by tobacco researchers to develop a more effective smoking intervention. PsycINFO Database Record (c) 2014 APA, all rights reserved.
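
    The spline step described above can be sketched as follows: a smoothing spline approximates a participant's daily craving series, and its derivative then feeds an input-output dynamical model. The series, smoothing factor and model form are illustrative assumptions.

```python
# A brief sketch of the regression-spline step: smooth a 42-day craving series and
# estimate its derivative. Data and the smoothing factor are illustrative assumptions.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
days = np.arange(42)                                          # 42 daily assessments
craving = 6 * np.exp(-days / 15.0) + 2.0 + rng.normal(0, 0.4, days.size)

spl = UnivariateSpline(days, craving, k=3, s=len(days) * 0.2)  # cubic smoothing spline
craving_smooth = spl(days)
craving_rate = spl.derivative()(days)                          # d(craving)/dt per day

# craving_rate can then enter a first-order input-output model such as
#   tau * dy/dt + y = K * u(t), with negative affect u(t) as the input (assumed form)
```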

  10. A new model-free index of dynamic cerebral blood flow autoregulation.

    PubMed

    Chacón, Max; Jara, José Luis; Panerai, Ronney B

    2014-01-01

    The classic dynamic autoregulatory index (ARI), proposed by Aaslid and Tiecks, is one of the most widely used methods to assess the efficiency of dynamic cerebral autoregulation. Although this index is often used in clinical research and is also included in some commercial equipment, it exhibits considerable intra-subject variability, and has the tendency to produce false positive results in clinical applications. An alternative index of dynamic cerebral autoregulation is proposed, which overcomes most of the limitations of the classic method and also has the advantage of being model-free. This new index uses two parameters that are obtained directly from the response signal of the cerebral blood flow velocity to a transient decrease in arterial blood pressure provoked by the sudden release of bilateral thigh cuffs, and a third parameter measuring the difference in slope of this response and the change in arterial blood pressure achieved. With the values of these parameters, a corresponding classic autoregulatory index value could be calculated by using a linear regression model built from theoretical curves generated with the Aaslid-Tiecks model. In 16 healthy subjects who underwent repeated thigh-cuff manoeuvres, the model-free approach exhibited significantly lower intra-subject variability, as measured by the unbiased coefficient of variation, than the classic autoregulatory index (p = 0.032) and the Rate of Return (p<0.001), another measure of cerebral autoregulation used for this type of systemic pressure stimulus, from 39.23%±41.91% and 55.31%±31.27%, respectively, to 15.98%±7.75%.

  11. A New Model-Free Index of Dynamic Cerebral Blood Flow Autoregulation

    PubMed Central

    Chacón, Max; Jara, José Luis; Panerai, Ronney B.

    2014-01-01

    The classic dynamic autoregulatory index (ARI), proposed by Aaslid and Tiecks, is one of the most widely used methods to assess the efficiency of dynamic cerebral autoregulation. Although this index is often used in clinical research and is also included in some commercial equipment, it exhibits considerable intra-subject variability, and has the tendency to produce false positive results in clinical applications. An alternative index of dynamic cerebral autoregulation is proposed, which overcomes most of the limitations of the classic method and also has the advantage of being model-free. This new index uses two parameters that are obtained directly from the response signal of the cerebral blood flow velocity to a transient decrease in arterial blood pressure provoked by the sudden release of bilateral thigh cuffs, and a third parameter measuring the difference in slope of this response and the change in arterial blood pressure achieved. With the values of these parameters, a corresponding classic autoregulatory index value could be calculated by using a linear regression model built from theoretical curves generated with the Aaslid-Tiecks model. In 16 healthy subjects who underwent repeated thigh-cuff manoeuvres, the model-free approach exhibited significantly lower intra-subject variability, as measured by the unbiased coefficient of variation, than the classic autoregulatory index (p = 0.032) and the Rate of Return (p<0.001), another measure of cerebral autoregulation used for this type of systemic pressure stimulus, from 39.23%±41.91% and 55.31%±31.27%, respectively, to 15.98%±7.75%. PMID:25313519

  12. Modeling the temporal dynamics of intertidal benthic infauna biomass with environmental factors: Impact assessment of land reclamation.

    PubMed

    Yang, Ye; Chui, Ting Fong May; Shen, Ping Ping; Yang, Yang; Gu, Ji Dong

    2018-03-15

    Anthropogenic activities such as land reclamation are threatening tidal marshes worldwide. This study's hypothesis is that land reclamation in a semi-enclosed bay alters the seasonal dynamics of intertidal benthic infauna, which is a key component in the tidal marsh ecosystem. Mai Po Tidal Marsh, Deep Bay, Pearl River Estuary, China was used as a case study to evaluate the hypothesis. Ecological models that simulate benthic biomass dynamics with governing environmental factors were developed, and various scenario experiments were conducted to evaluate the impact of reclamations. Environmental variables, selected from the areas of hydrodynamics, meteorology, and water quality based on correlation analysis, were used to generate Bayesian regression models for biomass prediction. The best-performing model, which considered average water age (i.e., a hydrodynamic indicator of estuarine circulation) in the previous month, salinity variation (i.e., standard deviation of salinity), and the total sunny period in the current month, captured well both seasonal and yearly trends in the benthic infauna observations from 2002 to 2008. This model was then used to simulate biomass dynamics with varying inputs of water age and salinity variation from coastal numerical models of different reclamation scenarios. The simulation results suggest that the reclamation in 2007 decreased the spatial and annual average benthic infauna biomass in the tidal marsh by 20%, which agreed with the 28% biomass decrease recorded by field survey. The range of seasonal biomass variation also narrowed significantly, from 2.1-230.5 g/m2 (without any reclamation) to 1.2-131.1 g/m2 (after the 2007 reclamation), which further demonstrates the substantial ecological impact of reclamation. The ecological model developed in this study could simulate seasonal biomass dynamics and evaluate the ecological impact of reclamation projects. It can therefore be applied to evaluate the ecological impact of coastal engineering projects for tidal marsh management, conservation, and restoration. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Studies on kinetics of water quality factors to establish water transparency model in Neijiang River, China.

    PubMed

    Li, Ronghui; Pan, Wei; Guo, Jinchuan; Pang, Yong; Wu, Jianqiang; Li, Yiping; Pan, Baozhu; Ji, Yong; Ding, Ling

    2014-05-01

    The basis for submerged plant restoration in surface water is to understand the complicated dynamic mechanisms governing water transparency. In this paper, an impact factor analysis of water transparency identified suspended sediment, dissolved organic matter and algae as the three main impact factors for water transparency of the Neijiang River in Eastern China. A multiple regression equation relating water transparency to sediment concentration, permanganate index and chlorophyll-a concentration was then developed. Considering the complicated transport and transformation of suspended sediment, dissolved organic matter and algae, numerical models of each were developed to simulate the dynamic processes. A water transparency numerical model was finally developed by coupling the sediment, water quality and algae models. The results showed that suspended sediment was the key factor influencing water transparency of the Neijiang River; the influences of water quality (indicated by chemical oxygen demand) and algal concentration (indicated by chlorophyll a) were indeterminate at low concentrations but became more pronounced at high concentrations; all three factors directly influenced water transparency.

  14. Principal dynamic mode analysis of neural mass model for the identification of epileptic states

    NASA Astrophysics Data System (ADS)

    Cao, Yuzhen; Jin, Liu; Su, Fei; Wang, Jiang; Deng, Bin

    2016-11-01

    The detection of epileptic seizures in Electroencephalography (EEG) signals is significant for the diagnosis and treatment of epilepsy. In this paper, in order to obtain characteristics of various epileptiform EEGs that may differentiate different states of epilepsy, the concept of Principal Dynamic Modes (PDMs) was incorporated to an autoregressive model framework. First, the neural mass model was used to simulate the required intracerebral EEG signals of various epileptiform activities. Then, the PDMs estimated from the nonlinear autoregressive Volterra models, as well as the corresponding Associated Nonlinear Functions (ANFs), were used for the modeling of epileptic EEGs. The efficient PDM modeling approach provided physiological interpretation of the system. Results revealed that the ANFs of the 1st and 2nd PDMs for the auto-regressive input exhibited evident differences among different states of epilepsy, where the ANFs of the sustained spikes' activity encountered at seizure onset or during a seizure were the most differentiable from that of the normal state. Therefore, the ANFs may be characteristics for the classification of normal and seizure states in the clinical detection of seizures and thus provide assistance for the diagnosis of epilepsy.

  15. A robust empirical seasonal prediction of winter NAO and surface climate.

    PubMed

    Wang, L; Ting, M; Kushner, P J

    2017-03-21

    A key determinant of winter weather and climate in Europe and North America is the North Atlantic Oscillation (NAO), the dominant mode of atmospheric variability in the Atlantic domain. Skilful seasonal forecasting of the surface climate in both Europe and North America is reflected largely in how accurately models can predict the NAO. Most dynamical models, however, have limited skill in seasonal forecasts of the winter NAO. A new empirical model is proposed for the seasonal forecast of the winter NAO that exhibits higher skill than current dynamical models. The empirical model provides robust and skilful prediction of the December-January-February (DJF) mean NAO index using a multiple linear regression (MLR) technique with autumn conditions of sea-ice concentration, stratospheric circulation, and sea-surface temperature. The predictability is, for the most part, derived from the relatively long persistence of sea ice in the autumn. The lower stratospheric circulation and sea-surface temperature appear to play more indirect roles through a series of feedbacks among systems driving NAO evolution. This MLR model also provides skilful seasonal outlooks of winter surface temperature and precipitation over many regions of Eurasia and eastern North America.
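
    A hedged sketch of the regression idea follows: the DJF NAO index is regressed on autumn sea-ice, lower-stratospheric circulation and SST indices. The predictor series are synthetic placeholders, not the study's data, and in practice the forecast skill would be assessed with cross-validation.

```python
# A hedged sketch of the MLR idea; all predictor series are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_years = 36
X = np.column_stack([
    rng.standard_normal(n_years),   # autumn sea-ice concentration index (placeholder)
    rng.standard_normal(n_years),   # autumn lower-stratospheric circulation index
    rng.standard_normal(n_years),   # autumn sea-surface temperature index
])
nao_djf = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.5, n_years)

model = sm.OLS(nao_djf, sm.add_constant(X)).fit()
print(model.params)                               # intercept and three regression coefficients
nao_forecast = model.predict(sm.add_constant(X))  # in practice: cross-validated hindcasts
```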

  16. Multi-Axis Identifiability Using Single-Surface Parameter Estimation Maneuvers on the X-48B Blended Wing Body

    NASA Technical Reports Server (NTRS)

    Ratnayake, Nalin A.; Koshimoto, Ed T.; Taylor, Brian R.

    2011-01-01

    The problem of parameter estimation on hybrid-wing-body type aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This fact adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of system inputs must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, asymmetric, single-surface maneuvers are used to excite multiple axes of aircraft motion simultaneously. Time history reconstructions of the moment coefficients computed by the solved regression models are then compared to each other in order to assess relative model accuracy. The reduced flight-test time required for inner surface parameter estimation using multi-axis methods was found to come at the cost of slightly reduced accuracy and statistical confidence for linear regression methods. Since the multi-axis maneuvers captured parameter estimates similar to both longitudinal and lateral-directional maneuvers combined, the number of test points required for the inner, aileron-like surfaces could in theory have been reduced by 50%. While trends were similar, individual parameters estimated by the multi-axis model typically differed from those estimated by a single-axis model by an average absolute difference of roughly 15-20%, with decreased statistical significance. The multi-axis model exhibited an increase in overall fit error of roughly 1-5% for the linear regression estimates with respect to the single-axis model, when applied to flight data designed for each, respectively.

  17. Catchments as non-linear filters: evaluating data-driven approaches for spatio-temporal predictions in ungauged basins

    NASA Astrophysics Data System (ADS)

    Bellugi, D. G.; Tennant, C.; Larsen, L.

    2016-12-01

    Catchment and climate heterogeneity complicate prediction of runoff across time and space, and resulting parameter uncertainty can lead to large accumulated errors in hydrologic models, particularly in ungauged basins. Recently, data-driven modeling approaches have been shown to avoid the accumulated uncertainty associated with many physically-based models, providing an appealing alternative for hydrologic prediction. However, the effectiveness of different methods in hydrologically and geomorphically distinct catchments, and the robustness of these methods to changing climate and changing hydrologic processes remain to be tested. Here, we evaluate the use of machine learning techniques to predict daily runoff across time and space using only essential climatic forcing (e.g. precipitation, temperature, and potential evapotranspiration) time series as model input. Model training and testing was done using a high quality dataset of daily runoff and climate forcing data for 25+ years for 600+ minimally-disturbed catchments (drainage area range 5-25,000 km2, median size 336 km2) that cover a wide range of climatic and physical characteristics. Preliminary results using Support Vector Regression (SVR) suggest that in some catchments this nonlinear-based regression technique can accurately predict daily runoff, while the same approach fails in other catchments, indicating that the representation of climate inputs and/or catchment filter characteristics in the model structure need further refinement to increase performance. We bolster this analysis by using Sparse Identification of Nonlinear Dynamics (a sparse symbolic regression technique) to uncover the governing equations that describe runoff processes in catchments where SVR performed well and for ones where it performed poorly, thereby enabling inference about governing processes. This provides a robust means of examining how catchment complexity influences runoff prediction skill, and represents a contribution towards the integration of data-driven inference and physically-based models.
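
    The data-driven approach can be sketched as below under stated assumptions: support vector regression of daily runoff on precipitation, temperature and potential evapotranspiration only, scored with a Nash-Sutcliffe efficiency on held-out days. The synthetic data stand in for the 600+ catchment dataset.

```python
# A minimal SVR sketch of the data-driven runoff prediction idea; data are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_days = 3650
precip = rng.gamma(0.5, 6.0, n_days)           # mm d-1
temp = 10 + 12 * np.sin(2 * np.pi * np.arange(n_days) / 365) + rng.normal(0, 2, n_days)
pet = np.clip(0.3 * temp + rng.normal(0, 0.5, n_days), 0, None)
runoff = 0.4 * precip - 0.1 * pet + rng.normal(0, 0.5, n_days)   # toy catchment filter

X = np.column_stack([precip, temp, pet])
train, test = slice(0, 3000), slice(3000, None)

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
svr.fit(X[train], runoff[train])

# Nash-Sutcliffe efficiency on the held-out period
resid = runoff[test] - svr.predict(X[test])
nse = 1 - np.sum(resid**2) / np.sum((runoff[test] - runoff[test].mean())**2)
print(f"NSE on held-out days: {nse:.2f}")
```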

  18. Multi-decadal trend and space-time variability of sea level over the Indian Ocean since the 1950s: impact of decadal climate modes

    NASA Astrophysics Data System (ADS)

    Han, W.; Stammer, D.; Meehl, G. A.; Hu, A.; Sienz, F.

    2016-12-01

    Sea level varies on decadal and multi-decadal timescales over the Indian Ocean. The variations are not spatially uniform, and can deviate considerably from the global mean sea level rise (SLR) due to various geophysical processes. One of these processes is the change of ocean circulation, which can be partly attributed to natural internal modes of climate variability. Over the Indian Ocean, the most influential climate modes on decadal and multi-decadal timescales are the Interdecadal Pacific Oscillation (IPO) and decadal variability of the Indian Ocean dipole (IOD). Here, we first analyze observational datasets to investigate the impacts of the IPO and IOD on spatial patterns of decadal and interdecadal (hereafter decadal) sea level variability and the multi-decadal trend over the Indian Ocean since the 1950s, using a new statistical approach, the Bayesian Dynamical Linear regression Model (DLM). The Bayesian DLM overcomes the limitation of "time-constant (static)" regression coefficients in the conventional multiple linear regression model by allowing the coefficients to vary with time, thereby measuring the "time-evolving (dynamical)" relationship between climate modes and sea level. For the multi-decadal sea level trend since the 1950s, our results show that climate modes and non-climate modes (the part that cannot be explained by climate modes) have comparable contributions in magnitude but with different spatial patterns, with each dominating different regions of the Indian Ocean. For decadal variability, climate modes are the major contributors to sea level variations over most regions of the tropical Indian Ocean. The relative importance of the IPO and decadal variability of the IOD, however, varies spatially. For example, while IOD decadal variability dominates the IPO in the eastern equatorial basin (85E-100E, 5S-5N), the IPO dominates the IOD in causing sea level variations in the tropical southwest Indian Ocean (45E-65E, 12S-2S). To help decipher the possible contribution of external forcing to the multi-decadal sea level trend and decadal variability, we also analyze model output from NCAR's Community Earth System Model (CESM) Large Ensemble Experiments and compare the results with our observational analyses.
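
    A simplified sketch of the dynamical-linear-model idea follows: regression coefficients follow a random walk and are updated sequentially with a Kalman filter, so the relationship between climate modes and sea level can evolve in time. This is a filtering illustration, not the paper's full Bayesian estimation; all series and variances are synthetic assumptions.

```python
# A simplified time-varying regression (DLM-style) sketch with a Kalman filter.
# Series, noise variances and the random-walk state model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
n = 60                                       # years of annual means (synthetic)
ipo = rng.standard_normal(n)                 # IPO index (placeholder)
iod = rng.standard_normal(n)                 # decadal IOD index (placeholder)
beta_true = np.column_stack([np.linspace(0.8, 0.2, n),    # slowly drifting coefficients
                             np.linspace(-0.1, 0.5, n)])
X = np.column_stack([ipo, iod])
sea_level = np.einsum("ij,ij->i", beta_true, X) + rng.normal(0, 0.3, n)

V, W = 0.3**2, 0.02 * np.eye(2)              # observation and state noise (assumed)
m, C = np.zeros(2), np.eye(2)                # prior mean and covariance of coefficients
beta_hat = np.zeros((n, 2))
for t in range(n):
    R = C + W                                # prior covariance after random-walk step
    f = X[t] @ m                             # one-step forecast of sea level
    Q = X[t] @ R @ X[t] + V                  # forecast variance
    A = R @ X[t] / Q                         # Kalman gain
    m = m + A * (sea_level[t] - f)           # posterior mean of the coefficients
    C = R - np.outer(A, A) * Q               # posterior covariance
    beta_hat[t] = m                          # time-evolving regression coefficients
```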

  19. Dynamic spatiotemporal analysis of indigenous dengue fever at street-level in Guangzhou city, China

    PubMed Central

    Xia, Yao; Zhang, Yingtao; Huang, Xiaodong; Huang, Jiawei; Nie, Enqiong; Jing, Qinlong; Wang, Guoling; Yang, Zhicong; Hu, Wenbiao

    2018-01-01

    Background This study aimed to investigate the spatiotemporal clustering and socio-environmental factors associated with dengue fever (DF) incidence rates at street level in Guangzhou city, China. Methods A spatiotemporal scan technique was applied to identify high-risk regions of DF. A multiple regression model was used to identify the socio-environmental factors associated with DF infection. A Poisson regression model was employed to examine the spatiotemporal patterns in the spread of DF. Results Spatial clusters of DF were primarily concentrated in the southwest part of Guangzhou city. Age group (65+ years) (Odds Ratio (OR) = 1.49, 95% Confidence Interval (CI) = 1.13 to 2.03), floating population (OR = 1.09, 95% CI = 1.05 to 1.15), low-education (OR = 1.08, 95% CI = 1.01 to 1.16) and non-agriculture (OR = 1.07, 95% CI = 1.03 to 1.11) were associated with DF transmission. Poisson regression results indicated that changes in DF incidence rates were significantly associated with longitude (β = -5.08, P<0.01) and latitude (β = -1.99, P<0.01). Conclusions The study demonstrated that socio-environmental factors may play an important role in DF transmission in Guangzhou. As the geographic range of notified DF has significantly expanded over recent years, an early warning system based on spatiotemporal models with socio-environmental factors is urgently needed to improve the effectiveness and efficiency of dengue control and prevention. PMID:29561835

  20. Dynamic spatiotemporal analysis of indigenous dengue fever at street-level in Guangzhou city, China.

    PubMed

    Liu, Kangkang; Zhu, Yanshan; Xia, Yao; Zhang, Yingtao; Huang, Xiaodong; Huang, Jiawei; Nie, Enqiong; Jing, Qinlong; Wang, Guoling; Yang, Zhicong; Hu, Wenbiao; Lu, Jiahai

    2018-03-01

    This study aimed to investigate the spatiotemporal clustering and socio-environmental factors associated with dengue fever (DF) incidence rates at street level in Guangzhou city, China. A spatiotemporal scan technique was applied to identify high-risk regions of DF. A multiple regression model was used to identify the socio-environmental factors associated with DF infection. A Poisson regression model was employed to examine the spatiotemporal patterns in the spread of DF. Spatial clusters of DF were primarily concentrated in the southwest part of Guangzhou city. Age group (65+ years) (Odds Ratio (OR) = 1.49, 95% Confidence Interval (CI) = 1.13 to 2.03), floating population (OR = 1.09, 95% CI = 1.05 to 1.15), low-education (OR = 1.08, 95% CI = 1.01 to 1.16) and non-agriculture (OR = 1.07, 95% CI = 1.03 to 1.11) were associated with DF transmission. Poisson regression results indicated that changes in DF incidence rates were significantly associated with longitude (β = -5.08, P<0.01) and latitude (β = -1.99, P<0.01). The study demonstrated that socio-environmental factors may play an important role in DF transmission in Guangzhou. As the geographic range of notified DF has significantly expanded over recent years, an early warning system based on spatiotemporal models with socio-environmental factors is urgently needed to improve the effectiveness and efficiency of dengue control and prevention.
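
    The Poisson regression step can be sketched as below, with synthetic street-level counts, a population offset, and longitude and latitude as covariates; all names and numbers are hypothetical.

```python
# A short, hedged Poisson-regression sketch; data are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_streets = 150
lon = rng.uniform(113.2, 113.6, n_streets)
lat = rng.uniform(22.9, 23.4, n_streets)
pop = rng.integers(5_000, 80_000, n_streets)

# toy spatial gradient: incidence highest in the southwest of the study area
rate_per_1e5 = np.exp(2.0 - 5.0 * (lon - 113.4) - 2.0 * (lat - 23.1))
cases = rng.poisson(rate_per_1e5 * pop / 1e5)

X = sm.add_constant(np.column_stack([lon, lat]))
fit = sm.GLM(cases, X, family=sm.families.Poisson(),
             offset=np.log(pop / 1e5)).fit()
print(fit.params)   # negative lon/lat coefficients: incidence declines toward the northeast
```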

  1. A mathematical model for ethanol fermentation from oil palm trunk sap using Saccharomyces cerevisiae

    NASA Astrophysics Data System (ADS)

    Sultana, S.; Jamil, Norazaliza Mohd; Saleh, E. A. M.; Yousuf, A.; Faizal, Che Ku M.

    2017-09-01

    This paper presents a mathematical model and solution strategy for ethanol fermentation of oil palm trunk (OPT) sap, considering the effects of substrate limitation, substrate inhibition, product inhibition and cell death. To investigate the effect of the cell death rate on the fermentation process, we extended and improved the current mathematical model. The kinetic parameters of the model were determined by nonlinear regression using a maximum likelihood function. The temporal profiles of sugar, cell and ethanol concentrations were modelled by a set of ordinary differential equations, which were solved numerically by the 4th-order Runge-Kutta method. The model was validated against experimental data, and the agreement between the model and the experimental results demonstrates that the model is reasonable for predicting the dynamic behaviour of the fermentation process.
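
    A compact sketch of this kind of kinetic model is given below: Monod growth with substrate inhibition, product inhibition and a cell-death term, integrated with a Runge-Kutta scheme. The parameter values are illustrative assumptions, not the fitted kinetic parameters of the study.

```python
# A compact kinetic-model sketch (illustrative parameters, not the fitted ones).
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Ki = 0.4, 2.0, 150.0          # 1/h, g/L, g/L (assumed)
Pmax, Yxs, Ypx, kd = 90.0, 0.1, 4.5, 0.01 # product limit, yields, death rate (assumed)

def fermentation(t, y):
    X, S, P = y                                        # cells, sugar, ethanol (g/L)
    mu = mu_max * S / (Ks + S + S**2 / Ki) * max(0.0, 1 - P / Pmax)
    dX = mu * X - kd * X                               # growth minus cell death
    dS = -mu * X / Yxs                                 # substrate consumption
    dP = Ypx * mu * X                                  # growth-associated ethanol
    return [dX, dS, dP]

# Runge-Kutta integration of the three coupled ODEs over a 48 h fermentation
sol = solve_ivp(fermentation, (0, 48), [0.5, 80.0, 0.0],
                method="RK45", t_eval=np.linspace(0, 48, 200))
X_t, S_t, P_t = sol.y
```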

  2. Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.

    PubMed

    Trninić, Marko; Jeličić, Mario; Papić, Vladan

    2015-07-01

    In kinesiology, medicine, biology and psychology, in which research focus is on dynamical self-organized systems, complex connections exist between variables. Non-linear nature of complex systems has been discussed and explained by the example of non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models, and by (b) including all basketball athletes in the same sample of participants regardless of their playing position. In this paper the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample of participants consisted of top-level junior basketball players divided in three groups according to their playing time (8 minutes and more per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear (general model) and non-linear (general model) regression models were calculated simultaneously and separately for each group. The conclusion is viable: non-linear regressions are frequently superior to linear correlations when interpreting actual association logic among research variables.

  3. A novel Gaussian process regression model for state-of-health estimation of lithium-ion battery using charging curve

    NASA Astrophysics Data System (ADS)

    Yang, Duo; Zhang, Xu; Pan, Rui; Wang, Yujie; Chen, Zonghai

    2018-04-01

    State-of-health (SOH) estimation is always a crucial issue for lithium-ion batteries. In order to provide an accurate and reliable SOH estimate, a novel Gaussian process regression (GPR) model based on the charging curve is proposed in this paper. Unlike other studies, where SOH is commonly estimated from cycle life, in this work four specific parameters extracted from charging curves are used as inputs to the GPR model instead of cycle numbers. These parameters can reflect the battery aging phenomenon from different angles. The grey relational analysis method is applied to analyze the relational grade between the selected features and SOH. On the other hand, some adjustments are made in the proposed GPR model: the covariance function design and the similarity measurement of input variables are modified so as to improve the SOH estimation accuracy and adapt to the case of multidimensional inputs. Several aging datasets from the NASA data repository are used to demonstrate the estimation performance of the proposed method. Results show that the proposed method has high SOH estimation accuracy. In addition, a battery with a dynamic discharging profile is used to verify the robustness and reliability of the method.
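
    A hedged sketch of the mapping from charging-curve features to SOH follows, using a standard Gaussian process regressor with an anisotropic RBF kernel; the four features, the kernel and the data are placeholders and do not reproduce the paper's modified covariance design.

```python
# A placeholder GPR sketch: four hypothetical charging-curve features -> SOH.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
cycles = np.arange(120)
cc_time = 3600 - 8 * cycles + rng.normal(0, 30, cycles.size)    # constant-current phase (s)
cv_time = 600 + 4 * cycles + rng.normal(0, 20, cycles.size)     # constant-voltage phase (s)
dv_slope = 0.5 + 0.002 * cycles + rng.normal(0, 0.01, cycles.size)
temp_rise = 3 + 0.01 * cycles + rng.normal(0, 0.2, cycles.size)
features = np.column_stack([cc_time, cv_time, dv_slope, temp_rise])
soh = 1.0 - 0.002 * cycles + rng.normal(0, 0.005, cycles.size)  # capacity ratio (synthetic)

scaler = StandardScaler().fit(features[:80])
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(4)) + WhiteKernel(1e-4)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(scaler.transform(features[:80]), soh[:80])

# predictive mean and uncertainty on later cycles
soh_pred, soh_std = gpr.predict(scaler.transform(features[80:]), return_std=True)
```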

  4. Linking multi-temporal satellite imagery to coastal wetland dynamics and bird distribution

    USGS Publications Warehouse

    Pickens, Bradley A.; King, Sammy L.

    2014-01-01

    Ecosystems are characterized by dynamic ecological processes, such as flooding and fires, but spatial models are often limited to a single measurement in time. The characterization of direct, fine-scale processes affecting animals is potentially valuable for management applications, but these are difficult to quantify over broad extents. Direct predictors are also expected to improve transferability of models beyond the area of study. Here, we investigated the ability of non-static and multi-temporal habitat characteristics to predict marsh bird distributions, while testing model generality and transferability between two coastal habitats. Distribution models were developed for king rail (Rallus elegans), common gallinule (Gallinula galeata), least bittern (Ixobrychus exilis), and purple gallinule (Porphyrio martinica) in fresh and intermediate marsh types in the northern Gulf Coast of Louisiana and Texas, USA. For model development, repeated point count surveys of marsh birds were conducted from 2009 to 2011. Landsat satellite imagery was used to quantify both annual conditions and cumulative, multi-temporal habitat characteristics. We used multivariate adaptive regression splines to quantify bird-habitat relationships for fresh, intermediate, and combined marsh habitats. Multi-temporal habitat characteristics ranked as more important than single-date characteristics, as temporary water was most influential in six of eight models. Predictive power was greater for marsh type-specific models compared to general models and model transferability was poor. Birds in fresh marsh selected for annual habitat characterizations, while birds in intermediate marsh selected for cumulative wetness and heterogeneity. Our findings emphasize that dynamic ecological processes can affect species distribution and species-habitat relationships may differ with dominant landscape characteristics.

  5. Transmission of linear regression patterns between time series: From relationship in time series to complex networks

    NASA Astrophysics Data System (ADS)

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

    The linear regression parameters between two time series can be different under different lengths of observation period. If we study the whole period by the sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We tackle fundamental research that presents a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequency of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.

  6. Transmission of linear regression patterns between time series: from relationship in time series to complex networks.

    PubMed

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

    The linear regression parameters between two time series can be different under different lengths of observation period. If we study the whole period by the sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We tackle fundamental research that presents a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequency of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
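
    A condensed sketch of the algorithm, under stated assumptions: a short window slides along two series, each window's OLS fit is classified into a pattern by slope sign and significance, and transitions between consecutive patterns form a directed, weighted network. The pattern definition here is simplified relative to the paper, which also bins the parameter intervals and varies the window size.

```python
# A condensed sketch of the pattern-transmission idea (simplified pattern definition).
import numpy as np
import networkx as nx
from scipy import stats

rng = np.random.default_rng(9)
x = np.cumsum(rng.standard_normal(500))
y = 0.5 * x + np.cumsum(rng.standard_normal(500))

window = 30
patterns = []
for start in range(len(x) - window):
    xs, ys = x[start:start + window], y[start:start + window]
    slope, _, _, p_value, _ = stats.linregress(xs, ys)
    sign = "pos" if slope > 0 else "neg"
    sig = "sig" if p_value < 0.05 else "ns"
    patterns.append(f"{sign}-{sig}")                    # pattern label = network node

G = nx.DiGraph()
for a, b in zip(patterns[:-1], patterns[1:]):           # transitions = weighted edges
    if G.has_edge(a, b):
        G[a][b]["weight"] += 1
    else:
        G.add_edge(a, b, weight=1)

# weighted out-degree highlights the dominant patterns in the transmission process
print(sorted(G.out_degree(weight="weight"), key=lambda kv: -kv[1]))
```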

  7. Environmental modeling in data-sparse regions: Mozambique demonstrator case

    NASA Astrophysics Data System (ADS)

    Schumann, G.; Niebuhr, E.; Rashid, K.; Escobar, V. M.; Andreadis, K.; Njoku, E. G.; Neal, J. C.; Voisin, N.; Pappenberger, F.; Phanthuwongpakdee, N.; Bates, P. D.; Chao, Y.; Moller, D.; Paron, P.

    2014-12-01

    Long time-series computations of seasonal and flood event inundation volumes from archived forecast rainfall events for the Lower Zambezi basin (Mozambique), using a coupled hydrology-hydrodynamic model, are correlated and regressed with satellite soil moisture observations and NWP rainfall forecasts as predictors for inundation volumes. This dynamic library of volume predictions can then be re-projected onto the topography to generate the corresponding floodplain and wetland inundation dynamics, including periods of flood and low flows. Especially for data-poor regions, the application potential of such a library of data is invaluable as the modeling chain is greatly simplified and readily available. The library is flexible, portable and transitional. Furthermore, deriving environmental indicators from this dynamic look-up catalogue would be relatively straightforward. Application fields are various and here we present conceptually a few that we plan to research in more detail and on some of which we already collaborate with other scientists and international institutions, though at the moment largely on an unfunded basis. The primary application is to implement an early warning system for flood inundation relief operations and flood inundation mitigation and resilience. Having this flood inundation warning system set up adequately would also allow looking into long-term predictions of crop productivity and consequently food security. Another potentially high-impact application is to relate flood inundation dynamics to disease modeling for public health monitoring and prediction, in particular focusing on Malaria. Last but not least, the dynamic inundation library we are building can be validated and complemented with advanced airborne radar imagery of flooding and inundated wetlands to study changes in wetland ecology and biodiversity with unprecedented detail in data-poor regions, in this case in particular the important wetlands of the Zambezi Delta.

  8. Derivation of the linear-logistic model and Cox's proportional hazard model from a canonical system description.

    PubMed

    Voit, E O; Knapp, R G

    1997-08-15

    The linear-logistic regression model and Cox's proportional hazard model are widely used in epidemiology. Their successful application leaves no doubt that they are accurate reflections of observed disease processes and their associated risks or incidence rates. In spite of their prominence, it is not a priori evident why these models work. This article presents a derivation of the two models from the framework of canonical modeling. It begins with a general description of the dynamics between risk sources and disease development, formulates this description in the canonical representation of an S-system, and shows how the linear-logistic model and Cox's proportional hazard model follow naturally from this representation. The article interprets the model parameters in terms of epidemiological concepts as well as in terms of general systems theory and explains the assumptions and limitations generally accepted in the application of these epidemiological models.
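
    For reference, the two models discussed above are restated below in their usual textbook forms; the article's contribution is deriving them from an S-system representation, which is not reproduced here.

```latex
% Standard forms of the two epidemiological models (textbook statements, not the derivation)
\begin{align}
  % linear-logistic regression: log-odds of disease linear in covariates x_1,\dots,x_p
  \operatorname{logit} P(Y=1 \mid x) &= \ln\!\frac{P(Y=1 \mid x)}{1 - P(Y=1 \mid x)}
      = \beta_0 + \sum_{i=1}^{p} \beta_i x_i, \\
  % Cox proportional hazards: baseline hazard scaled by an exponential risk score
  h(t \mid x) &= h_0(t)\, \exp\!\Big(\sum_{i=1}^{p} \beta_i x_i\Big).
\end{align}
```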

  9. Using Electromagnetic Induction Technique to Detect Hydropedological Dynamics: Principles and Applications

    NASA Astrophysics Data System (ADS)

    Zhu, Qing; Liao, Kaihua; Doolittle, James; Lin, Henry

    2014-05-01

    Hydropedological dynamics, including soil moisture variation, subsurface flow, and the spatial distributions of different soil properties, are important parameters in ecological, environmental, hydrological, and agricultural modeling and applications. However, a technical gap exists in mapping these dynamics at intermediate spatial scales (e.g., farm and catchment scales). At intermediate scales, in-situ monitoring provides detailed data but is restricted in number and spatial coverage, while remote sensing provides more acceptable spatial coverage but has comparatively low spatial resolution, limited observation depths, and is greatly influenced by surface conditions and climate. As a non-invasive, fast, and convenient geophysical tool, electromagnetic induction (EMI) measures soil apparent electrical conductivity (ECa) and has great potential to bridge this technical gap. In this presentation, the principles of different EMI meters are briefly introduced. Then, case studies of using repeated EMI to detect the spatial distributions of subsurface convergent flow, soil moisture dynamics, soil types and their transition zones, and different soil properties are presented. The suitability, effectiveness, and accuracy of EMI are evaluated for mapping different hydropedological dynamics. Lastly, the contributions of different hydropedological and terrain properties to soil ECa are quantified under different wetness conditions, seasons, and land use types using a Classification and Regression Tree model. Trend removal and residual analysis are then used for further mining of the EMI survey data. Based on these analyses, proper EMI survey designs and data processing are proposed.

  10. Characterizing Touch Using Pressure Data and Auto Regressive Models

    PubMed Central

    Laufer, Shlomi; Pugh, Carla M.; Van Veen, Barry D.

    2014-01-01

    Palpation plays a critical role in medical physical exams. Despite the wide range of exams, there are several reproducible and subconscious sets of maneuvers that are common to examination by palpation. Previous studies by our group demonstrated the use of manikins and pressure sensors for measuring and quantifying how physicians palpate during different physical exams. In this study we develop mathematical models that describe some of these common maneuvers. Dynamic pressure data were measured using a simplified testbed and different autoregressive models were used to describe the motion of interest. The frequency, direction and type of motion used were identified from the models. We believe these models can provide a better understanding of how humans explore objects in general and, more specifically, give insights into medical physical exams. PMID:25570335
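
    A small sketch of the modelling idea follows: autoregressive models of a few orders are fitted to a pressure trace from one sensor, and the dominant oscillation frequency is read off the AR characteristic roots. The signal, sampling rate and model orders are illustrative assumptions.

```python
# A small AR-model sketch for a palpation-like pressure trace (illustrative values).
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(10)
fs = 100.0                                            # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
pressure = np.sin(2 * np.pi * 1.5 * t) + 0.2 * rng.standard_normal(t.size)  # ~1.5 Hz motion

# fit AR models of a few candidate orders and keep the one with the lowest AIC
best = min((AutoReg(pressure, lags=p).fit() for p in (2, 4, 8)), key=lambda r: r.aic)

# dominant oscillation frequency from the AR characteristic roots
roots = np.roots(np.r_[1, -best.params[1:]])          # drop the intercept term
dominant = roots[np.argmax(np.abs(roots))]
freq = np.abs(np.angle(dominant)) * fs / (2 * np.pi)
print(f"AR order {len(best.params) - 1}, dominant frequency ~ {freq:.2f} Hz")
```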

  11. Optimization of polyphenol removal from kiwifruit juice using a macroporous resin.

    PubMed

    Gao, Zhenpeng; Yu, Zhifang; Yue, Tianli; Quek, Siew Young

    2017-06-01

    The separation of polyphenols from kiwifruit juice is essential for enhancing sensory properties and preventing the browning reaction in juice during processing and storage. The present study investigated the dynamic adsorption and desorption of polyphenols in kiwifruit juice using AB-8 resin. The model obtained could be successfully applied to predict the experimental results for dynamic adsorption capacity (DAC) and dynamic desorption quantity (DDQ). The results showed that dynamic adsorption of polyphenols could be optimised at a juice concentration of 19 °Brix, with a feed flow-rate of 1.3 mL min-1 and a feed volume of 7 bed volumes (BV). The optimum conditions for dynamic desorption of polyphenols from the AB-8 resin were an ethanol concentration of 43% (v/v), an elution flow-rate of 2.2 mL min-1 and an elution volume of 3 BV. The optimized DAC value was 3.16 g of polyphenols kg-1 resin, whereas that for DDQ was 917.5 g kg-1, with both values being consistent with the predicted values generated by the regression models. The major polyphenols in the dynamic desorption solution comprised seven compounds. The present study could be scaled up using a continuous column system for industrial application, thus contributing to improved flavor and color of kiwifruit juice. © 2016 Society of Chemical Industry.

  12. Spatial patterns of March and September streamflow trends in Pacific Northwest Streams, 1958-2008

    USGS Publications Warehouse

    Chang, Heejun; Jung, Il-Won; Steele, Madeline; Gannett, Marshall

    2012-01-01

    Summer streamflow is a vital water resource for municipal and domestic water supplies, irrigation, salmonid habitat, recreation, and water-related ecosystem services in the Pacific Northwest (PNW) in the United States. This study detects significant negative trends in September absolute streamflow in a majority of 68 stream-gauging stations located on unregulated streams in the PNW from 1958 to 2008. The proportion of March streamflow to annual streamflow increases in most stations over 1,000 m elevation, with a baseflow index of less than 50, while absolute March streamflow does not increase in most stations. The declining trends of September absolute streamflow are strongly associated with seven-day low flow, January–March maximum temperature trends, and the size of the basin (19–7,260 km2), while the increasing trends of the fraction of March streamflow are associated with elevation, April 1 snow water equivalent, March precipitation, center timing of streamflow, and October–December minimum temperature trends. Compared with ordinary least squares (OLS) estimated regression models, spatial error regression and geographically weighted regression (GWR) models effectively remove spatial autocorrelation in residuals. The GWR model results show spatial gradients of local R2 values with consistently higher local R2 values in the northern Cascades. This finding illustrates that different hydrologic landscape factors, such as geology and seasonal distribution of precipitation, also influence streamflow trends in the PNW. In addition, our spatial analysis model results show that considering various geographic factors help clarify the dynamics of streamflow trends over a large geographical area, supporting a spatial analysis approach over aspatial OLS-estimated regression models for predicting streamflow trends. Results indicate that transitional rain–snow surface water-dominated basins are likely to have reduced summer streamflow under warming scenarios. Consequently, a better understanding of the relationships among summer streamflow, precipitation, snowmelt, elevation, and geology can help water managers predict the response of regional summer streamflow to global warming.

  13. Fear of falling and postural reactivity in patients with glaucoma.

    PubMed

    Daga, Fábio B; Diniz-Filho, Alberto; Boer, Erwin R; Gracitelli, Carolina P B; Abe, Ricardo Y; Medeiros, Felipe A

    2017-01-01

    To investigate the relationship between postural metrics obtained by dynamic visual stimulation in a virtual reality environment and the presence of fear of falling in glaucoma patients. This cross-sectional study included 35 glaucoma patients and 26 controls that underwent evaluation of postural balance by a force platform during presentation of static and dynamic visual stimuli with head-mounted goggles (Oculus Rift). In dynamic condition, a peripheral translational stimulus was used to induce vection and assess postural reactivity. Standard deviations of torque moments (SDTM) were calculated as indicative of postural stability. Fear of falling was assessed by a standardized questionnaire. The relationship between a summary score of fear of falling and postural metrics was investigated using linear regression models, adjusting for potentially confounding factors. Subjects with glaucoma reported greater fear of falling compared to controls (-0.21 vs. 0.27; P = 0.039). In glaucoma patients, postural metrics during dynamic visual stimulus were more associated with fear of falling (R2 = 18.8%; P = 0.001) than static (R2 = 3.0%; P = 0.005) and dark field (R2 = 5.7%; P = 0.007) conditions. In the univariable model, fear of falling was not significantly associated with binocular standard perimetry mean sensitivity (P = 0.855). In the multivariable model, each 1 Nm larger SDTM in anteroposterior direction during dynamic stimulus was associated with a worsening of 0.42 units in the fear of falling questionnaire score (P = 0.001). In glaucoma patients, postural reactivity to a dynamic visual stimulus using a virtual reality environment was more strongly associated with fear of falling than visual field testing and traditional balance assessment.

  14. Fear of falling and postural reactivity in patients with glaucoma

    PubMed Central

    Daga, Fábio B.; Diniz-Filho, Alberto; Boer, Erwin R.; Gracitelli, Carolina P. B.; Abe, Ricardo Y.; Medeiros, Felipe A.

    2017-01-01

    Purpose To investigate the relationship between postural metrics obtained by dynamic visual stimulation in a virtual reality environment and the presence of fear of falling in glaucoma patients. Methods This cross-sectional study included 35 glaucoma patients and 26 controls who underwent evaluation of postural balance by a force platform during presentation of static and dynamic visual stimuli with head-mounted goggles (Oculus Rift). In the dynamic condition, a peripheral translational stimulus was used to induce vection and assess postural reactivity. Standard deviations of torque moments (SDTM) were calculated as indicative of postural stability. Fear of falling was assessed by a standardized questionnaire. The relationship between a summary score of fear of falling and postural metrics was investigated using linear regression models, adjusting for potentially confounding factors. Results Subjects with glaucoma reported greater fear of falling compared to controls (-0.21 vs. 0.27; P = 0.039). In glaucoma patients, postural metrics during the dynamic visual stimulus were more strongly associated with fear of falling (R2 = 18.8%; P = 0.001) than static (R2 = 3.0%; P = 0.005) and dark field (R2 = 5.7%; P = 0.007) conditions. In the univariable model, fear of falling was not significantly associated with binocular standard perimetry mean sensitivity (P = 0.855). In the multivariable model, each 1 Nm larger SDTM in the anteroposterior direction during the dynamic stimulus was associated with a worsening of 0.42 units in the fear of falling questionnaire score (P = 0.001). Conclusion In glaucoma patients, postural reactivity to a dynamic visual stimulus using a virtual reality environment was more strongly associated with fear of falling than visual field testing and traditional balance assessment. PMID:29211742
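
    The kind of adjusted linear model described above can be sketched with statsmodels as follows; the column names and simulated data are hypothetical stand-ins for the fear-of-falling score, the SDTM postural metric, and the confounders.

```python
# Sketch of a multivariable linear model: a fear-of-falling score regressed
# on a postural metric (SDTM) while adjusting for potential confounders.
# Column names and data are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 61
df = pd.DataFrame({
    "fof_score": rng.normal(size=n),            # fear-of-falling summary score
    "sdtm_ap": rng.normal(loc=2.0, size=n),     # SDTM, anteroposterior (Nm)
    "age": rng.normal(loc=65, scale=8, size=n), # confounder
    "glaucoma": rng.integers(0, 2, size=n),     # group indicator
})

model = smf.ols("fof_score ~ sdtm_ap + age + glaucoma", data=df).fit()
print(model.params)       # adjusted association of SDTM with fear of falling
print(model.rsquared)
```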

  15. Global sensitivity analysis of the BSM2 dynamic influent disturbance scenario generator.

    PubMed

    Flores-Alsina, Xavier; Gernaey, Krist V; Jeppsson, Ulf

    2012-01-01

    This paper presents the results of a global sensitivity analysis (GSA) of a phenomenological model that generates dynamic wastewater treatment plant (WWTP) influent disturbance scenarios. This influent model is part of the Benchmark Simulation Model (BSM) family and creates realistic dry/wet weather files describing diurnal, weekend and seasonal variations through the combination of different generic model blocks, i.e. households, industry, rainfall and infiltration. The GSA is carried out by combining Monte Carlo simulations and standardized regression coefficients (SRC). Cluster analysis is then applied, classifying the influence of the model parameters into strong, medium and weak. The results show that the method is able to decompose the variance of the model predictions (R2 > 0.9) satisfactorily, thus identifying the model parameters with the strongest impact on several flow rate descriptors calculated at different time resolutions. Catchment size (PE) and the production of wastewater per person equivalent (QperPE) are two parameters that strongly influence the yearly average dry weather flow rate and its variability. Wet weather conditions are mainly affected by three parameters: (1) the probability of occurrence of a rain event (Llrain); (2) the catchment size, incorporated in the model as a parameter representing the conversion from mm rain · day-1 to m3 · day-1 (Qpermm); and (3) the quantity of rain falling on permeable areas (aH). The case study also shows that in both dry and wet weather conditions the SRC ranking changes when the time scale of the analysis is modified, thus demonstrating the potential to identify the effect of the model parameters on the fast/medium/slow dynamics of the flow rate. The paper ends with a discussion on the interpretation of GSA results and of the advantages of using synthetic dynamic flow rate data for WWTP influent scenario generation. This section also includes general suggestions on how to apply the proposed methodology to any influent generator in order to adapt the created time series to a modeller's demands.
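
    A minimal sketch of the Monte Carlo / standardized regression coefficient (SRC) step follows, assuming a cheap toy function in place of the BSM2 influent generator; the parameter names echo the abstract but the ranges and the model itself are invented.

```python
# Sketch of a Monte Carlo / standardized regression coefficient (SRC) global
# sensitivity analysis. A toy function stands in for the influent generator.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
# Uniform Monte Carlo sample of three "influent" parameters (invented ranges)
PE     = rng.uniform(5e4, 5e5, n)    # catchment size (person equivalents)
QperPE = rng.uniform(0.1, 0.4, n)    # wastewater produced per person equivalent
llrain = rng.uniform(0.05, 0.3, n)   # probability of a rain event

# Toy flow-rate descriptor (not the BSM2 model)
y = PE * QperPE + 2e4 * llrain + rng.normal(scale=1e3, size=n)

X = np.column_stack([PE, QperPE, llrain])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize inputs
ys = (y - y.mean()) / y.std()                   # standardize output
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)   # SRCs = coefficients of the
                                                # standardized linear model
r2 = 1 - np.sum((ys - Xs @ src) ** 2) / np.sum(ys ** 2)
print("SRCs:", dict(zip(["PE", "QperPE", "llrain"], src.round(3))))
print("R2 of the linear decomposition:", round(r2, 3))  # close to 1 here
```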

  16. Dynamic travel time estimation using regression trees.

    DOT National Transportation Integrated Search

    2008-10-01

    This report presents a methodology for travel time estimation by using regression trees. The dissemination of travel time information has become crucial for effective traffic management, especially under congested road conditions. In the absence of c...
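
    A small sketch of travel-time estimation with a regression tree is shown below; the traffic features and data are synthetic placeholders rather than the report's data set.

```python
# Sketch of travel-time estimation with a regression tree, in the spirit of
# the report above; features and data are simulated for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
flow      = rng.uniform(200, 2000, n)    # vehicles per hour on the link
occupancy = rng.uniform(0.02, 0.6, n)    # detector occupancy
hour      = rng.integers(0, 24, n)       # time of day
# Toy ground truth: travel time grows sharply once the link is congested
travel_time = 5 + 20 * occupancy ** 2 + 0.002 * flow + rng.normal(0, 0.5, n)

X = np.column_stack([flow, occupancy, hour])
X_tr, X_te, y_tr, y_te = train_test_split(X, travel_time, random_state=0)

tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=20)
tree.fit(X_tr, y_tr)
print("held-out R2:", round(tree.score(X_te, y_te), 3))
```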

  17. 3D Reconstruction of Chick Embryo Vascular Geometries Using Non-invasive High-Frequency Ultrasound for Computational Fluid Dynamics Studies.

    PubMed

    Tan, Germaine Xin Yi; Jamil, Muhammad; Tee, Nicole Gui Zhen; Zhong, Liang; Yap, Choon Hwai

    2015-11-01

    Recent animal studies have provided evidence that prenatal blood flow fluid mechanics may play a role in the pathogenesis of congenital cardiovascular malformations. To further this research, it is important to have an imaging technique for small animal embryos with sufficient resolution to support computational fluid dynamics studies, and that is also non-invasive and non-destructive to allow for subject-specific, longitudinal studies. In the current study, we developed such a technique, based on ultrasound biomicroscopy scans on chick embryos. Our technique included a motion cancelation algorithm to negate embryonic body motion, a temporal averaging algorithm to differentiate blood spaces from tissue spaces, and 3D reconstruction of blood volumes in the embryo. The accuracy of the reconstructed models was validated with direct stereoscopic measurements. A computational fluid dynamics simulation was performed to model fluid flow in the generated construct of a Hamburger-Hamilton (HH) stage 27 embryo. Simulation results showed that there were divergent streamlines and a low shear region at the carotid duct, which may be linked to the carotid duct's eventual regression and disappearance by HH stage 34. We show that our technique has sufficient resolution to produce accurate geometries for computational fluid dynamics simulations to quantify embryonic cardiovascular fluid mechanics.

  18. Linking family dynamics and the mental health of Colombian dementia caregivers.

    PubMed

    Sutter, Megan; Perrin, Paul B; Chang, Yu-Ping; Hoyos, Guillermo Ramirez; Buraye, Jaqueline Arabia; Arango-Lasprilla, Juan Carlos

    2014-02-01

    This cross-sectional, quantitative, self-report study examined the relationship between family dynamics (cohesion, flexibility, pathology/functioning, communication, family satisfaction, and empathy) and mental health (depression, burden, stress, and satisfaction with life [SWL]) in 90 dementia caregivers from Colombia. Hierarchical multiple regressions controlling for caregiver demographics found that family dynamics were significantly associated with caregiver depression, stress, and SWL and marginally associated with burden. Within these regressions, empathy was uniquely associated with stress; flexibility with depression and marginally with SWL; and family communication marginally with burden and stress. Nearly all family dynamic variables were bivariately associated with caregiver mental health variables, such that caregivers had stronger mental health when their family dynamics were healthy. Family-systems interventions in global regions with high levels of familism like that in the current study may improve family empathy, flexibility, and communication, thereby producing better caregiver mental health and better informal care for people with dementia.

  19. Spatial and temporal synchrony in reptile population dynamics in variable environments.

    PubMed

    Greenville, Aaron C; Wardle, Glenda M; Nguyen, Vuong; Dickman, Chris R

    2016-10-01

    Resources are seldom distributed equally across space, but many species exhibit spatially synchronous population dynamics. Such synchrony suggests the operation of large-scale external drivers, such as rainfall or wildfire, or the influence of oasis sites that provide water, shelter, or other resources. However, testing the generality of these factors is not easy, especially in variable environments. Using a long-term dataset (13-22 years) from a large (8000 km2) study region in arid Central Australia, we tested firstly for regional synchrony in annual rainfall and the dynamics of six reptile species across nine widely separated sites. For species that showed synchronous spatial dynamics, we then used multivariate auto-regressive state-space (MARSS) models to predict that regional rainfall would be positively associated with their populations. For asynchronous species, we used MARSS models to explore four other possible population structures: (1) populations were asynchronous, (2) differed between oasis and non-oasis sites, (3) differed between burnt and unburnt sites, or (4) differed between three sub-regions with different rainfall gradients. Only one species showed evidence of spatial population synchrony and our results provide little evidence that rainfall synchronizes reptile populations. The oasis or the wildfire hypotheses were the best-fitting models for the other five species. Thus, our six study species appear generally to be structured in space into one or two populations across the study region. Our findings suggest that for arid-dwelling reptile populations, spatial and temporal dynamics are structured by abiotic events, but individual responses to covariates at smaller spatial scales are complex and poorly understood.

  20. Twist Model Development and Results from the Active Aeroelastic Wing F/A-18 Aircraft

    NASA Technical Reports Server (NTRS)

    Lizotte, Andrew M.; Allen, Michael J.

    2007-01-01

    Understanding the wing twist of the active aeroelastic wing (AAW) F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption. This technique produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.

  1. Twist Model Development and Results From the Active Aeroelastic Wing F/A-18 Aircraft

    NASA Technical Reports Server (NTRS)

    Lizotte, Andrew; Allen, Michael J.

    2005-01-01

    Understanding the wing twist of the active aeroelastic wing F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption and by using neural networks. These techniques produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.

  2. A Bayesian approach to modelling the impact of hydrodynamic shear stress on biofilm deformation

    PubMed Central

    Wilkinson, Darren J.; Jayathilake, Pahala Gedara; Rushton, Steve P.; Bridgens, Ben; Li, Bowen; Zuliani, Paolo

    2018-01-01

    We investigate the feasibility of using a surrogate-based method to emulate the deformation and detachment behaviour of a biofilm in response to hydrodynamic shear stress. The influence of shear force, growth rate and viscoelastic parameters on the patterns of growth, structure and resulting shape of microbial biofilms was examined. We develop a statistical modelling approach to this problem, using a combination of Bayesian Poisson regression and dynamic linear models for the emulation. We observe that the hydrodynamic shear force affects biofilm deformation in line with some of the literature. Sensitivity results also showed that the expected number of shear events, shear flow, yield coefficient for heterotrophic bacteria and extracellular polymeric substance (EPS) stiffness per unit EPS mass are the four principal mechanisms governing bacterial detachment in this study. The sensitivity of the model parameters is temporally dynamic, emphasising the significance of conducting the sensitivity analysis across multiple time points. The surrogate models are shown to perform well, and produced a ≈480-fold increase in computational efficiency. We conclude that a surrogate-based approach is effective, and that the resulting biofilm structure is determined primarily by a balance between bacterial growth, viscoelastic parameters and applied shear stress. PMID:29649240

  3. A perspective on bridging scales and design of models using low-dimensional manifolds and data-driven model inference

    PubMed Central

    Zenil, Hector; Kiani, Narsis A.; Ball, Gordon; Gomez-Cabrero, David

    2016-01-01

    Systems in nature capable of collective behaviour are nonlinear, operating across several scales. Yet our ability to account for their collective dynamics differs in physics, chemistry and biology. Here, we briefly review the similarities and differences between mathematical modelling of adaptive living systems versus physico-chemical systems. We find that physics-based chemistry modelling and computational neuroscience have a shared interest in developing techniques for model reductions aiming at the identification of a reduced subsystem or slow manifold, capturing the effective dynamics. By contrast, as relations and kinetics between biological molecules are less characterized, current quantitative analysis under the umbrella of bioinformatics focuses on signal extraction, correlation, regression and machine-learning analysis. We argue that model reduction analysis and the ensuing identification of manifolds bridges physics and biology. Furthermore, modelling living systems presents deep challenges as how to reconcile rich molecular data with inherent modelling uncertainties (formalism, variables selection and model parameters). We anticipate a new generative data-driven modelling paradigm constrained by identified governing principles extracted from low-dimensional manifold analysis. The rise of a new generation of models will ultimately connect biology to quantitative mechanistic descriptions, thereby setting the stage for investigating the character of the model language and principles driving living systems. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’. PMID:27698038

  4. Predicting infant cortical surface development using a 4D varifold-based learning framework and local topography-based shape morphing.

    PubMed

    Rekik, Islem; Li, Gang; Lin, Weili; Shen, Dinggang

    2016-02-01

    Longitudinal neuroimaging analysis methods have remarkably advanced our understanding of early postnatal brain development. However, learning predictive models to trace forth the evolution trajectories of both normal and abnormal cortical shapes remains broadly absent. To fill this critical gap, we pioneered the first prediction model for longitudinal developing cortical surfaces in infants using a spatiotemporal current-based learning framework solely from the baseline cortical surface. In this paper, we detail this prediction model and even further improve its performance by introducing two key variants. First, we use the varifold metric to overcome the limitations of the current metric for surface registration that was used in our preliminary study. We also extend the conventional varifold-based surface registration model for pairwise registration to a spatiotemporal surface regression model. Second, we propose a morphing process of the baseline surface using its topographic attributes such as normal direction and principal curvature sign. Specifically, our method learns from longitudinal data both the geometric (vertex positions) and dynamic (temporal evolution trajectories) features of the infant cortical surface, comprising a training stage and a prediction stage. In the training stage, we use the proposed varifold-based shape regression model to estimate geodesic cortical shape evolution trajectories for each training subject. We then build an empirical mean spatiotemporal surface atlas. In the prediction stage, given an infant, we select the best learnt features from training subjects to simultaneously predict the cortical surface shapes at all later timepoints, based on similarity metrics between this baseline surface and the learnt baseline population average surface atlas. We used a leave-one-out cross validation method to predict the inner cortical surface shape at 3, 6, 9 and 12 months of age from the baseline cortical surface shape at birth. Our method attained a higher prediction accuracy and better captured the spatiotemporal dynamic change of the highly folded cortical surface than the previously proposed prediction method. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Using species abundance distribution models and diversity indices for biogeographical analyses

    NASA Astrophysics Data System (ADS)

    Fattorini, Simone; Rigal, François; Cardoso, Pedro; Borges, Paulo A. V.

    2016-01-01

    We examine whether Species Abundance Distribution models (SADs) and diversity indices can describe how species colonization status influences species community assembly on oceanic islands. Our hypothesis is that, because of the lack of source-sink dynamics at the archipelago scale, Single Island Endemics (SIEs), i.e. endemic species restricted to only one island, should be represented by few rare species and consequently have abundance patterns that differ from those of more widespread species. To test our hypothesis, we used arthropod data from the Azorean archipelago (North Atlantic). We divided the species into three colonization categories: SIEs, archipelagic endemics (AZEs, present in at least two islands) and native non-endemics (NATs). For each category, we modelled rank-abundance plots using both the geometric series and the Gambin model, a measure of distributional amplitude. We also calculated Shannon entropy and Buzas and Gibson's evenness. We show that the slopes of the regression lines modelling SADs were significantly higher for SIEs, which indicates a relative predominance of a few highly abundant species and a lack of rare species, which also depresses diversity indices. This may be a consequence of two factors: (i) some forest specialist SIEs may be at an advantage over other, less adapted species; (ii) the entire populations of SIEs are by definition concentrated on a single island, without possibility for inter-island source-sink dynamics; hence all populations must have a minimum number of individuals to survive natural, often unpredictable, fluctuations. These findings are supported by higher values of the α parameter of the Gambin model for SIEs. In contrast, AZEs and NATs had lower regression slopes, lower α but higher diversity indices, resulting from their widespread distribution over several islands. We conclude that these differences in the SAD models and diversity indices demonstrate that the study of these metrics is useful for biogeographical purposes.
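
    The slope comparison described above can be illustrated with a short sketch: under a geometric series, log abundance falls roughly linearly with rank, so the fitted slope summarizes how strongly a few species dominate. The abundance vectors below are invented.

```python
# Sketch of the rank-abundance comparison: fit log abundance against rank and
# compare slopes between colonization categories. A steeper (more negative)
# slope indicates dominance by a few abundant species. Abundances are toy data.
import numpy as np

def rank_abundance_slope(abundances):
    ranked = np.sort(np.asarray(abundances, dtype=float))[::-1]
    ranks = np.arange(1, ranked.size + 1)
    slope, intercept = np.polyfit(ranks, np.log(ranked), 1)
    return slope

sie = [120, 45, 12, 5, 2]                       # single-island endemics (toy)
nat = [80, 60, 44, 30, 22, 15, 11, 8, 6, 4]     # native non-endemics (toy)

print("SIE slope:", round(rank_abundance_slope(sie), 3))   # steeper
print("NAT slope:", round(rank_abundance_slope(nat), 3))   # shallower
```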

  6. Using High Resolution Model Data to Improve Lightning Forecasts across Southern California

    NASA Astrophysics Data System (ADS)

    Capps, S. B.; Rolinski, T.

    2014-12-01

    Dry lightning often results in a significant amount of fire starts in areas where the vegetation is dry and continuous. Meteorologists from the USDA Forest Service Predictive Services' program in Riverside, California are tasked to provide southern and central California's fire agencies with fire potential outlooks. Logistic regression equations were developed by these meteorologists several years ago, which forecast probabilities of lightning as well as lightning amounts, out to seven days across southern California. These regression equations were developed using ten years of historical gridded data from the Global Forecast System (GFS) model on a coarse scale (0.5 degree resolution), correlated with historical lightning strike data. These equations do a reasonably good job of capturing a lightning episode (3-5 consecutive days or greater of lightning), but perform poorly regarding more detailed information such as exact location and amounts. It is postulated that the inadequacies in resolving the finer details of episodic lightning events are due to the coarse resolution of the GFS data, along with limited predictors. Stability parameters, such as the Lifted Index (LI), the Total Totals index (TT), Convective Available Potential Energy (CAPE), along with Precipitable Water (PW) are the only parameters being considered as predictors. It is hypothesized that the statistical forecasts will benefit from higher resolution data both in training and implementing the statistical model. We have dynamically downscaled NCEP FNL (Final) reanalysis data using the Weather Research and Forecasting model (WRF) to 3 km spatial and hourly temporal resolution across a decade. This dataset will be used to evaluate the contribution of additional predictors, at higher vertical, spatial, and temporal resolution, to the success of the statistical model. If successful, we will implement an operational dynamically downscaled GFS forecast product to generate predictors for the resulting statistical lightning model. This data will help fire agencies be better prepared to pre-deploy resources in advance of these events. Specific information regarding duration, amount, and location will be especially valuable.
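
    A hedged sketch of the statistical core, a logistic regression of lightning occurrence on the stability predictors named above (LI, TT, CAPE, PW), is given below with synthetic data; it is not the operational model.

```python
# Sketch of a logistic regression lightning-probability model using the
# stability predictors named in the abstract. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000
LI   = rng.normal(0, 4, n)          # lifted index (negative = unstable)
TT   = rng.normal(48, 6, n)         # total totals index
CAPE = rng.gamma(2.0, 400.0, n)     # convective available potential energy
PW   = rng.uniform(5, 45, n)        # precipitable water (mm)

# Toy truth: lightning more likely with instability and moisture
logit = -6 - 0.3 * LI + 0.05 * (TT - 45) + 0.002 * CAPE + 0.08 * PW
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([LI, TT, CAPE, PW])
clf = LogisticRegression(max_iter=1000).fit(X, y)
proba = clf.predict_proba(X[:5])[:, 1]          # forecast probabilities
print("example lightning probabilities:", proba.round(2))
```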

  7. Clinical Model of Exercise-Related Dyspnea in Adult Patients With Cystic Fibrosis.

    PubMed

    Stevens, Daniel; Neyedli, Heather F

    2018-05-01

    Dyspnea is a highly distressing symptom of pulmonary disease that can make performing physical activities challenging. However, little is known regarding the strongest predictors of exercise-related dyspnea in adult cystic fibrosis (CF). Therefore, the purpose of the present study was to determine the best clinical model of exercise-related dyspnea in this patient group. A retrospective analysis of pulmonary function and cardiopulmonary exercise testing data from patients with CF being followed up at the Adult CF Program at St Michael's Hospital, Toronto, Canada, from 2002 to 2008 was used for the analysis. Patients (n = 88) were 66% male; aged 30.4 ± 9.4 years; body mass index (BMI) 23.1 ± 3.3 kg/m2; forced expiratory volume in 1 second (FEV1) 70% ± 19% predicted; and peak oxygen uptake 74% ± 20% predicted. A multivariate linear regression model assessing the effects of age, sex, BMI, airway obstruction (FEV1), perceived muscular leg fatigue, and dynamic hyperinflation explained 54% of the variance in dyspnea severity at peak exercise (P < .01). Relative importance analysis showed that the presence of dynamic hyperinflation and perceived muscular leg fatigue were the largest contributors. Pulmonary rehabilitation programs may consider strategies to reduce dynamic hyperinflation and promote muscular function to best improve exercise-related dyspnea in this patient group.

  8. Gastrointestinal Spatiotemporal mRNA Expression of Ghrelin vs Growth Hormone Receptor and New Growth Yield Machine Learning Model Based on Perturbation Theory.

    PubMed

    Ran, Tao; Liu, Yong; Li, Hengzhi; Tang, Shaoxun; He, Zhixiong; Munteanu, Cristian R; González-Díaz, Humberto; Tan, Zhiliang; Zhou, Chuanshe

    2016-07-27

    The management of ruminant growth yield has economic importance. The current work presents a study of the spatiotemporal dynamic expression of Ghrelin and GHR at mRNA levels throughout the gastrointestinal tract (GIT) of kid goats under housing and grazing systems. The experiments show that the feeding system and age affected the expression of either Ghrelin or GHR with different mechanisms. Furthermore, the experimental data are used to build new Machine Learning models based on the Perturbation Theory, which can predict the effects of perturbations of Ghrelin and GHR mRNA expression on the growth yield. The models consider eight longitudinal GIT segments (rumen, abomasum, duodenum, jejunum, ileum, cecum, colon and rectum), seven time points (0, 7, 14, 28, 42, 56 and 70 d) and two feeding systems (Supplemental and Grazing feeding) as perturbations from the expected values of the growth yield. The best regression model was obtained using Random Forest, with the coefficient of determination R2 of 0.781 for the test subset. The current results indicate that the non-linear regression model can accurately predict the growth yield and the key nodes during gastrointestinal development, which is helpful to optimize the feeding management strategies in ruminant production system.
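
    The Random Forest regression step can be sketched with scikit-learn as below; the perturbation-style features (GIT segment, day, feeding system, expression) and the growth-yield response are simulated for illustration only.

```python
# Sketch of a Random Forest regression evaluated by test-set R2, as in the
# study above; features and response are simulated, not the goat data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 600
segment = rng.integers(0, 8, n)      # 8 GIT segments (coded 0-7)
day     = rng.choice([0, 7, 14, 28, 42, 56, 70], size=n)
feeding = rng.integers(0, 2, n)      # 0 = grazing, 1 = supplemental
expr    = rng.normal(size=n)         # perturbed mRNA expression (arbitrary units)

# Toy growth-yield response with nonlinear structure
y = (0.5 * feeding + 0.01 * day + 0.3 * np.sin(segment)
     + 0.4 * expr ** 2 + rng.normal(scale=0.2, size=n))

X = np.column_stack([segment, day, feeding, expr])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("test R2:", round(rf.score(X_te, y_te), 3))
```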

  9. Gastrointestinal Spatiotemporal mRNA Expression of Ghrelin vs Growth Hormone Receptor and New Growth Yield Machine Learning Model Based on Perturbation Theory

    PubMed Central

    Ran, Tao; Liu, Yong; Li, Hengzhi; Tang, Shaoxun; He, Zhixiong; Munteanu, Cristian R.; González-Díaz, Humberto; Tan, Zhiliang; Zhou, Chuanshe

    2016-01-01

    The management of ruminant growth yield has economic importance. The current work presents a study of the spatiotemporal dynamic expression of Ghrelin and GHR at mRNA levels throughout the gastrointestinal tract (GIT) of kid goats under housing and grazing systems. The experiments show that the feeding system and age affected the expression of either Ghrelin or GHR with different mechanisms. Furthermore, the experimental data are used to build new Machine Learning models based on the Perturbation Theory, which can predict the effects of perturbations of Ghrelin and GHR mRNA expression on the growth yield. The models consider eight longitudinal GIT segments (rumen, abomasum, duodenum, jejunum, ileum, cecum, colon and rectum), seven time points (0, 7, 14, 28, 42, 56 and 70 d) and two feeding systems (Supplemental and Grazing feeding) as perturbations from the expected values of the growth yield. The best regression model was obtained using Random Forest, with the coefficient of determination R2 of 0.781 for the test subset. The current results indicate that the non-linear regression model can accurately predict the growth yield and the key nodes during gastrointestinal development, which is helpful to optimize the feeding management strategies in ruminant production system. PMID:27460882

  10. Viscosity, relaxation time, and dynamics within a model asphalt of larger molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Derek D.; Greenfield, Michael L., E-mail: greenfield@egr.uri.edu

    2014-01-21

    The dynamic properties of a new “next generation” model asphalt system that represents SHRP AAA-1 asphalt using larger molecules than past models are studied using molecular simulation. The system contains 72 molecules distributed over 12 molecule types that range from nonpolar branched alkanes to polar resins and asphaltenes. Molecular weights range from 290 to 890 g/mol. All-atom molecular dynamics simulations conducted at six temperatures from 298.15 to 533.15 K provide a wealth of correlation data. The modified Kohlrausch-Williams-Watts equation was regressed to reorientation time correlation functions and extrapolated to calculate average rotational relaxation times for individual molecules. The rotational relaxation rate of molecules decreased significantly with increasing size and decreasing temperature. Translational self-diffusion coefficients followed an Arrhenius dependence. Similar activation energies of ∼42 kJ/mol were found for all 12 molecules in the model system, while diffusion prefactors spanned an order of magnitude. Viscosities calculated directly at 533.15 K and estimated at lower temperatures using the Debye-Stokes-Einstein relationship were consistent with experimental data for asphalts. The product of diffusion coefficient and rotational relaxation time showed only small changes with temperature above 358.15 K, indicating rotation and translation that couple self-consistently with viscosity. At lower temperatures, rotation slowed more than diffusion.
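
    A short sketch of the KWW regression step follows: the stretched exponential C(t) = exp(-(t/τ)^β) is fitted to a toy correlation function, and the average relaxation time is obtained from the standard result ⟨τ⟩ = (τ/β)Γ(1/β); the data are synthetic, not the simulation output.

```python
# Sketch of regressing the Kohlrausch-Williams-Watts stretched exponential
# to a reorientation time correlation function and deriving an average
# relaxation time. The correlation data here are toy values.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def kww(t, tau, beta):
    return np.exp(-(t / tau) ** beta)

t = np.logspace(-1, 3, 60)                      # time (arbitrary units)
true_tau, true_beta = 50.0, 0.6
c = kww(t, true_tau, true_beta) + np.random.default_rng(6).normal(0, 0.01, t.size)

(tau, beta), _ = curve_fit(kww, t, c, p0=[10.0, 0.8], bounds=(1e-6, [1e6, 1.0]))
tau_avg = (tau / beta) * gamma(1.0 / beta)      # <tau> = (tau/beta) * Gamma(1/beta)
print(f"tau={tau:.1f}, beta={beta:.2f}, <tau>={tau_avg:.1f}")
```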

  11. Performance of an Axisymmetric Rocket Based Combined Cycle Engine During Rocket Only Operation Using Linear Regression Analysis

    NASA Technical Reports Server (NTRS)

    Smith, Timothy D.; Steffen, Christopher J., Jr.; Yungster, Shaye; Keller, Dennis J.

    1998-01-01

    The all rocket mode of operation is shown to be a critical factor in the overall performance of a rocket based combined cycle (RBCC) vehicle. An axisymmetric RBCC engine was used to determine specific impulse efficiency values based upon both full flow and gas generator configurations. Design of experiments methodology was used to construct a test matrix and multiple linear regression analysis was used to build parametric models. The main parameters investigated in this study were: rocket chamber pressure, rocket exit area ratio, injected secondary flow, mixer-ejector inlet area, mixer-ejector area ratio, and mixer-ejector length-to-inlet diameter ratio. A perfect gas computational fluid dynamics analysis, using both the Spalart-Allmaras and k-omega turbulence models, was performed with the NPARC code to obtain values of vacuum specific impulse. Results from the multiple linear regression analysis showed that for both the full flow and gas generator configurations increasing mixer-ejector area ratio and rocket area ratio increase performance, while increasing mixer-ejector inlet area ratio and mixer-ejector length-to-diameter ratio decrease performance. Increasing injected secondary flow increased performance for the gas generator analysis, but was not statistically significant for the full flow analysis. Chamber pressure was found to be not statistically significant.

  12. Predictability of tick-borne encephalitis fluctuations.

    PubMed

    Zeman, P

    2017-10-01

    Tick-borne encephalitis is a serious arboviral infection with unstable dynamics and profound inter-annual fluctuations in case numbers. A dependable predictive model has been sought since the discovery of the disease. The present study demonstrates that four superimposed cycles, approximately 2.4, 3, 5.4, and 10.4 years long, can account for three-fifths of the variation in the disease fluctuations over central Europe. Using harmonic regression, these cycles can be projected into the future, yielding forecasts of sufficient accuracy for up to 4 years ahead. For the years 2016-2018, this model predicts elevated incidence levels in most parts of the region.
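
    A minimal harmonic-regression sketch with the four quoted cycle lengths is given below: sine/cosine pairs at fixed periods are fitted by least squares and projected a few years ahead. The incidence series is invented.

```python
# Sketch of harmonic regression: fit sinusoids at fixed periods (2.4, 3, 5.4,
# 10.4 years) by ordinary least squares and project them forward. Toy data.
import numpy as np

periods = np.array([2.4, 3.0, 5.4, 10.4])        # cycle lengths in years
years = np.arange(1980, 2016)                    # training period (toy)
rng = np.random.default_rng(7)
incidence = (400 + 80 * np.sin(2 * np.pi * years / 10.4)
             + 50 * np.cos(2 * np.pi * years / 3.0)
             + rng.normal(0, 20, years.size))

def harmonics(t):
    cols = [np.ones_like(t, dtype=float)]
    for p in periods:
        cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(harmonics(years), incidence, rcond=None)

future = np.arange(2016, 2020)                   # forecast up to 4 years ahead
forecast = harmonics(future) @ coef
print(dict(zip(future.tolist(), forecast.round(0))))
```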

  13. Using decision tree analysis to identify risk factors for relapse to smoking

    PubMed Central

    Piper, Megan E.; Loh, Wei-Yin; Smith, Stevens S.; Japuntich, Sandra J.; Baker, Timothy B.

    2010-01-01

    This research used classification tree analysis and logistic regression models to identify risk factors related to short- and long-term abstinence. Baseline and cessation outcome data from two smoking cessation trials, conducted from 2001 to 2002, in two Midwestern urban areas, were analyzed. There were 928 participants (53.1% women, 81.8% white) with complete data. Both analyses suggest that relapse risk is produced by interactions of risk factors and that early and late cessation outcomes reflect different vulnerability factors. The results illustrate the dynamic nature of relapse risk and suggest the importance of efficient modeling of interactions in relapse prediction. PMID:20397871

  14. Multi-Site and Multi-Variables Statistical Downscaling Technique in the Monsoon Dominated Region of Pakistan

    NASA Astrophysics Data System (ADS)

    Khan, Firdos; Pilz, Jürgen

    2016-04-01

    South Asia is under the severe impacts of changing climate and global warming. The last two decades showed that climate change or global warming is happening, and the first decade of the 21st century is considered the warmest decade ever recorded over Pakistan, with temperatures reaching 53 °C in 2010. Consequently, the spatio-temporal distribution and intensity of precipitation are badly affected, causing floods, cyclones and hurricanes in the region, which further have impacts on agriculture, water, health etc. To cope with the situation, it is important to conduct impact assessment studies and take adaptation and mitigation remedies. For impact assessment studies, we need climate variables at higher resolution. Downscaling techniques are used to produce climate variables at higher resolution; these techniques are broadly divided into two types, statistical downscaling and dynamical downscaling. The target location of this study is the monsoon-dominated region of Pakistan. One reason for choosing this area is that the contribution of monsoon rains in this area is more than 80% of the total rainfall. This study evaluates a statistical downscaling technique which can then be used for downscaling climatic variables. Two statistical techniques, i.e. quantile regression and copula modeling, are combined in order to produce realistic results for climate variables in the area under study. To reduce the dimension of input data and deal with multicollinearity problems, empirical orthogonal functions will be used. Advantages of this new method are: (1) it is more robust to outliers as compared to ordinary least squares estimates and other estimation methods based on central tendency and dispersion measures; (2) it preserves the dependence among variables and among sites; and (3) it can be used to combine different types of distributions. This is important in our case because we are dealing with climatic variables having different distributions over different meteorological stations. The proposed model will be calibrated using the NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) predictors for the period 1960-1990 and validated for 1990-2000. To investigate the efficiency of the proposed model, it will be compared with the multivariate multiple regression model and with dynamical downscaling climate models by using different climate indices that describe the frequency, intensity and duration of the variables of interest. KEY WORDS: Climate change, Copula, Monsoon, Quantile regression, Spatio-temporal distribution.
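
    The quantile-regression building block can be sketched with statsmodels as follows; the predictor (e.g. an EOF score) and rainfall values are simulated, and the copula and EOF steps of the proposed method are not shown.

```python
# Sketch of quantile regression: rainfall regressed on a large-scale predictor
# at several quantiles. Predictor and rainfall values are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 400
df = pd.DataFrame({"predictor": rng.normal(size=n)})   # e.g. an EOF score
df["rainfall"] = 10 + 3 * df["predictor"] + rng.gamma(2.0, 2.0, n)

for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("rainfall ~ predictor", df).fit(q=q)
    print(f"q={q}: slope={fit.params['predictor']:.2f}")
```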

  15. Accounting for individual differences and timing of events: estimating the effect of treatment on criminal convictions in heroin users.

    PubMed

    Røislien, Jo; Clausen, Thomas; Gran, Jon Michael; Bukten, Anne

    2014-05-17

    The reduction of crime is an important outcome of opioid maintenance treatment (OMT). Criminal intensity and treatment regimes vary among OMT patients, but this is rarely adjusted for in statistical analyses, which tend to focus on cohort incidence rates and rate ratios. The purpose of this work was to estimate the relationship between treatment and criminal convictions among OMT patients, adjusting for individual covariate information and timing of events, fitting time-to-event regression models of increasing complexity. National criminal records were cross-linked with treatment data on 3221 patients starting OMT in Norway 1997-2003. In addition to calculating cohort incidence rates, criminal convictions were modelled as a recurrent-event dependent variable, and treatment as a time-dependent covariate, in Cox proportional hazards, Aalen's additive hazards, and semi-parametric additive hazards regression models. Both fixed and dynamic covariates were included. During OMT, the number of days with criminal convictions for the cohort as a whole was 61% lower than when not in treatment. OMT was associated with a reduced number of days with criminal convictions in all time-to-event regression models, but the hazard ratio (95% CI) was strongly attenuated when adjusting for covariates; from 0.40 (0.35, 0.45) in a univariate model to 0.79 (0.72, 0.87) in a fully adjusted model. The hazard was lower for females and decreasing with older age, while increasing with high numbers of criminal convictions prior to application to OMT (all p < 0.001). The strongest predictors were level of criminal activity prior to entering into OMT, and having a recent criminal conviction (both p < 0.001). The effects of several predictors were significantly time-varying, diminishing over time. Analyzing complex observational data with regard to fixed factors only overlooks important temporal information, and naïve cohort level incidence rates might result in biased estimates of the effect of interventions. Applying time-to-event regression models, properly adjusting for individual covariate information and timing of various events, allows for more precise and reliable effect estimates, as well as painting a more nuanced picture that can aid health care professionals and policy makers.
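
    A sketch of a Cox model with treatment as a time-dependent covariate is shown below using the lifelines package (an assumed stand-in for whatever software the authors used); follow-up is split into pre- and on-treatment intervals in a long-format table, and all data are simulated.

```python
# Sketch of a time-varying Cox regression: treatment ('in_omt') switches on
# partway through follow-up, so each subject contributes one row per interval.
# Subjects, times, and covariates are simulated for illustration only.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(9)
rows = []
for pid in range(80):
    omt_start = rng.uniform(50, 250)          # day treatment begins
    event_time = rng.exponential(200)         # day of (possible) conviction
    end = min(event_time, 365.0)              # administrative censoring at 1 year
    event = int(event_time <= 365.0)
    age = int(rng.integers(20, 50))
    if end <= omt_start:                      # event/censoring before treatment
        rows.append((pid, 0.0, end, 0, age, event))
    else:                                     # split follow-up at treatment start
        rows.append((pid, 0.0, omt_start, 0, age, 0))
        rows.append((pid, omt_start, end, 1, age, event))

long_df = pd.DataFrame(rows, columns=["id", "start", "stop", "in_omt", "age", "event"])

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()      # hazard ratio for 'in_omt' adjusted for age
```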

  16. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    NASA Astrophysics Data System (ADS)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during the pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters that trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
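
    The basis-set-expansion-plus-Sobol' idea can be sketched as follows, assuming the SALib and scikit-learn packages and a cheap toy time-series model in place of the landslide simulator; because the toy model is cheap to run, the meta-modelling step is omitted here and the Sobol' indices are computed directly on the first principal-component score.

```python
# Sketch: reduce functional model output with PCA, then compute Sobol indices
# for the dominant mode of temporal variation. The "landslide" model, its
# parameters, and their ranges are all invented placeholders.
import numpy as np
from sklearn.decomposition import PCA
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["friction", "stiffness", "permeability"],   # hypothetical parameters
    "bounds": [[0.1, 0.6], [1.0, 5.0], [0.01, 0.1]],
}

t = np.linspace(0, 10, 200)          # time axis of the displacement series

def toy_landslide(x):
    friction, stiffness, permeability = x
    return ((1 - friction) * np.log1p(t)
            + 0.1 * np.sin(stiffness * t)
            + 5 * permeability * t)

X = saltelli.sample(problem, 256)              # Saltelli design for Sobol analysis
Y = np.array([toy_landslide(x) for x in X])    # shape (n_runs, n_timesteps)

pca = PCA(n_components=2).fit(Y)
scores = pca.transform(Y)                      # dominant modes of temporal variation
Si = sobol.analyze(problem, scores[:, 0])      # Sobol indices for the first mode
print(dict(zip(problem["names"], Si["S1"].round(2))))
```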

  17. A structured analysis of uncertainty surrounding modeled impacts of groundwater-extraction rules

    NASA Astrophysics Data System (ADS)

    Guillaume, Joseph H. A.; Qureshi, M. Ejaz; Jakeman, Anthony J.

    2012-08-01

    Integrating economic and groundwater models for groundwater management can help improve understanding of the trade-offs between conflicting socioeconomic and biophysical objectives. However, there is significant uncertainty in most strategic decision-making situations, including in the models constructed to represent them. If not addressed, this uncertainty may be used to challenge the legitimacy of the models and decisions made using them. In this context, a preliminary uncertainty analysis was conducted of a dynamic coupled economic-groundwater model aimed at assessing groundwater extraction rules. The analysis demonstrates how a variety of uncertainties in such a model can be addressed. A number of methods are used including propagation of scenarios and bounds on parameters, multiple models, block bootstrap time-series sampling and robust linear regression for model calibration. These methods are described within the context of a theoretical uncertainty management framework, using a set of fundamental uncertainty management tasks and an uncertainty typology.

  18. Quasi steady-state aerodynamic model development for race vehicle simulations

    NASA Astrophysics Data System (ADS)

    Mohrfeld-Halterman, J. A.; Uddin, M.

    2016-01-01

    Presented in this paper is a procedure to develop a high fidelity quasi steady-state aerodynamic model for use in race car vehicle dynamic simulations. Developed to fit quasi steady-state wind tunnel data, the aerodynamic model is regressed against three independent variables: front ground clearance, rear ride height, and yaw angle. An initial dual range model is presented and then further refined to reduce the model complexity while maintaining a high level of predictive accuracy. The model complexity reduction decreases the required amount of wind tunnel data thereby reducing wind tunnel testing time and cost. The quasi steady-state aerodynamic model for the pitch moment degree of freedom is systematically developed in this paper. This same procedure can be extended to the other five aerodynamic degrees of freedom to develop a complete six degree of freedom quasi steady-state aerodynamic model for any vehicle.

  19. A Semiparametric Approach for Composite Functional Mapping of Dynamic Quantitative Traits

    PubMed Central

    Yang, Runqing; Gao, Huijiang; Wang, Xin; Zhang, Ji; Zeng, Zhao-Bang; Wu, Rongling

    2007-01-01

    Functional mapping has emerged as a powerful tool for mapping quantitative trait loci (QTL) that control developmental patterns of complex dynamic traits. Original functional mapping has been constructed within the context of simple interval mapping, without consideration of separate multiple linked QTL for a dynamic trait. In this article, we present a statistical framework for mapping QTL that affect dynamic traits by capitalizing on the strengths of functional mapping and composite interval mapping. Within this so-called composite functional-mapping framework, functional mapping models the time-dependent genetic effects of a QTL tested within a marker interval using a biologically meaningful parametric function, whereas composite interval mapping models the time-dependent genetic effects of the markers outside the test interval to control the genome background using a flexible nonparametric approach based on Legendre polynomials. Such a semiparametric framework was formulated by a maximum-likelihood model and implemented with the EM algorithm, allowing for the estimation and the test of the mathematical parameters that define the QTL effects and the regression coefficients of the Legendre polynomials that describe the marker effects. Simulation studies were performed to investigate the statistical behavior of composite functional mapping and compare its advantage in separating multiple linked QTL as compared to functional mapping. We used the new mapping approach to analyze a genetic mapping example in rice, leading to the identification of multiple QTL, some of which are linked on the same chromosome, that control the developmental trajectory of leaf age. PMID:17947431
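
    The nonparametric piece described above, time-dependent effects approximated by a low-order Legendre polynomial fitted by least squares, can be sketched as follows; the trait trajectory is simulated rather than taken from the rice data.

```python
# Sketch of a Legendre-polynomial regression of a developmental trajectory:
# map measurement times to [-1, 1] and fit P0..P3 coefficients by least squares.
# The trait values are simulated for illustration only.
import numpy as np
from numpy.polynomial import legendre

ages = np.linspace(0, 1, 12)                   # measurement times scaled to [0, 1]
rng = np.random.default_rng(10)
trait = 2 + 1.5 * ages - 0.8 * ages ** 2 + rng.normal(0, 0.05, ages.size)

x = 2 * ages - 1                               # natural domain of Legendre polynomials
coef = legendre.legfit(x, trait, deg=3)        # regression coefficients of P0..P3
fitted = legendre.legval(x, coef)
print("Legendre coefficients:", coef.round(3))
print("max abs residual:", round(float(np.abs(trait - fitted).max()), 3))
```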

  20. Measuring and modeling stemflow by two xerophytic shrubs in the Loess Plateau: The role of dynamic canopy structure

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Li, X.; Li, W.; Shi, F.; Wu, H.; WU, X.; Pei, T.

    2016-12-01

    Stemflow plays an important role in hydrological processes in dryland shrubs, but it still remains poorly understood, especially regarding the effects of dynamic canopy structure on stemflow. This study aimed to measure and model the stemflow of two dominant xerophytic shrub (Hippophae rhamnoides and Spiraea pubescens) communities and to identify the key controlling factors of stemflow yield. We quantified and scaled up stemflow from branches and leaves to the stand level. Correlations and stepwise regression analysis between stemflow and meteorological and biological factors indicated that at branch level, the rainfall amount and the branch diameter were the best variables for modelling and predicting stemflow for Hippophae rhamnoides, while the rainfall amount and the aboveground biomass were the best variables for modelling and predicting stemflow for Spiraea pubescens. At stand level, the stemflow yield is mostly affected by rainfall amount and leaf area index for both shrubs. The stemflow fluxes account for 3.5±0.9% of incident rainfall for the H. rhamnoides community and 9.4±2.1% for the S. pubescens community, respectively. The differences in percentages of stemflow between the two shrub communities were attributed to differences in canopy structures and water storage capacities. This evaluation of the effects of canopy structure dynamics on stemflow, and of the developed model, provided a better understanding of the effect of the canopy structure on the water cycles in dryland shrub ecosystems.

  1. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criteria value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model.

  2. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criteria value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611

  3. Analysis of year-to-year ozone variation over the subtropical western Pacific region using EP_TOMS data and CCSR/NIES nudging CTM

    NASA Astrophysics Data System (ADS)

    Zhou, L. B.; Akiyoshi, H.; Kawahira, K.

    2003-10-01

    The year-to-year ozone variation over the subtropical western Pacific region is studied, especially the ozone lows in the 1996/1997, 1998/1999, and 2001/2002 winters, using the Earth Probe Total Ozone Mapping Spectrometer (EP_TOMS) ozone data from August 1996 to July 2002. Regression analyses show that dynamical signals, such as the quasi-biennial oscillation, play an important role in determining total ozone variation. A nudging chemical transport model (CTM) is used to simulate the year-to-year ozone variation and explain the mechanism for producing ozone lows in a three-dimensional distribution of ozone. The CTM was developed using the Center for Climate System Research/National Institute for Environmental Studies (CCSR/NIES) atmospheric general circulation model and introducing a nudging process for temperature and horizontal wind velocity. The year-to-year ozone variation, especially the winter ozone low, is well simulated by the model excluding heterogeneous reaction processes between 45°S and 45°N latitude. Results show that the year-to-year ozone variation is mainly controlled by dynamical transport processes.

  4. Dynamic Dimensionality Selection for Bayesian Classifier Ensembles

    DTIC Science & Technology

    2015-03-19

    ...discriminative learning of weights in an otherwise generatively learned naive Bayes classifier. WANBIA-C is very competitive with Logistic Regression but much more... Keywords: classifier, generative learning, discriminative learning, Naïve Bayes, feature selection, logistic regression, higher-order attribute independence.

  5. Aerial robot intelligent control method based on back-stepping

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Xue, Qian

    2018-05-01

    The aerial robot is characterized by strong nonlinearity, high coupling, and parameter uncertainty; a self-adaptive back-stepping control method based on a neural network is therefore proposed in this paper. The uncertain part of the aerial robot model is compensated online by a Cerebellum Model Articulation Controller neural network, and robust control terms are designed to overcome the uncertainty error of the system during online learning. At the same time, a particle swarm algorithm is used to optimize and fix the parameters so as to improve the dynamic performance, and the control law is obtained by the recursion of back-stepping regression. Simulation results show that the designed control law has the desired attitude tracking performance and good robustness in the case of uncertainties and large errors in the model parameters.

  6. Concepts for a theoretical and experimental study of lifting rotor random loads and vibrations (further experiments with progressing/regressing rotor flapping modes), Phase 7-C

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Crews, S. T.

    1973-01-01

    The experiments with progressing/regressing forced rotor flapping modes have been extended in several directions and the data processing method has been considerably refined. The 16 inch hingeless 2-bladed rotor model was equipped with a new set of high precision blades which removed previously encountered tracking difficulties at high advance ratio, so that tests up to a 0.8 rotor advance ratio could be conducted. In addition to data at a blade natural flapping frequency of 1.20, data at a 1.10 flapping frequency were obtained. Outside the wind tunnel, tests with a ground plate located at different distances below the rotor were conducted while recording the dynamic downflow at a station 0.2R below the rotor plane with a hot wire anemometer.

  7. Water Quality and Herbivory Interactively Drive Coral-Reef Recovery Patterns in American Samoa

    PubMed Central

    Houk, Peter; Musburger, Craig; Wiles, Phil

    2010-01-01

    Background Compared with a wealth of information regarding coral-reef recovery patterns following major disturbances, less insight exists to explain the cause(s) of spatial variation in the recovery process. Methodology/Principal Findings This study quantifies the influence of herbivory and water quality upon coral reef assemblages through space and time in Tutuila, American Samoa, a Pacific high island. Widespread declines in dominant corals (Acropora and Montipora) resulted from cyclone Heta at the end of 2003, shortly after the study began. Four sites that initially had similar coral reef assemblages but differential temporal dynamics four years following the disturbance event were classified by standardized measures of ‘recovery status’, defined by rates of change in ecological measures that are known to be sensitive to localized stressors. Status was best predicted, interactively, by water quality and herbivory. Expanding upon temporal trends, this study examined if similar dependencies existed through space; building multiple regression models to identify linkages between similar status measures and local stressors for 17 localities around Tutuila. The results highlighted consistent, interactive interdependencies for coral reef assemblages residing upon two unique geological reef types. Finally, the predictive regression models produced at the island scale were graphically interpreted with respect to hypothesized site-specific recovery thresholds. Conclusions/Significance Cumulatively, our study purports that moving away from describing relatively well-known patterns behind recovery, and focusing upon understanding causes, improves our foundation to predict future ecological dynamics, and thus improves coral reef management. PMID:21085715

  8. Water quality and herbivory interactively drive coral-reef recovery patterns in American Samoa.

    PubMed

    Houk, Peter; Musburger, Craig; Wiles, Phil

    2010-11-10

    Compared with a wealth of information regarding coral-reef recovery patterns following major disturbances, less insight exists to explain the cause(s) of spatial variation in the recovery process. This study quantifies the influence of herbivory and water quality upon coral reef assemblages through space and time in Tutuila, American Samoa, a Pacific high island. Widespread declines in dominant corals (Acropora and Montipora) resulted from cyclone Heta at the end of 2003, shortly after the study began. Four sites that initially had similar coral reef assemblages but differential temporal dynamics four years following the disturbance event were classified by standardized measures of 'recovery status', defined by rates of change in ecological measures that are known to be sensitive to localized stressors. Status was best predicted, interactively, by water quality and herbivory. Expanding upon temporal trends, this study examined if similar dependencies existed through space; building multiple regression models to identify linkages between similar status measures and local stressors for 17 localities around Tutuila. The results highlighted consistent, interactive interdependencies for coral reef assemblages residing upon two unique geological reef types. Finally, the predictive regression models produced at the island scale were graphically interpreted with respect to hypothesized site-specific recovery thresholds. Cumulatively, our study purports that moving away from describing relatively well-known patterns behind recovery, and focusing upon understanding causes, improves our foundation to predict future ecological dynamics, and thus improves coral reef management.

  9. Visualization of Monocytic Cells in Regressing Atherosclerotic Plaques by Intravital 2-Photon and Positron Emission Tomography-Based Imaging-Brief Report.

    PubMed

    Li, Wenjun; Luehmann, Hannah P; Hsiao, Hsi-Min; Tanaka, Satona; Higashikubo, Ryuji; Gauthier, Jason M; Sultan, Deborah; Lavine, Kory J; Brody, Steven L; Gelman, Andrew E; Gropler, Robert J; Liu, Yongjian; Kreisel, Daniel

    2018-05-01

    Aortic arch transplants have advanced our understanding of processes that contribute to progression and regression of atherosclerotic plaques. To characterize the dynamic behavior of monocytes and macrophages in atherosclerotic plaques over time, we developed a new model of cervical aortic arch transplantation in mice that is amenable to intravital imaging. Vascularized aortic arch grafts were transplanted heterotopically to the right carotid arteries of recipient mice using microsurgical suture techniques. To image immune cells in atherosclerotic lesions during regression, plaque-bearing aortic arch grafts from B6 ApoE-deficient donors were transplanted into syngeneic CX3CR1-GFP reporter mice. Grafts were evaluated histologically, and monocytic cells in atherosclerotic plaques in ApoE-deficient grafts were imaged intravitally by 2-photon microscopy in serial fashion. In complementary experiments, CCR2+ cells in plaques were serially imaged by positron emission tomography using specific molecular probes. Plaques in ApoE-deficient grafts underwent regression after transplantation into normolipidemic hosts. Intravital imaging revealed clusters of largely immotile CX3CR1+ monocytes/macrophages in regressing plaques that had been recruited from the periphery. We observed a progressive decrease in CX3CR1+ monocytic cells in regressing plaques and a decrease in CCR2+ positron emission tomography signal during 4 months. Cervical transplantation of atherosclerotic mouse aortic arches represents a novel experimental tool to investigate cellular mechanisms that contribute to the remodeling of atherosclerotic plaques. © 2018 American Heart Association, Inc.

  10. Equilibrium Climate Sensitivity Obtained From Multimillennial Runs of Two GFDL Climate Models

    NASA Astrophysics Data System (ADS)

    Paynter, D.; Frölicher, T. L.; Horowitz, L. W.; Silvers, L. G.

    2018-02-01

    Equilibrium climate sensitivity (ECS), defined as the long-term change in global mean surface air temperature in response to doubling atmospheric CO2, is usually computed from short atmospheric simulations over a mixed layer ocean, or inferred using a linear regression over a short time period of adjustment. We report the actual ECS from multimillennial simulations of two Geophysical Fluid Dynamics Laboratory (GFDL) general circulation models (GCMs), ESM2M and CM3, of 3.3 K and 4.8 K, respectively. Both values are 1 K higher than estimates for the same models reported in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change obtained by regressing the Earth's energy imbalance against temperature. This underestimate is mainly due to changes in the climate feedback parameter (-α) within the first century after atmospheric CO2 has stabilized. For both GCMs it is possible to estimate ECS with linear regression to within 0.3 K by increasing CO2 at 1% per year to doubling and using years 51-350 after CO2 is constant. We show that changes in -α differ between the two GCMs and are strongly tied to the changes in both vertical velocity at 500 hPa (ω500) and estimated inversion strength that the GCMs experience during the progression toward the equilibrium. This suggests that while cloud physics parametrizations are important for determining the strength of -α, the substantially different atmospheric state resulting from a changed sea surface temperature pattern may be of equal importance.

  11. Dynamic and Regression Modeling of Ocean Variability in the Tide-Gauge Record at Seasonal and Longer Periods

    NASA Technical Reports Server (NTRS)

    Hill, Emma M.; Ponte, Rui M.; Davis, James L.

    2007-01-01

    Comparison of monthly mean tide-gauge time series to corresponding model time series based on a static inverted barometer (IB) for pressure-driven fluctuations and an ocean general circulation model (OM) reveals that the combined model successfully reproduces seasonal and interannual changes in relative sea level at many stations. Removal of the OM and IB from the tide-gauge record produces residual time series with a mean global variance reduction of 53%. The OM is mis-scaled for certain regions, and 68% of the residual time series contain significant seasonal variability after removal of the OM and IB from the tide-gauge data. Including OM admittance parameters and seasonal coefficients in a regression model for each station, with IB also removed, produces residual time series with mean global variance reduction of 71%. Examination of the regional improvement in variance caused by scaling the OM, including seasonal terms, or both, indicates weakness in the model at predicting sea-level variation for constricted ocean regions. The model is particularly effective at reproducing sea-level variation for stations in North America, Europe, and Japan. The RMS residual for many stations in these areas is 25-35 mm. The production of "cleaner" tide-gauge time series, with oceanographic variability removed, is important for future analysis of nonsecular and regionally differing sea-level variations. Understanding the ocean model's strengths and weaknesses will allow for future improvements of the model.

  12. Uncertainty quantification analysis of the dynamics of an electrostatically actuated microelectromechanical switch model

    NASA Astrophysics Data System (ADS)

    Snow, Michael G.; Bajaj, Anil K.

    2015-08-01

    This work presents an uncertainty quantification (UQ) analysis of a comprehensive model for an electrostatically actuated microelectromechanical system (MEMS) switch. The goal is to elucidate the effects of parameter variations on certain key performance characteristics of the switch. A sufficiently detailed model of the electrostatically actuated switch in the basic configuration of a clamped-clamped beam is developed. This multi-physics model accounts for various physical effects, including the electrostatic fringing field, finite length of electrodes, squeeze film damping, and contact between the beam and the dielectric layer. The performance characteristics of immediate interest are the static and dynamic pull-in voltages for the switch. Numerical approaches for evaluating these characteristics are developed and described. Using Latin Hypercube Sampling and other sampling methods, the model is evaluated to find these performance characteristics when variability in the model's geometric and physical parameters is specified. Response surfaces of these results are constructed via a Multivariate Adaptive Regression Splines (MARS) technique. Using a Direct Simulation Monte Carlo (DSMC) technique on these response surfaces gives smooth probability density functions (PDFs) of the output characteristics when input probability characteristics are specified. The relative variation in the two pull-in voltages due to each of the input parameters is used to determine the critical parameters.

  13. Parametric regression model for survival data: Weibull regression model as an example

    PubMed Central

    2016-01-01

    The Weibull regression model is one of the most popular forms of parametric regression model in that it provides an estimate of the baseline hazard function, as well as coefficients for covariates. Because of technical difficulties, the Weibull regression model is seldom used in the medical literature compared to the semi-parametric proportional hazard model. To make clinical investigators familiar with the Weibull regression model, this article introduces some basic knowledge on the model and then illustrates how to fit it with R software. The SurvRegCensCov package is useful for converting estimated coefficients to clinically relevant statistics such as the hazard ratio (HR) and event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by a categorical variable. The eha package provides an alternative way to fit the Weibull regression model, and its check.dist() function helps to assess goodness-of-fit. Variable selection is based on the importance of a covariate, which can be tested using the anova() function. Alternatively, backward elimination starting from a full model is an efficient way for model development. Visualization of the Weibull regression model after model development is worthwhile in that it provides another way to report the findings. PMID:28149846
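
    The workflow above is described for R; a minimal Python sketch of the same idea, assuming the lifelines and pandas packages are available, fits a Weibull accelerated failure time model to synthetic survival data. All variable names and values are illustrative, not taken from the article.

    ```python
    # Minimal sketch of Weibull survival regression in Python (the abstract's
    # workflow uses R); assumes the `lifelines` and `pandas` packages.
    import numpy as np
    import pandas as pd
    from lifelines import WeibullAFTFitter

    rng = np.random.default_rng(0)
    n = 200
    age = rng.normal(60, 10, n)                      # illustrative covariate
    treat = rng.integers(0, 2, n)                    # illustrative binary covariate
    # Synthetic Weibull event times whose scale depends on the covariates.
    scale = np.exp(3.0 - 0.01 * age + 0.3 * treat)
    time = scale * rng.weibull(1.5, n)
    censor = rng.uniform(1, 40, n)
    df = pd.DataFrame({
        "T": np.minimum(time, censor),               # observed time
        "E": (time <= censor).astype(int),           # 1 = event, 0 = censored
        "age": age,
        "treat": treat,
    })

    aft = WeibullAFTFitter()
    aft.fit(df, duration_col="T", event_col="E")     # shape, scale and covariate effects
    aft.print_summary()                              # exp(coef) roughly plays the role of an ETR
    ```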

  14. Prediction of fishing effort distributions using boosted regression trees.

    PubMed

    Soykan, Candan U; Eguchi, Tomoharu; Kohin, Suzanne; Dewar, Heidi

    2014-01-01

    Concerns about bycatch of protected species have become a dominant factor shaping fisheries management. However, efforts to mitigate bycatch are often hindered by a lack of data on the distributions of fishing effort and protected species. One approach to overcoming this problem has been to overlay the distribution of past fishing effort with known locations of protected species, often obtained through satellite telemetry and occurrence data, to identify potential bycatch hotspots. This approach, however, generates static bycatch risk maps, calling into question their ability to forecast into the future, particularly when dealing with spatiotemporally dynamic fisheries and highly migratory bycatch species. In this study, we use boosted regression trees to model the spatiotemporal distribution of fishing effort for two distinct fisheries in the North Pacific Ocean, the albacore (Thunnus alalunga) troll fishery and the California drift gillnet fishery that targets swordfish (Xiphias gladius). Our results suggest that it is possible to accurately predict fishing effort using < 10 readily available predictor variables (cross-validated correlations between model predictions and observed data ~0.6). Although the two fisheries are quite different in their gears and fishing areas, their respective models had high predictive ability, even when input data sets were restricted to a fraction of the full time series. The implications for conservation and management are encouraging: Across a range of target species, fishing methods, and spatial scales, even a relatively short time series of fisheries data may suffice to accurately predict the location of fishing effort into the future. In combination with species distribution modeling of bycatch species, this approach holds promise as a mitigation tool when observer data are limited. Even in data-rich regions, modeling fishing effort and bycatch may provide more accurate estimates of bycatch risk than partial observer coverage for fisheries and bycatch species that are heavily influenced by dynamic oceanographic conditions.
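
    A hedged sketch of the kind of boosted-regression-tree fit described above, using scikit-learn's GradientBoostingRegressor on synthetic data as a stand-in for the authors' implementation; the predictors and the response are placeholders.

    ```python
    # Hedged sketch: predicting fishing effort from a handful of environmental
    # predictors with a boosted regression tree. Variable names are illustrative.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    n = 1000
    X = np.column_stack([
        rng.uniform(15, 30, n),      # sea surface temperature
        rng.uniform(-160, -115, n),  # longitude
        rng.uniform(20, 45, n),      # latitude
        rng.uniform(0, 2, n),        # chlorophyll-a
        rng.integers(1, 13, n),      # month
    ])
    # Synthetic "effort" with a nonlinear response to the predictors.
    y = np.exp(-((X[:, 0] - 22) ** 2) / 10) * (X[:, 3] + 0.5) + 0.1 * rng.normal(size=n)

    brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01,
                                    max_depth=3, subsample=0.7, random_state=0)
    pred = cross_val_predict(brt, X, y, cv=5)             # cross-validated predictions
    print("CV correlation:", np.corrcoef(pred, y)[0, 1])  # cf. the ~0.6 reported above
    ```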

  15. Calibration of the maximum carboxylation velocity (vcmax) for the Caatinga for use in dynamic global vegetation models (DGVMs)

    NASA Astrophysics Data System (ADS)

    Rezende, L. C.; Arenque, B.; von Randow, C.; Moura, M. S.; Aidar, S. D.; Buckeridge, M. S.; Menezes, R.; Souza, L. S.; Ometto, J. P.

    2013-12-01

    The Caatinga biome in the semi-arid region of northeastern Brazil is extremely important due to its biodiversity and endemism. This biome, which is under high anthropogenic influences, presents high levels of environmental degradation, land use being among the main causes of such degradation. The simulations of land cover and the vegetation dynamic under different climate scenarios are important features for prediction of environmental risks and determination of sustainable pathways for the planet in the future. Modeling of the vegetation can be performed by use of dynamic global vegetation models (DGVMs). The DGVMs simulate the surface processes (e.g. transfer of energy, water, CO2 and momentum); plant physiology (e.g. photosynthesis, stomatal conductance); phenology; gross and net primary productivity, respiration, plant species classified by functional traits; competition for light, water and nutrients, soil characteristics and processes (e.g. nutrients, heterotrophic respiration). Currently, most of the parameters used in DGVMs are static pre-defined values, and the lack of observational information to aid choosing the most adequate values for these parameters is particularly critical for the semi-arid regions in the world. Through historical meteorological data and measurements of carbon assimilation we aim to calibrate the maximum carboxylation velocity (Vcmax) for the native species Poincianella microphylla, abundant in the Caatinga region. The field data (collected at Lat: 9° 2' S, Lon: 40° 19' W) displayed two contrasting meteorological conditions, with precipitation of 16 mm and 104 mm prior to the sampling campaigns (April 9-13, 2012 and February 4-8, 2013, respectively). Calibration (obtaining values of Vcmax more suitable for the vegetation of the Caatinga) has been performed through a pattern-recognition algorithm, the Classification And Regression Tree (CART), and calculation of the vapor pressure deficit (VPD), which was used as the attribute for discriminating the data. CART can be utilized for classification or regression, being used in the context of this work for non-linear regression. Our results show that the CART algorithm correctly classified data according to the two contrasting periods (i.e. correctly distinguished assimilation data measured during drier or rainy periods), and suggest average Vcmax values of 14.2 μmol CO2 m⁻² s⁻¹ for the drier period and of 102.5 μmol CO2 m⁻² s⁻¹ for the rainy period. Comparing the values obtained in this work with values obtained through a traditional parameter optimization technique, it is possible to gauge the pros and cons of such a combination of field measurements and machine learning techniques.
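
    As a hedged illustration of the CART step described above, the sketch below fits a depth-one regression tree that splits assimilation measurements by VPD; the data and the resulting leaf means are synthetic placeholders, not the study's values.

    ```python
    # Hedged sketch of the CART step: a regression tree that uses vapour pressure
    # deficit (VPD) to separate assimilation measurements from the drier and the
    # rainy campaigns. Data and thresholds are purely illustrative.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor, export_text

    rng = np.random.default_rng(2)
    vpd_dry = rng.normal(3.5, 0.5, 40)      # kPa, drier campaign (illustrative)
    vpd_wet = rng.normal(1.2, 0.4, 40)      # kPa, rainy campaign (illustrative)
    assim_dry = rng.normal(5, 1.5, 40)      # assimilation, drier period
    assim_wet = rng.normal(20, 4, 40)       # assimilation, rainy period

    X = np.concatenate([vpd_dry, vpd_wet]).reshape(-1, 1)
    y = np.concatenate([assim_dry, assim_wet])

    tree = DecisionTreeRegressor(max_depth=1)   # a single VPD split, as in a simple CART
    tree.fit(X, y)
    print(export_text(tree, feature_names=["VPD"]))
    # The two leaf means play the role of period-specific estimates, analogous to
    # the contrasting Vcmax values reported for the drier and rainy periods.
    ```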

  16. Introduction to the use of regression models in epidemiology.

    PubMed

    Bender, Ralf

    2009-01-01

    Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.
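
    A minimal sketch of three of the model families mentioned above (linear, logistic and Poisson regression), fitted with statsmodels' formula API on a synthetic cohort; variable names and effect sizes are illustrative only.

    ```python
    # Hedged illustration of three of the regression families named above,
    # fitted on a small synthetic data set. All variables are invented.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 500
    exposure = rng.integers(0, 2, n)
    age = rng.normal(50, 12, n)
    sbp = 120 + 5 * exposure + 0.4 * age + rng.normal(0, 10, n)                        # continuous outcome
    disease = rng.binomial(1, 1 / (1 + np.exp(-(-4 + 0.8 * exposure + 0.03 * age))))   # binary outcome
    events = rng.poisson(np.exp(-2 + 0.5 * exposure + 0.01 * age))                     # count outcome
    df = pd.DataFrame({"exposure": exposure, "age": age,
                       "sbp": sbp, "disease": disease, "events": events})

    linear = smf.ols("sbp ~ exposure + age", data=df).fit()        # continuous outcome
    logistic = smf.logit("disease ~ exposure + age", data=df).fit()
    poisson = smf.poisson("events ~ exposure + age", data=df).fit()
    print(np.exp(logistic.params["exposure"]))   # adjusted odds ratio for the exposure
    print(np.exp(poisson.params["exposure"]))    # adjusted rate ratio for the exposure
    ```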

  17. HOW POPULATION STRUCTURE SHAPES NEIGHBORHOOD SEGREGATION*

    PubMed Central

    Bruch, Elizabeth E.

    2014-01-01

    This study investigates how choices about social affiliation based on one attribute can exacerbate or attenuate segregation on another correlated attribute. The specific application is the role of racial and economic factors in generating patterns of racial residential segregation. I identify three population parameters—between-group inequality, within-group inequality, and relative group size—that determine how income inequality between race groups affects racial segregation. I use data from the Panel Study of Income Dynamics to estimate models of individual-level residential mobility, and incorporate these estimates into agent-based models. I then simulate segregation dynamics under alternative assumptions about: (1) the relative size of minority groups; and (2) the degree of correlation between race and income among individuals. I find that income inequality can have offsetting effects at the high and low ends of the income distribution. I demonstrate the empirical relevance of the simulation results using fixed-effects, metro-level regressions applied to 1980-2000 U.S. Census data. PMID:25009360

  18. Cognitive predictors of balance in Parkinson's disease.

    PubMed

    Fernandes, Ângela; Mendes, Andreia; Rocha, Nuno; Tavares, João Manuel R S

    2016-06-01

    Postural instability is one of the most incapacitating symptoms of Parkinson's disease (PD) and appears to be related to cognitive deficits. This study aims to determine the cognitive factors that can predict deficits in static and dynamic balance in individuals with PD. A sociodemographic questionnaire characterized 52 individuals with PD for this work. The Trail Making Test, Rule Shift Cards Test, and Digit Span Test assessed the executive functions. The static balance was assessed using a plantar pressure platform, and dynamic balance was based on the Timed Up and Go Test. The results were statistically analysed using SPSS Statistics software through linear regression analysis. The results show that a statistically significant model based on cognitive outcomes was able to explain the variance of motor variables. Also, the explanatory value of the model tended to increase with the addition of individual and clinical variables, although the resulting model was not statistically significant. The model explained 25-29% of the variability of the Timed Up and Go Test, while for the anteroposterior displacement it was 23-34%, and for the mediolateral displacement it was 24-39%. From the findings, we conclude that cognitive performance, especially the executive functions, is a predictor of balance deficits in individuals with PD.

  19. Statistical approach to the analysis of olive long-term pollen season trends in southern Spain.

    PubMed

    García-Mozo, H; Yaezel, L; Oteros, J; Galán, C

    2014-03-01

    Analysis of long-term airborne pollen counts makes it possible not only to chart pollen-season trends but also to track changing patterns in flowering phenology. Changes in higher plant response over a long interval are considered among the most valuable bioindicators of climate change impact. Phenological-trend models can also provide information regarding crop production and pollen-allergen emission. The value of this information makes the choice of statistical analysis for time-series study essential. We analysed trends and variations in the olive flowering season over a 30-year period (1982-2011) in southern Europe (Córdoba, Spain), focussing on the annual Pollen Index (PI), Pollen Season Start (PSS), Peak Date (PD), Pollen Season End (PSE) and Pollen Season Duration (PSD). Apart from the traditional Linear Regression analysis, a Seasonal-Trend Decomposition procedure based on Loess (STL) and an ARIMA model were performed. Linear regression results indicated a trend toward delayed PSE and earlier PSS and PD, probably influenced by the rise in temperature. These changes are provoking longer flowering periods in the study area. The use of the STL technique provided a clearer picture of phenological behaviour. Data decomposition on pollination dynamics enabled the trend toward an alternate bearing cycle to be distinguished from the influence of other stochastic fluctuations. Results pointed to a rising trend in pollen production. With a view toward forecasting future phenological trends, ARIMA models were constructed to predict PSD, PSS and PI until 2016. Projections displayed a better goodness of fit than those derived from linear regression. Findings suggest that the olive reproductive cycle has changed considerably over the last 30 years due to climate change. Further conclusions are that STL improves the effectiveness of traditional linear regression in trend analysis, and ARIMA models can provide reliable trend projections for future years taking into account the internal fluctuations in time series. Copyright © 2013 Elsevier B.V. All rights reserved.
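
    A hedged sketch of the decomposition-and-projection workflow described above, applied to a synthetic annual Pollen Index series with statsmodels; the period-2 "seasonal" component stands in for the alternate-bearing cycle, and all numbers are invented.

    ```python
    # Hedged sketch: STL decomposition plus an ARIMA projection on a synthetic
    # annual Pollen Index series (1982-2011). Values are not the study's data.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import STL
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(4)
    trend = np.linspace(4000, 7000, 30)                       # rising pollen production
    biennial = 800 * (-1) ** np.arange(30)                    # alternate-bearing cycle
    pollen_index = pd.Series(trend + biennial + rng.normal(0, 300, 30),
                             index=np.arange(1982, 2012))     # annual Pollen Index

    stl = STL(pollen_index, period=2).fit()                   # trend / "seasonal" / remainder
    print(stl.trend.iloc[[0, -1]])                            # long-term trend endpoints

    arima = ARIMA(pollen_index.to_numpy(), order=(1, 1, 1)).fit()
    print(arima.forecast(steps=5))                            # projection for five more years
    ```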

  20. Impact of dynamic distribution of floc particles on flocculation effect.

    PubMed

    Nan, Jun; He, Weipeng; Song, Xinin; Li, Guibai

    2009-01-01

    Polyaluminum chloride (PAC) was used as the coagulant for kaolin suspensions in water. Online instruments, including a turbidimeter and a particle counter, were used to monitor the flocculation process. An evaluation model for demonstrating the impact on the flocculation effect was established based on multiple linear regression analysis. The index weight of channels quantitatively described how variation in the floc particle population in different size ranges causes the decrease in turbidity. The study showed that floc particles in different size ranges contributed differently to the decrease in turbidity and that the index weight of a channel could effectively indicate the degree to which the dynamic distribution of floc particles affects the flocculation effect. Therefore, the parameter may significantly benefit the development of coagulation and sedimentation techniques as well as optimal coagulant selection.
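
    The sketch below illustrates, on synthetic data, the kind of multiple linear regression behind the "index weight of channels": the turbidity decrement is regressed on the change in particle counts in several size channels, and the fitted coefficients play the role of the index weights. Channel boundaries and values are assumptions for illustration.

    ```python
    # Hedged sketch: multiple linear regression of turbidity decrement on the
    # change in floc-particle counts per size channel; coefficients act as
    # "index weights". Data are synthetic and channel ranges are illustrative.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 120
    d_counts = rng.normal(0, 1, size=(n, 4))                 # change per channel (e.g. 2-5, 5-10, 10-25, >25 um)
    true_w = np.array([0.2, 0.8, 1.5, 0.4])                  # channels contribute unequally
    d_turbidity = d_counts @ true_w + rng.normal(0, 0.3, n)  # turbidity decrement

    model = sm.OLS(d_turbidity, sm.add_constant(d_counts)).fit()
    print(model.params[1:])          # estimated index weights of the four channels
    print(model.rsquared)            # share of the turbidity change they explain
    ```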

  1. Modelling hourly dissolved oxygen concentration (DO) using dynamic evolving neural-fuzzy inference system (DENFIS)-based approach: case study of Klamath River at Miller Island Boat Ramp, OR, USA.

    PubMed

    Heddam, Salim

    2014-01-01

    In this study, we present application of an artificial intelligence (AI) technique model called dynamic evolving neural-fuzzy inference system (DENFIS) based on an evolving clustering method (ECM), for modelling dissolved oxygen concentration in a river. To demonstrate the forecasting capability of DENFIS, a one year period from 1 January 2009 to 30 December 2009, of hourly experimental water quality data collected by the United States Geological Survey (USGS Station No: 420853121505500) station at Klamath River at Miller Island Boat Ramp, OR, USA, were used for model development. Two DENFIS-based models are presented and compared. The two DENFIS systems are: (1) offline-based system named DENFIS-OF, and (2) online-based system, named DENFIS-ON. The input variables used for the two models are water pH, temperature, specific conductance, and sensor depth. The performances of the models are evaluated using root mean square errors (RMSE), mean absolute error (MAE), Willmott index of agreement (d) and correlation coefficient (CC) statistics. The lowest root mean square error and highest correlation coefficient values were obtained with the DENFIS-ON method. The results obtained with DENFIS models are compared with linear (multiple linear regression, MLR) and nonlinear (multi-layer perceptron neural networks, MLPNN) methods. This study demonstrates that DENFIS-ON investigated herein outperforms all the proposed techniques for DO modelling.
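
    A small helper for the four skill statistics named above (RMSE, MAE, Willmott's index of agreement d, and CC); the Willmott index is written in its standard form, with no claim about the exact variant used in the study.

    ```python
    # Helper computing the four skill statistics for an observed/predicted pair.
    # Willmott's index of agreement follows its standard definition.
    import numpy as np

    def skill(obs, pred):
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        err = pred - obs
        rmse = np.sqrt(np.mean(err ** 2))
        mae = np.mean(np.abs(err))
        d = 1 - np.sum(err ** 2) / np.sum(
            (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
        cc = np.corrcoef(obs, pred)[0, 1]
        return {"RMSE": rmse, "MAE": mae, "d": d, "CC": cc}

    # Illustrative call with made-up hourly DO values (mg/L):
    print(skill([8.1, 7.9, 8.4, 9.0], [8.0, 8.1, 8.3, 8.8]))
    ```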

  2. Emulation of the MBM-MEDUSA model: exploring the sea level and the basin-to-shelf transfer influence on the system dynamics

    NASA Astrophysics Data System (ADS)

    Ermakov, Ilya; Crucifix, Michel; Munhoven, Guy

    2013-04-01

    Complex climate models impose a high computational burden. However, computational limitations may be avoided by using emulators. In this work we present several approaches for dynamical emulation (also called metamodelling) of the Multi-Box Model (MBM) coupled to the Model of Early Diagenesis in the Upper Sediment A (MEDUSA) that simulates the carbon cycle of the ocean and atmosphere [1]. We consider two experiments performed on the MBM-MEDUSA that explore the Basin-to-Shelf Transfer (BST) dynamics. In both experiments the sea level is varied according to a paleo sea level reconstruction. Such experiments are interesting because the BST is an important cause of the CO2 variation and the dynamics is potentially nonlinear. The output that we are interested in is the variation of the carbon dioxide partial pressure in the atmosphere over the Pleistocene. The first experiment considers that the BST is held constant during the simulation. In the second experiment the BST is interactively adjusted according to the sea level, since the sea level is the primary control of the growth and decay of coral reefs and other shelf carbon reservoirs. The main aim of the present contribution is to create a metamodel of the MBM-MEDUSA using the Dynamic Emulation Modelling methodology [2] and compare the results obtained using linear and non-linear methods. The first step in the emulation methodology used in this work is to identify the structure of the metamodel. In order to select an optimal approach for emulation we compare the results of identification obtained by the simple linear and more complex nonlinear models. In order to identify the metamodel in the first experiment, simple linear regression with the least-squares method is sufficient to obtain a 99.9% fit between the temporal outputs of the model and the metamodel. For the second experiment the MBM's output is highly nonlinear. In this case we apply nonlinear models, such as NARX, a Hammerstein model, and an 'ad hoc' switching model. After the identification we perform the parameter mapping using spline interpolation and validate the emulator on a new set of parameters. References: [1] G. Munhoven, "Glacial-interglacial rain ratio changes: Implications for atmospheric CO2 and ocean-sediment interaction," Deep-Sea Res Pt II, vol. 54, pp. 722-746, 2007. [2] A. Castelletti et al., "A general framework for Dynamic Emulation Modelling in environmental problems," Environ Modell Softw, vol. 34, pp. 5-18, 2012.

  3. Reconstruction of Local Sea Levels at South West Pacific Islands—A Multiple Linear Regression Approach (1988-2014)

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Melet, A.; Meyssignac, B.; Ganachaud, A.; Kessler, W. S.; Singh, A.; Aucan, J.

    2018-02-01

    Rising sea levels are a critical concern in small island nations. The problem is especially serious in the western south Pacific, where the total sea level rise over the last 60 years has been up to 3 times the global average. In this study, we aim at reconstructing sea levels at selected sites in the region (Suva, Lautoka—Fiji, and Nouméa—New Caledonia) as a multilinear regression (MLR) of atmospheric and oceanic variables. We focus on sea level variability at interannual-to-interdecadal time scales, and trend over the 1988-2014 period. Local sea levels are first expressed as a sum of steric and mass changes. Then a dynamical approach is used based on wind stress curl as a proxy for the thermosteric component, as wind stress curl anomalies can modulate the thermocline depth and resultant sea levels via Rossby wave propagation. Statistically significant predictors among wind stress curl, halosteric sea level, zonal/meridional wind stress components, and sea surface temperature are used to construct a MLR model simulating local sea levels. Although we are focusing on the local scale, the global mean sea level needs to be adjusted for. Our reconstructions provide insights on key drivers of sea level variability at the selected sites, showing that while local dynamics and the global signal modulate sea level to a given extent, most of the variance is driven by regional factors. On average, the MLR model is able to reproduce 82% of the variance in island sea level, and could be used to derive local sea level projections via downscaling of climate models.

  4. Prediction of rainfall anomalies during the dry to wet transition season over the Southern Amazonia using machine learning tools

    NASA Astrophysics Data System (ADS)

    Shan, X.; Zhang, K.; Zhuang, Y.; Fu, R.; Hong, Y.

    2017-12-01

    Seasonal prediction of rainfall during the dry-to-wet transition season in austral spring (September-November) over southern Amazonia is central to improving crop planting and fire mitigation in that region. Previous studies have identified the key large-scale atmospheric dynamic and thermodynamic pre-conditions during the dry season (June-August) that influence the rainfall anomalies during the dry to wet transition season over Southern Amazonia. Based on these key pre-conditions during the dry season, we have evaluated several statistical models and developed a Neural Network based statistical prediction system to predict rainfall during the dry to wet transition for Southern Amazonia (5-15°S, 50-70°W). Multivariate Empirical Orthogonal Function (EOF) Analysis is applied to the following four fields during JJA from the ECMWF Reanalysis (ERA-Interim) spanning from year 1979 to 2015: geopotential height at 200 hPa, surface relative humidity, convective inhibition energy (CIN) index and convective available potential energy (CAPE), to filter out noise and highlight the most coherent spatial and temporal variations. The first 10 EOF modes are retained for inputs to the statistical models, accounting for at least 70% of the total variance in the predictor fields. We have tested several linear and non-linear statistical methods. While the regularized Ridge Regression and Lasso Regression can generally capture the spatial pattern and magnitude of rainfall anomalies, we found that the Neural Network performs best with an accuracy greater than 80%, as expected from the non-linear dependence of the rainfall on the large-scale atmospheric thermodynamic conditions and circulation. Further tests of various prediction skill metrics and hindcasts suggest this Neural Network prediction approach can significantly improve seasonal prediction skill relative to dynamic predictions and regression-based statistical predictions. Thus, this statistical prediction system shows potential to improve real-time seasonal rainfall predictions in the future.
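
    A hedged sketch of the prediction pipeline described above: PCA as the EOF step (retaining 10 modes), followed by ridge, lasso and a small neural-network regressor compared by cross-validation. The arrays are random placeholders shaped like 37 JJA seasons of gridded predictors.

    ```python
    # Hedged sketch of the EOF + statistical-model pipeline; data are random
    # placeholders, not ERA-Interim fields or observed rainfall.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import Ridge, Lasso
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    n_years, n_grid = 37, 500
    X_fields = rng.normal(size=(n_years, n_grid))     # stacked JJA predictor fields
    y_rain = rng.normal(size=n_years)                 # SON rainfall anomaly index

    X_eof = PCA(n_components=10).fit_transform(X_fields)   # leading 10 modes as predictors

    for name, model in [("ridge", Ridge(alpha=1.0)),
                        ("lasso", Lasso(alpha=0.1)),
                        ("neural net", MLPRegressor(hidden_layer_sizes=(8,),
                                                    max_iter=5000, random_state=0))]:
        score = cross_val_score(model, X_eof, y_rain, cv=5).mean()
        print(name, round(score, 2))
    ```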

  5. Interpretation of commonly used statistical regression models.

    PubMed

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.

  6. Quantifying Intracranial Aneurysm Wall Permeability for Risk Assessment Using Dynamic Contrast-Enhanced MRI: A Pilot Study.

    PubMed

    Vakil, P; Ansari, S A; Cantrell, C G; Eddleman, C S; Dehkordi, F H; Vranic, J; Hurley, M C; Batjer, H H; Bendok, B R; Carroll, T J

    2015-05-01

    Pathological changes in the intracranial aneurysm wall may lead to increases in its permeability; however, the clinical significance of such changes has not been explored. The purpose of this pilot study was to quantify intracranial aneurysm wall permeability (Ktrans, VL) to contrast agent as a measure of aneurysm rupture risk and compare these parameters against other established measures of rupture risk. We hypothesized Ktrans would be associated with intracranial aneurysm rupture risk as defined by various anatomic, imaging, and clinical risk factors. Twenty-seven unruptured intracranial aneurysms in 23 patients were imaged with dynamic contrast-enhanced MR imaging, and wall permeability parameters (Ktrans, VL) were measured in regions adjacent to the aneurysm wall and along the paired control MCA by 2 blinded observers. Ktrans and VL were evaluated as markers of rupture risk by comparing them against established clinical (symptomatic lesions) and anatomic (size, location, morphology, multiplicity) risk metrics. Interobserver agreement was strong as shown in regression analysis (R² > 0.84) and intraclass correlation (intraclass correlation coefficient > 0.92), indicating that Ktrans can be reliably assessed clinically. All intracranial aneurysms had a pronounced increase in wall permeability compared with the paired healthy MCA (P < .001). Regression analysis demonstrated a significant trend toward an increased Ktrans with increasing aneurysm size (P < .001). Logistic regression showed that Ktrans also predicted risk in anatomic (P = .02) and combined anatomic/clinical (P = .03) groups independent of size. We report the first evidence of dynamic contrast-enhanced MR imaging-modeled contrast permeability in intracranial aneurysms. We found that contrast agent permeability across the aneurysm wall correlated significantly with both aneurysm size and size-independent anatomic risk factors. In addition, Ktrans was a significant and size-independent predictor of morphologically and clinically defined high-risk aneurysms. © 2015 by American Journal of Neuroradiology.

  7. Investigation on extracellular polymeric substances, sludge flocs morphology, bound water release and dewatering performance of sewage sludge under pretreatment with modified phosphogypsum.

    PubMed

    Dai, Quxiu; Ma, Liping; Ren, Nanqi; Ning, Ping; Guo, Zhiying; Xie, Longgui; Gao, Haijun

    2018-06-06

    Modified phosphogypsum (MPG) was developed to improve the dewaterability of sewage sludge; the dewatering performance, properties of the treated sludge, composition and morphology distribution of EPS, and a dynamic analysis and multiple regression model of bound water release were investigated. The results showed that addition of MPG caused extracellular polymeric substances (EPS) disintegration through charge neutralization. Destruction of EPS promoted the formation of larger sludge flocs and the release of bound water into the supernatant. Simultaneously, the content of organics with molecular weights between 1000 and 7000 Da in soluble EPS (SB-EPS) increased as more EPS dissolved into the liquid phase. Besides, about 8.8 kg·kg⁻¹ DS of bound water was released after pretreatment with a 40% DS MPG dosage. Additionally, a multiple linear regression model for bound water release was established, showing that lower loosely bound EPS (LB-EPS) content and specific resistance to filtration (SRF) may improve dehydration performance, and larger sludge flocs may be beneficial for sludge dewatering. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Racism: A Symptom of the Narcissistic Personality

    PubMed Central

    Bell, Carl C.

    1980-01-01

    Despite the criticism that psychoanalytic models are not applicable to social phenomena, knowledge of the dynamics of narcissistic development aids in understanding a particular kind of racist individual. Specifically, racist attitudes may be indicative of a narcissistic personality disorder or of a regression to primitive narcissistic functioning secondary to environmental forces. The differentiation between the narcissistic racist, the stress-induced racist, and the socially misinformed racist is discussed utilizing clinical paradigms discovered in psychotherapy. Life experiences and religion are discussed as possible aids in the transformation of primary narcissism into secondary narcissism. PMID:7392083

  9. A new approach to correct the QT interval for changes in heart rate using a nonparametric regression model in beagle dogs.

    PubMed

    Watanabe, Hiroyuki; Miyazaki, Hiroyasu

    2006-01-01

    Over- and/or under-correction of QT intervals for changes in heart rate may lead to misleading conclusions and/or masking the potential of a drug to prolong the QT interval. This study examines a nonparametric regression model (Loess Smoother) to adjust the QT interval for differences in heart rate, with an improved fit over a wide range of heart rates. 240 sets of (QT, RR) observations collected from each of 8 conscious and non-treated beagle dogs were used as the materials for investigation. The fit of the nonparametric regression model to the QT-RR relationship was compared with four models (individual linear regression, common linear regression, and Bazett's and Fridericia's correction models) with reference to Akaike's Information Criterion (AIC). Residuals were visually assessed. The bias-corrected AIC of the nonparametric regression model was the best of the models examined in this study. Although the parametric models did not fit, the nonparametric regression model improved the fitting at both fast and slow heart rates. The nonparametric regression model is the more flexible method compared with the parametric method. The mathematical fit for linear regression models was unsatisfactory at both fast and slow heart rates, while the nonparametric regression model showed significant improvement at all heart rates in beagle dogs.
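
    A hedged sketch contrasting the parametric corrections named above (Bazett: QTc = QT/RR^(1/2); Fridericia: QTc = QT/RR^(1/3)) with a nonparametric lowess fit of the QT-RR relation, using statsmodels; the QT/RR values are synthetic, not beagle data.

    ```python
    # Hedged sketch: parametric QT corrections versus a lowess (Loess) fit of the
    # QT-RR relationship. Synthetic data, illustrative only.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    rr = rng.uniform(0.4, 1.2, 240)                       # RR interval (s)
    qt = 0.25 * rr ** 0.35 + rng.normal(0, 0.005, 240)    # QT interval (s), curved relation

    qtc_bazett = qt / np.sqrt(rr)          # Bazett:     QTc = QT / RR^(1/2)
    qtc_fridericia = qt / rr ** (1 / 3)    # Fridericia: QTc = QT / RR^(1/3)

    # Nonparametric fit of QT on RR; returns the smoothed curve sorted by RR.
    loess_fit = sm.nonparametric.lowess(qt, rr, frac=0.4)
    print(loess_fit[:3])                   # (RR, fitted QT) pairs at the shortest RR intervals
    ```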

  10. Network Approach to Understanding Emotion Dynamics in Relation to Childhood Trauma and Genetic Liability to Psychopathology: Replication of a Prospective Experience Sampling Analysis

    PubMed Central

    Hasmi, Laila; Drukker, Marjan; Guloksuz, Sinan; Menne-Lothmann, Claudia; Decoster, Jeroen; van Winkel, Ruud; Collip, Dina; Delespaul, Philippe; De Hert, Marc; Derom, Catherine; Thiery, Evert; Jacobs, Nele; Rutten, Bart P. F.; Wichers, Marieke; van Os, Jim

    2017-01-01

    Background: The network analysis of intensive time series data collected using the Experience Sampling Method (ESM) may provide vital information in gaining insight into the link between emotion regulation and vulnerability to psychopathology. The aim of this study was to apply the network approach to investigate whether genetic liability (GL) to psychopathology and childhood trauma (CT) are associated with the network structure of the emotions “cheerful,” “insecure,” “relaxed,” “anxious,” “irritated,” and “down”—collected using the ESM method. Methods: Using data from a population-based sample of twin pairs and siblings (704 individuals), we examined whether momentary emotion network structures differed across strata of CT and GL. GL was determined empirically using the level of psychopathology in monozygotic and dizygotic co-twins. Network models were generated using multilevel time-lagged regression analysis and were compared across three strata (low, medium, and high) of CT and GL, respectively. Permutations were utilized to calculate p values and compare regression coefficients, density, and centrality indices. Regression coefficients were presented as connections, while variables represented the nodes in the network. Results: In comparison to the low GL stratum, the high GL stratum had significantly higher overall (p = 0.018) and negative affect network density (p < 0.001). The medium GL stratum also showed a directionally similar (in-between high and low GL strata) but statistically inconclusive association with network density. In contrast to GL, the results of the CT analysis were less conclusive, with increased positive affect density (p = 0.021) and overall density (p = 0.042) in the high CT stratum compared to the medium CT stratum but not to the low CT stratum. The individual node comparisons across strata of GL and CT yielded only very few significant results, after adjusting for multiple testing. Conclusions: The present findings demonstrate that the network approach may have some value in understanding the relation between established risk factors for mental disorders (particularly GL) and the dynamic interplay between emotions. The present finding partially replicates an earlier analysis, suggesting it may be instructive to model negative emotional dynamics as a function of genetic influence. PMID:29163289

  11. Ranking Theory and Conditional Reasoning.

    PubMed

    Skovgaard-Olsen, Niels

    2016-05-01

    Ranking theory is a formal epistemology that has been developed in over 600 pages in Spohn's recent book The Laws of Belief, which aims to provide a normative account of the dynamics of beliefs that presents an alternative to current probabilistic approaches. It has long been received in the AI community, but it has not yet found application in experimental psychology. The purpose of this paper is to derive clear, quantitative predictions by exploiting a parallel between ranking theory and a statistical model called logistic regression. This approach is illustrated by the development of a model for the conditional inference task using Spohn's (2013) ranking theoretic approach to conditionals. Copyright © 2015 Cognitive Science Society, Inc.

  12. Bayesian B-spline mapping for dynamic quantitative traits.

    PubMed

    Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong

    2012-04-01

    Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expressions in the RR framework, B-splines have been proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms the interval mapping based on the maximum likelihood; (2) for the simulated dataset with complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered and (3) for the simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-spline in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.

  13. Assimilation of Biophysical Neuronal Dynamics in Neuromorphic VLSI.

    PubMed

    Wang, Jun; Breen, Daniel; Akinin, Abraham; Broccard, Frederic; Abarbanel, Henry D I; Cauwenberghs, Gert

    2017-12-01

    Representing the biophysics of neuronal dynamics and behavior offers a principled analysis-by-synthesis approach toward understanding mechanisms of nervous system functions. We report on a set of procedures assimilating and emulating neurobiological data on a neuromorphic very large scale integrated (VLSI) circuit. The analog VLSI chip, NeuroDyn, features 384 digitally programmable parameters specifying 4 generalized Hodgkin-Huxley neurons coupled through 12 conductance-based chemical synapses. The parameters also describe reversal potentials, maximal conductances, and spline-regressed kinetic functions for ion channel gating variables. In one set of experiments, we assimilated membrane potential recorded from one of the neurons on the chip to the model structure upon which NeuroDyn was designed using the known current input sequence. We arrived at the programmed parameters except for model errors due to analog imperfections in the chip fabrication. In a related set of experiments, we replicated songbird individual neuron dynamics on NeuroDyn by estimating and configuring parameters extracted using data assimilation from intracellular neural recordings. Faithful emulation of detailed biophysical neural dynamics will enable the use of NeuroDyn as a tool to probe electrical and molecular properties of functional neural circuits. Neuroscience applications include studying the relationship between molecular properties of neurons and the emergence of different spike patterns or different brain behaviors. Clinical applications include studying and predicting effects of neuromodulators or neurodegenerative diseases on ion channel kinetics.

  14. Near-infrared hyperspectral imaging of water evaporation dynamics for early detection of incipient caries.

    PubMed

    Usenik, Peter; Bürmen, Miran; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2014-10-01

    Incipient caries is characterized as demineralization of the tooth enamel reflecting in increased porosity of enamel structure. As a result, the demineralized enamel may contain increased amount of water, and exhibit different water evaporation dynamics than the sound enamel. The objective of this paper is to assess the applicability of water evaporation dynamics of sound and demineralized enamel for detection and quantification of incipient caries using near-infrared hyperspectral imaging. The time lapse of water evaporation from enamel samples with artificial and natural caries lesions of different stages was imaged by a near-infrared hyperspectral imaging system. Partial least squares regression was used to predict the water content from the acquired spectra. The water evaporation dynamics was characterized by a first order logarithmic drying model. The calculated time constants of the logarithmic drying model were used as the discriminative feature. The conducted measurements showed that demineralized enamel contains more water and exhibits significantly faster water evaporation than the sound enamel. By appropriate modelling of the water evaporation process from the enamel surface, the contrast between the sound and demineralized enamel observed in the individual near infrared spectral images can be substantially enhanced. The presented results indicate that near-infrared based prediction of water content combined with an appropriate drying model presents a strong foundation for development of novel diagnostic tools for incipient caries detection. The results of the study enhance the understanding of the water evaporation process from the sound and demineralized enamel and have significant implications for the detection of incipient caries by near-infrared hyperspectral imaging. Copyright © 2014 Elsevier Ltd. All rights reserved.
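
    A hedged sketch of fitting a drying curve and extracting a time constant, as in the discrimination step described above; the functional form MR(t) = a*exp(-k*t) + c is one common "logarithmic" thin-layer drying model and may differ from the paper's exact formulation.

    ```python
    # Hedged sketch: fit a drying curve and extract a time constant used to
    # discriminate sound from demineralized enamel. The model form is an
    # assumption; data are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def drying(t, a, k, c):
        return a * np.exp(-k * t) + c

    rng = np.random.default_rng(8)
    t = np.linspace(0, 300, 60)                                  # seconds of drying
    water = drying(t, 0.8, 0.02, 0.2) + rng.normal(0, 0.01, 60)  # predicted water content (a.u.)

    params, _ = curve_fit(drying, t, water, p0=(1.0, 0.01, 0.1))
    a_hat, k_hat, c_hat = params
    print("time constant 1/k =", 1 / k_hat, "s")   # faster evaporation -> smaller 1/k
    ```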

  15. Seasonally-Dynamic SPARROW Modeling of Nitrogen Flux Using Earth Observation Data

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Schwarz, G. E.; Brakebill, J. W.; Hoos, A. B.; Moore, R. B.; Shih, J.; Nolin, A. W.; Macauley, M.; Alexander, R. B.

    2013-12-01

    SPARROW models are widely used to identify and quantify the sources of contaminants in watersheds and to predict their flux and concentration at specified locations downstream. Conventional SPARROW models describe the average relationship between sources and stream conditions based on long-term water quality monitoring data and spatially-referenced explanatory information. But many watershed management issues stem from intra- and inter-annual changes in contaminant sources, hydrologic forcing, or other environmental conditions which cause a temporary imbalance between inputs and stream water quality. Dynamic behavior of the system relating to changes in watershed storage and processing then becomes important. In this study, we describe dynamically calibrated SPARROW models of total nitrogen flux in three sub-regional watersheds: the Potomac River Basin, Long Island Sound drainage, and coastal South Carolina drainage. The models are based on seasonal water quality and watershed input data for a total 170 monitoring stations for the period 2001 to 2008. Frequently-reported, spatially-detailed input data on the phenology of agricultural production, terrestrial vegetation growth, and snow melt are often challenging requirements of seasonal modeling of reactive nitrogen. In this NASA-funded research, we use Enhanced Vegetation Index (EVI), gross primary production and snow/ice cover data from MODIS to parameterize seasonal uptake and release of nitrogen from vegetation and snowpack. The spatial reference frames of the models are 1:100,000-scale stream networks, and the computational time steps are 0.25-year seasons. Precipitation and temperature data are from PRISM. The model formulation accounts for storage of nitrogen from nonpoint sources including fertilized cropland, pasture, urban land, and atmospheric deposition. Model calibration is by non-linear regression. Once calibrated, model source terms based on previous season export allow for recursive dynamic simulation of stream flux: gradual increases or decreases in export occur as source supply rates and hydrologic forcing change. Based on an assumption that removal of nitrogen from watershed storage to stream channels and to 'permanent' sinks (e.g. the atmosphere and deep groundwater) occur as parallel first-order processes, the models can be used to estimate the approximate residence times of nonpoint source nitrogen in the watersheds.
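
    The sketch below is a toy illustration (not SPARROW itself) of the assumption stated above, namely that nitrogen leaves watershed storage to the stream network and to permanent sinks as parallel first-order processes, simulated recursively season by season with made-up rate constants.

    ```python
    # Toy illustration of parallel first-order removal from watershed storage,
    # simulated recursively per season. Rate constants and inputs are invented.
    k_stream, k_sink = 0.15, 0.05           # per-season first-order rate constants
    storage = 100.0                         # initial stored nitrogen (arbitrary units)
    seasonal_inputs = [30, 10, 5, 20] * 8   # 8 years of seasonal nonpoint inputs

    for season, inflow in enumerate(seasonal_inputs):
        stream_flux = k_stream * storage    # delivered to the stream network
        sink_flux = k_sink * storage        # lost to the atmosphere / deep groundwater
        storage += inflow - stream_flux - sink_flux
        if season % 4 == 3:                 # report once per simulated year
            print(f"year {season // 4 + 1}: stream flux {stream_flux:.1f}, storage {storage:.1f}")

    # The implied mean residence time in storage is 1 / (k_stream + k_sink) seasons.
    ```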

  16. Unemployment and inflation dynamics prior to the economic downturn of 2007-2008.

    PubMed

    Guastello, Stephen J; Myers, Adam

    2009-10-01

    This article revisits a long-standing theoretical issue as to whether a "natural rate" of unemployment exists in the sense of an exogenously driven fixed-point Walrasian equilibrium or attractor, or whether more complex dynamics such as hysteresis or chaos characterize an endogenous dynamical process instead. The same questions are posed regarding a possible natural rate of inflation along with an investigation of the actual relationship between inflation and unemployment, for which extant theories differ. Time series of unemployment and inflation for US data were analyzed using the exponential model series and nonlinear regression for capturing Lyapunov exponents and transfer effects from other variables. The best explanation for unemployment was that it is a chaotic variable that is driven in part by inflation. The best explanation for inflation is that it is also a chaotic variable driven in part by unemployment and the prices of treasury bills. Estimates of attractors' epicenters were calculated in lieu of classical natural rates.

  17. Simulation Analysis of Helicopter Ground Resonance Nonlinear Dynamics

    NASA Astrophysics Data System (ADS)

    Zhu, Yan; Lu, Yu-hui; Ling, Ai-min

    2017-07-01

    In order to accurately predict the dynamic instability of helicopter ground resonance, a modeling and simulation method of helicopter ground resonance considering nonlinear dynamic characteristics of components (rotor lead-lag damper, landing gear wheel and absorber) is presented. The numerical integration method is used to calculate the transient responses of the body and rotor following a simulated disturbance. To obtain quantitative instabilities, the Fast Fourier Transform (FFT) is conducted to estimate the modal frequencies, and the mobile rectangular window method is employed in the predictions of the modal damping in terms of the response time history. Simulation results show that the ground resonance simulation test can accurately capture the blade lead-lag regressing mode frequency, and the modal damping obtained from the attenuation curves is close to the test results. The simulation test results are consistent with the actual accident situation and confirm the correctness of the simulation method. This analysis method for ground resonance simulation tests gives results consistent with real helicopter engineering tests.

  18. Seasonal ENSO forecasting: Where does a simple model stand amongst other operational ENSO models?

    NASA Astrophysics Data System (ADS)

    Halide, Halmar

    2017-01-01

    We apply a simple linear multiple regression model called IndOzy for predicting ENSO up to 7 seasonal lead times. The model uses 5 (five) predictors based on past seasonal Niño 3.4 ENSO indices derived from chaos theory, and it was rolling-validated to give one-step-ahead forecasts. The model skill was evaluated against data from the season of May-June-July (MJJ) 2003 to November-December-January (NDJ) 2015/2016. Three skill measures, Pearson correlation, RMSE, and Euclidean distance, were used for forecast verification. The skill of this simple model was then compared to those of the combined statistical and dynamical models compiled on the IRI (International Research Institute) website. It was found that the simple model was capable of producing a useful ENSO prediction only up to 3 seasonal leads, while the IRI statistical and dynamical models remained useful up to 4 and 6 seasonal leads, respectively. Even with its short-range seasonal prediction skill, however, the simple model still has the potential to give ENSO-derived tailored products such as probabilistic measures of precipitation and air temperature. Both meteorological conditions affect the presence of wild-land fire hot-spots in Sumatera and Kalimantan. It is suggested that to improve its long-range skill, the simple IndOzy model needs to incorporate a nonlinear model such as an artificial neural network technique.
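
    In the spirit of the model described above, the sketch below fits a multiple linear regression on the five most recent seasonal index values and rolling-validates it for one-step-ahead forecasts; the index series is a synthetic red-noise stand-in for Niño 3.4, not the operational IndOzy data.

    ```python
    # Hedged sketch: lagged multiple linear regression with rolling one-step-ahead
    # validation on a synthetic seasonal index.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(9)
    n = 200
    nino34 = np.zeros(n)
    for t in range(1, n):                       # a toy red-noise "ENSO" index
        nino34[t] = 0.8 * nino34[t - 1] + rng.normal(0, 0.3)

    lags = 5
    X = np.column_stack([nino34[i:n - lags + i] for i in range(lags)])  # 5 past seasons
    y = nino34[lags:]                                                   # next season

    preds = []
    for t in range(60, len(y)):                 # rolling one-step-ahead forecasts
        model = LinearRegression().fit(X[:t], y[:t])
        preds.append(model.predict(X[t:t + 1])[0])

    print("correlation:", np.corrcoef(preds, y[60:])[0, 1])
    ```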

  19. A nonlinear autoregressive Volterra model of the Hodgkin-Huxley equations.

    PubMed

    Eikenberry, Steffen E; Marmarelis, Vasilis Z

    2013-02-01

    We propose a new variant of the Volterra-type model with a nonlinear auto-regressive (NAR) component that is a suitable framework for describing the process of AP generation by the neuron membrane potential, and we apply it to input-output data generated by the Hodgkin-Huxley (H-H) equations. Volterra models use a functional series expansion to describe the input-output relation for most nonlinear dynamic systems, and are applicable to a wide range of physiologic systems. It is difficult, however, to apply the Volterra methodology to the H-H model because it is characterized by distinct subthreshold and suprathreshold dynamics. When threshold is crossed, an autonomous action potential (AP) is generated, the output becomes temporarily decoupled from the input, and the standard Volterra model fails. Therefore, in our framework, whenever the membrane potential exceeds some threshold, it is taken as a second input to a dual-input Volterra model. This model correctly predicts membrane voltage deflection both within the subthreshold region and during APs. Moreover, the model naturally generates a post-AP afterpotential and refractory period. It is known that the H-H model converges to a limit cycle in response to a constant current injection. This behavior is correctly predicted by the proposed model, while the standard Volterra model is incapable of generating such limit cycle behavior. The inclusion of cross-kernels, which describe the nonlinear interactions between the exogenous and autoregressive inputs, is found to be absolutely necessary. The proposed model is general, non-parametric, and data-derived.

  20. Progressive and regressive developmental changes in neural substrates for face processing: Testing specific predictions of the Interactive Specialization account

    PubMed Central

    Joseph, Jane E.; Gathers, Ann D.; Bhatt, Ramesh S.

    2010-01-01

    Face processing undergoes a fairly protracted developmental time course but the neural underpinnings are not well understood. Prior fMRI studies have only examined progressive changes (i.e., increases in specialization in certain regions with age), which would be predicted by both the Interactive Specialization (IS) and maturational theories of neural development. To differentiate between these accounts, the present study also examined regressive changes (i.e., decreases in specialization in certain regions with age), which is predicted by the IS but not maturational account. The fMRI results show that both progressive and regressive changes occur, consistent with IS. Progressive changes mostly occurred in occipital-fusiform and inferior frontal cortex whereas regressive changes largely emerged in parietal and lateral temporal cortices. Moreover, inconsistent with the maturational account, all of the regions involved in face viewing in adults were active in children, with some regions already specialized for face processing by 5 years of age and other regions activated in children but not specifically for faces. Thus, neurodevelopment of face processing involves dynamic interactions among brain regions including age-related increases and decreases in specialization and the involvement of different regions at different ages. These results are more consistent with IS than maturational models of neural development. PMID:21399706

  1. [Optimization of diagnosis indicator selection and inspection plan by 3.0T MRI in breast cancer].

    PubMed

    Jiang, Zhongbiao; Wang, Yunhua; He, Zhong; Zhang, Lejun; Zheng, Kai

    2013-08-01

    To optimize the 3.0T MRI diagnostic indicators for breast cancer and to select the best MRI scan protocol. A total of 45 patients with breast cancer were enrolled, and another 35 patients with benign breast tumors served as the control group. All patients underwent 3.0T MRI, including T1-weighted imaging (T1WI), fat-suppressed T2-weighted imaging (T2WI), diffusion weighted imaging (DWI), 1H magnetic resonance spectroscopy (1H-MRS) and a dynamic contrast enhanced (DCE) sequence. With surgical pathology results as the gold standard for the diagnosis of breast disease, the pathological result (benign or malignant) served as the dependent variable, and the diagnostic indicators from MRI were taken as independent variables. All MRI indicators were entered into a logistic regression analysis, the logistic model was established, and the diagnostic indicators were optimized to further improve the MRI scan protocol for breast cancer. The logistic regression analysis retained several indicators in the equation, including the edge feature of the tumor, the time-signal intensity curve (TIC) type and the apparent diffusion coefficient (ADC) value at b=500 s/mm2. The regression equation was Logit(P) = -21.936 + 20.478X6 + 3.267X7 + 21.488X3. Valuable indicators in the diagnosis of breast cancer are the edge feature of the tumor, the TIC type and the ADC value at b=500 s/mm2. Combining conventional MRI, DWI and dynamic contrast-enhanced MRI is the better examination program, while MRS is a complementary option when diagnosis is difficult.
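
    As a quick illustration of how the reported equation would be applied, the sketch below plugs hypothetical indicator codings into Logit(P) and converts the result to a probability; the numeric coding of X6 (edge feature), X7 (TIC type) and X3 (ADC at b=500 s/mm2) is assumed here purely for illustration.

    ```python
    import math

    def malignancy_probability(x6_edge, x7_tic, x3_adc):
        """Apply the reported equation Logit(P) = -21.936 + 20.478*X6 + 3.267*X7 + 21.488*X3."""
        logit_p = -21.936 + 20.478 * x6_edge + 3.267 * x7_tic + 21.488 * x3_adc
        return 1.0 / (1.0 + math.exp(-logit_p))

    # hypothetical coding: irregular margin (1), type III TIC (3), ADC value 0.9
    p = malignancy_probability(x6_edge=1, x7_tic=3, x3_adc=0.9)
    ```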

  2. Modified Regression Correlation Coefficient for Poisson Regression Model

    NASA Astrophysics Data System (ADS)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study considers indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This measure of predictive power is defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] under the Poisson regression model, in which the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient outperforms the traditional regression correlation coefficient in terms of bias and root mean square error (RMSE).
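
    The sketch below, on simulated data, computes the quantity this work builds on: the correlation between Y and the fitted E(Y|X) from a Poisson GLM; the authors' modified coefficient itself is not reproduced here.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    X = rng.normal(size=(n, 2))
    mu = np.exp(0.3 + 0.5 * X[:, 0] - 0.4 * X[:, 1])    # true conditional mean
    y = rng.poisson(mu)

    fit = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
    r = np.corrcoef(y, fit.fittedvalues)[0, 1]          # corr(Y, estimated E(Y|X))
    print(round(r, 3))
    ```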

  3. Concentration data and dimensionality in groundwater models: evaluation using inverse modelling

    USGS Publications Warehouse

    Barlebo, H.C.; Hill, M.C.; Rosbjerg, D.; Jensen, K.H.

    1998-01-01

    A three-dimensional inverse groundwater flow and transport model that fits hydraulic-head and concentration data simultaneously using nonlinear regression is presented and applied to a layered sand and silt groundwater system beneath the Grindsted Landfill in Denmark. The aquifer is composed of rather homogeneous hydrogeologic layers. Two issues common to groundwater flow and transport modelling are investigated: 1) The accuracy of simulated concentrations in the case of calibration with head data alone; and 2) The advantages and disadvantages of using a two-dimensional cross-sectional model instead of a three-dimensional model to simulate contaminant transport when the source is at the land surface. Results show that using only hydraulic heads in the nonlinear regression produces a simulated plume that is profoundly different from what is obtained in a calibration using both hydraulic-head and concentration data. The present study provides a well-documented example of the differences that can occur. Representing the system as a two-dimensional cross-section obviously omits some of the system dynamics. It was, however, possible to obtain a simulated plume cross-section that matched the actual plume cross-section well. The two-dimensional model execution times were about a seventh of those for the three-dimensional model, but some difficulties were encountered in representing the spatially variable source concentrations, and the two-dimensional model produced less precise simulated concentrations than the three-dimensional model. In summary, the present study indicates that three-dimensional modelling using both hydraulic heads and concentrations in the calibration should be preferred for this type of transport study.

  4. Dynamic Density: An Air Traffic Management Metric

    NASA Technical Reports Server (NTRS)

    Laudeman, I. V.; Shelden, S. G.; Branstrom, R.; Brasil, C. L.

    1998-01-01

    The definition of a metric of air traffic controller workload based on air traffic characteristics is essential to the development of both air traffic management automation and air traffic procedures. Dynamic density is a proposed concept for a metric that includes both traffic density (a count of aircraft in a volume of airspace) and traffic complexity (a measure of the complexity of the air traffic in a volume of airspace). It was hypothesized that a metric that includes terms that capture air traffic complexity will be a better measure of air traffic controller workload than current measures based only on traffic density. A weighted linear dynamic density function was developed and validated operationally. The proposed dynamic density function includes a traffic density term and eight traffic complexity terms. A unit-weighted dynamic density function was able to account for an average of 22% of the variance in observed controller activity not accounted for by traffic density alone. A comparative analysis of unit weights, subjective weights, and regression weights for the terms in the dynamic density equation was conducted. The best predictor of controller activity was the dynamic density equation with regression-weighted complexity terms.
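
    A schematic sketch of the weighted-linear dynamic-density idea: one traffic density term plus eight complexity terms combined with weights. The term names and weights below are placeholders for illustration, not the operationally validated terms and regression weights from the study.

    ```python
    import numpy as np

    # assumed labels for the eight complexity terms (illustrative only)
    complexity_terms = ["heading_change", "speed_change", "altitude_change",
                        "min_separation", "predicted_conflicts", "boundary_proximity",
                        "climbing_descending", "close_pairs"]

    def dynamic_density(density, complexity, weights):
        """density: aircraft count in the airspace volume; complexity: dict of the eight terms."""
        x = np.array([density] + [complexity[t] for t in complexity_terms])
        return float(weights @ x)

    unit_weights = np.ones(9)                  # the unit-weighted variant of the function
    dd = dynamic_density(12, {t: 1.0 for t in complexity_terms}, unit_weights)
    ```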

  5. A statistical model for interpreting computerized dynamic posturography data

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.

  6. Accuracy of 4D Flow measurement of cerebrospinal fluid dynamics in the cervical spine: An in vitro verification against numerical simulation

    PubMed Central

    Pahlavian, Soroush Heidari; Bunck, Alexander C.; Thyagaraj, Suraj; Giese, Daniel; Loth, Francis; Hedderich, Dennis M.; Kröger, Jan Robert; Martin, Bryn A.

    2016-01-01

    Abnormal alterations in cerebrospinal fluid (CSF) flow are thought to play an important role in pathophysiology of various craniospinal disorders such as hydrocephalus and Chiari malformation. Three directional phase contrast MRI (4D Flow) has been proposed as one method for quantification of the CSF dynamics in healthy and disease states, but prior to further implementation of this technique, its accuracy in measuring CSF velocity magnitude and distribution must be evaluated. In this study, an MR-compatible experimental platform was developed based on an anatomically detailed 3D printed model of the cervical subarachnoid space and subject specific flow boundary conditions. Accuracy of 4D Flow measurements was assessed by comparison of CSF velocities obtained within the in vitro model with the numerically predicted velocities calculated from a spatially averaged computational fluid dynamics (CFD) model based on the same geometry and flow boundary conditions. Good agreement was observed between CFD and 4D Flow in terms of spatial distribution and peak magnitude of through-plane velocities with an average difference of 7.5% and 10.6% for peak systolic and diastolic velocities, respectively. Regression analysis showed lower accuracy of 4D Flow measurement at the timeframes corresponding to low CSF flow rate and poor correlation between CFD and 4D Flow in-plane velocities. PMID:27043214

  7. Development of a real-time crash risk prediction model incorporating the various crash mechanisms across different traffic states.

    PubMed

    Xu, Chengcheng; Wang, Wei; Liu, Pan; Zhang, Fangwei

    2015-01-01

    This study aimed to identify the traffic flow variables contributing to crash risks under different traffic states and to develop a real-time crash risk model incorporating the varying crash mechanisms across different traffic states. The crash, traffic, and geometric data were collected on the I-880N freeway in California in 2008 and 2009. This study considered 4 different traffic states in Wu's 4-phase traffic theory. They are free fluid traffic, bunched fluid traffic, bunched congested traffic, and standing congested traffic. Several different statistical methods were used to accomplish the research objective. The preliminary analysis showed that traffic states significantly affected crash likelihood, collision type, and injury severity. Nonlinear canonical correlation analysis (NLCCA) was conducted to identify the underlying phenomena that made certain traffic states more hazardous than others. The results suggested that different traffic states were associated with various collision types and injury severities. The matching of traffic flow characteristics and crash characteristics in NLCCA revealed how traffic states affected traffic safety. The logistic regression analyses showed that the factors contributing to crash risks were quite different across various traffic states. To incorporate the varying crash mechanisms across different traffic states, random parameters logistic regression was used to develop a real-time crash risk model. Bayesian inference based on Markov chain Monte Carlo simulations was used for model estimation. The parameters of traffic flow variables in the model were allowed to vary across different traffic states. Compared with the standard logistic regression model, the proposed model significantly improved the goodness-of-fit and predictive performance. These results can promote a better understanding of the relationship between traffic flow characteristics and crash risks, which is valuable knowledge in the pursuit of improving traffic safety on freeways through the use of dynamic safety management systems.

  8. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    PubMed

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.

  9. Aeromechanical stability of a hingeless rotor in hover and forward flight: Analysis and wind tunnel tests

    NASA Technical Reports Server (NTRS)

    Yeager, W. T., Jr.; Hamouda, M. N. H.; Mantay, W. R.

    1983-01-01

    A research effort of analysis and testing was conducted to investigate the ground resonance phenomenon of a soft in-plane hingeless rotor. Experimental data were obtained using a 9 ft. (2.74 m) diameter model rotor in hover and forward flight. Eight model rotor configurations were investigated. Configuration parameters included pitch-flap coupling, blade sweep and droop, and precone of the blade feathering axis. An analysis based on a comprehensive analytical model of rotorcraft aerodynamics and dynamics was used. The moving-block technique was used to experimentally determine the regressing lead-lag mode damping. Good agreement was obtained between the analysis and test. Both analysis and experiment indicated ground resonance instability in hover. An outline of the analysis, a description of the experimental model and procedures, and comparison of the analytical and experimental data are presented.

  10. Modeling of estuarine chlorophyll a from an airborne scanner

    USGS Publications Warehouse

    Khorram, Siamak; Catts, Glenn P.; Cloern, James E.; Knight, Allen W.

    1987-01-01

    Near-simultaneous collection of 34 surface water samples and airborne multispectral scanner data provided input for regression models developed to predict surface concentrations of estuarine chlorophyll a. Two wavelength ratios were employed in model development. The ratios were chosen to capitalize on the spectral characteristics of chlorophyll a, while minimizing atmospheric influences. Models were then applied to data previously acquired over the study area three years earlier. Results are in the form of color-coded displays of predicted chlorophyll a concentrations and comparisons of the agreement between measured surface samples and predictions based on coincident remotely sensed data. The influence of large variations in fresh-water inflow to the estuary is clearly apparent in the results. The synoptic view provided by remote sensing is another method of examining important estuarine dynamics that are difficult to observe from in situ sampling alone.
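
    A minimal sketch of this type of model, using synthetic numbers: ordinary least-squares regression of measured surface chlorophyll a on two scanner band ratios. The band choices, units and data values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 34
    ratio1 = rng.uniform(0.5, 2.0, n)          # e.g. a green/red band ratio (assumed)
    ratio2 = rng.uniform(0.5, 2.0, n)          # e.g. a NIR/red band ratio (assumed)
    chl_a = 5 + 8 * ratio1 - 3 * ratio2 + rng.normal(0, 1, n)   # synthetic surface samples

    X = np.column_stack([np.ones(n), ratio1, ratio2])
    coef, *_ = np.linalg.lstsq(X, chl_a, rcond=None)   # intercept and two slopes
    predicted = X @ coef       # in practice this mapping is applied pixel-by-pixel to imagery
    ```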

  11. Estimation of mechanical properties of nanomaterials using artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.

    2014-09-01

    Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we have introduced the application of multi-gene genetic programming (MGGP) and support vector regression to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of compressive strength of CNTs made by these models are compared to those generated using MD simulations. The results indicate that the MGGP method can be deployed as a powerful tool for predicting the compressive strength of carbon nanotubes.
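
    A minimal sketch of the support-vector-regression half of the approach, mapping temperature and diameter to compressive strength; the training data are synthetic stand-ins for MD simulation outputs, and the kernel settings are not taken from the paper.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    temperature = rng.uniform(300, 900, 80)            # K (synthetic)
    diameter = rng.uniform(0.5, 3.0, 80)               # nm (synthetic)
    strength = 120 - 0.05 * temperature - 15 * diameter + rng.normal(0, 2, 80)  # GPa (synthetic)

    X = np.column_stack([temperature, diameter])
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
    model.fit(X, strength)
    print(model.predict([[600.0, 1.5]]))               # strength estimate for one CNT
    ```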

  12. Dynamics modeling for sugar cane sucrose estimation using time series satellite imagery

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Justina, Diego Della; Kazama, Yoriko; Rocha, Jansle Vieira; Graziano, Paulo Sergio; Lamparelli, Rubens Augusto Camargo

    2016-10-01

    Sugarcane, one of the mainstay crops in Brazil, plays an essential role in ethanol production. Remote sensing is essential for monitoring sugarcane growth and predicting sucrose content, because accurate and timely crop growth information is critical, particularly for large-scale farming. We focused on estimating sugarcane sucrose content from time-series satellite imagery. First, we calculated spectral features and vegetation indices chosen to correspond to the biological mechanism of sucrose accumulation. Second, we improved the statistical regression model by incorporating additional factors. In the evaluation, the method achieved a precision of 90%, about 20% higher than that of the conventional method. The validation results showed that the prediction accuracy of our sugarcane growth modelling and improved mixed model is satisfactory.

  13. Modeling habitat and environmental factors affecting mosquito abundance in Chesapeake, Virginia

    NASA Astrophysics Data System (ADS)

    Bellows, Alan Scott

    The models I present in this dissertation were designed to enable mosquito control agencies in the mid-Atlantic region that oversee large jurisdictions to rapidly track the spatial and temporal distributions of mosquito species, especially those species known to be vectors of eastern equine encephalitis and West Nile virus. I was able to keep these models streamlined, user-friendly, and not cost-prohibitive using empirically based digital data to analyze mosquito-abundance patterns in real landscapes. This research is presented in three major chapters: (II) a series of semi-static habitat suitability indices (HSI) grounded on well-documented associations between mosquito abundance and environmental variables, (III) a dynamic model for predicting both spatial and temporal mosquito abundance based on a topographic soil moisture index and recent weather patterns, and (IV) a set of protocols laid out to aid mosquito control agencies in the use of these models. The HSIs (Chapter II) were based on relationships of mosquitoes to digital surrogates of soil moisture and vegetation characteristics. These models grouped mosquito species according to similarities in habitat requirements, life-cycle type, and vector competence. Quantification of relationships was determined using multiple linear regression models. As in Chapter II, relationships between mosquito abundance and environmental factors in Chapter III were quantified using regression models. However, because this model was, in part, a function of changes in weather patterns, it enables the prediction of both 'where' and 'when' mosquito outbreaks are likely to occur. This model is distinctive among similar studies in the literature because of my use of NOAA's NEXRAD Doppler radar (3-hr precipitation accumulation data) to quantify the spatial and temporal distributions in precipitation accumulation. Chapter IV is unique among the chapters in this dissertation because in lieu of presenting new research, it summarizes the preprocessing steps and analyses used in the HSIs and the dynamic, weather-based model generated in Chapters II and III. The purpose of this chapter is to provide the reader and potential users with the necessary protocols for modeling the spatial and temporal abundances and distributions of mosquitoes, with emphasis on Culiseta melanura, in a real-world landscape of the mid-Atlantic region. This chapter also provides enhancements that could easily be incorporated into an environmentally sensitive integrated pest management program.

  14. Numerical models of salt marsh evolution: ecological, geomorphic, and climatic factors

    USGS Publications Warehouse

    Fagherazzi, Sergio; Kirwan, Matthew L.; Mudd, Simon M.; Guntenspergen, Glenn R.; Temmerman, Stijn; D'Alpaos, Andrea; van de Koppel, Johan; Rybczyk, John; Reyes, Enrique; Craft, Chris; Clough, Jonathan

    2012-01-01

    Salt marshes are delicate landforms at the boundary between the sea and land. These ecosystems support a diverse biota that modifies the erosive characteristics of the substrate and mediates sediment transport processes. Here we present a broad overview of recent numerical models that quantify the formation and evolution of salt marshes under different physical and ecological drivers. In particular, we focus on the coupling between geomorphological and ecological processes and on how these feedbacks are included in predictive models of landform evolution. We describe in detail models that simulate fluxes of water, organic matter, and sediments in salt marshes. The interplay between biological and morphological processes often produces a distinct scarp between salt marshes and tidal flats. Numerical models can capture the dynamics of this boundary and the progradation or regression of the marsh in time. Tidal channels are also key features of the marsh landscape, flooding and draining the marsh platform and providing a source of sediments and nutrients to the marsh ecosystem. In recent years, several numerical models have been developed to describe the morphogenesis and long-term dynamics of salt marsh channels. Finally, salt marshes are highly sensitive to the effects of long-term climatic change. We therefore discuss in detail how numerical models have been used to determine salt marsh survival under different scenarios of sea level rise.

  15. Numerical models of salt marsh evolution: Ecological, geomorphic, and climatic factors

    USGS Publications Warehouse

    Fagherazzi, S.; Kirwan, M.L.; Mudd, S.M.; Guntenspergen, G.R.; Temmerman, S.; D'Alpaos, A.; Van De Koppel, J.; Rybczyk, J.M.; Reyes, E.; Craft, C.; Clough, J.

    2012-01-01

    Salt marshes are delicate landforms at the boundary between the sea and land. These ecosystems support a diverse biota that modifies the erosive characteristics of the substrate and mediates sediment transport processes. Here we present a broad overview of recent numerical models that quantify the formation and evolution of salt marshes under different physical and ecological drivers. In particular, we focus on the coupling between geomorphological and ecological processes and on how these feedbacks are included in predictive models of landform evolution. We describe in detail models that simulate fluxes of water, organic matter, and sediments in salt marshes. The interplay between biological and morphological processes often produces a distinct scarp between salt marshes and tidal flats. Numerical models can capture the dynamics of this boundary and the progradation or regression of the marsh in time. Tidal channels are also key features of the marsh landscape, flooding and draining the marsh platform and providing a source of sediments and nutrients to the marsh ecosystem. In recent years, several numerical models have been developed to describe the morphogenesis and long-term dynamics of salt marsh channels. Finally, salt marshes are highly sensitive to the effects of long-term climatic change. We therefore discuss in detail how numerical models have been used to determine salt marsh survival under different scenarios of sea level rise.

  16. Virtual water trade patterns in relation to environmental and socioeconomic factors: A case study for Tunisia.

    PubMed

    Chouchane, Hatem; Krol, Maarten S; Hoekstra, Arjen Y

    2018-02-01

    Growing water demands put increasing pressure on local water resources, especially in water-short countries. Virtual water trade can play a key role in filling the gap between local demand and supply of water-intensive commodities. This study aims to analyse the dynamics in virtual water trade of Tunisia in relation to environmental and socio-economic factors such as GDP, irrigated land, precipitation, population and water scarcity. The water footprint of crop production is estimated using AquaCrop for six crops over the period 1981-2010. Net virtual water import (NVWI) is quantified on a yearly basis. Regression models are used to investigate dynamics in NVWI in relation to the selected factors. The results show that NVWI during the study period for the selected crops is not influenced by blue water scarcity. In two alternative models, NVWI correlates either with population and precipitation (model I) or with GDP and irrigated area (model II). The models are better at explaining NVWI of staple crops (wheat, barley, potatoes) than NVWI of cash crops (dates, olives, tomatoes). Using model I, we are able to explain both trends and inter-annual variability for rain-fed crops. Model II performs better for irrigated crops and is able to explain trends significantly; no significant relation is found, however, with variables hypothesized to represent inter-annual variability.

  17. Evaluating observations in the context of predictions for the death valley regional groundwater system

    USGS Publications Warehouse

    Ely, D.M.; Hill, M.C.; Tiedeman, C.R.; O'Brien, G. M.

    2004-01-01

    When a model is calibrated by nonlinear regression, calculated diagnostic and inferential statistics provide a wealth of information about many aspects of the system. This work uses linear inferential statistics that are measures of prediction uncertainty to investigate the likely importance of continued monitoring of hydraulic head to the accuracy of model predictions. The measurements evaluated are hydraulic heads; the predictions of interest are subsurface transport from 15 locations. The advective component of transport is considered because it is the component most affected by the system dynamics represented by the regional-scale model being used. The problem is addressed using the capabilities of the U.S. Geological Survey computer program MODFLOW-2000, with its Advective Travel Observation (ADV) Package.

  18. Critical load: a novel approach to determining a sustainable intensity during resistance exercise.

    PubMed

    Arakelian, Vivian M; Mendes, Renata G; Trimer, Renata; Rossi Caruso, Flavia C; de Sousa, Nuno M; Borges, Vanessa C; do Valle Gomes Gatto, Camila; Baldissera, Vilmar; Arena, Ross; Borghi-Silva, Audrey

    2017-05-01

    A hyperbolic function, as well as a linear relationship, between power output and time to exhaustion (Tlim) has been consistently observed during dynamic non-resistive exercise. However, little is known about applying this concept to resistance exercise (RE), where the corresponding threshold could be defined as the critical load (CL). This study aimed to verify the existence of CL during dynamic RE and to determine the number of workbouts necessary for optimal modelling of it. Fifteen healthy men (23±2.5 yrs) completed a 1 repetition maximum (1RM) test on a leg press and either 3 (60%, 75% and 90% of 1RM) or 4 (adding 30% of 1RM) exhaustive workbouts, from which CL was obtained by hyperbolic and linear regression models relating Tlim to the load performed. Blood lactate and leg fatigue were also measured. CL could be obtained during RE: the 3-workbout protocol estimated it at 53% of 1RM, whereas the 4-workbout protocol estimated it at 38% of 1RM. However, based on the coefficients of determination, the 3-workbout protocols provided a better fit than the 4-workbout model (R2>0.95 vs. >0.77). Moreover, all intensities increased blood lactate and leg fatigue, but when corrected for Tlim both were significantly lower at CL. It was possible to determine CL during dynamic lower-limb RE, and 3 exhaustive workbouts can be used to estimate it, constituting a new approach to determining this threshold during dynamic RE while reducing the physically demanding nature of the protocol. These findings may have important applications for functional performance evaluation and the prescription of RE programs.
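
    A minimal sketch of one common way such a threshold is estimated from exhaustive bouts: regressing load on 1/Tlim so that the intercept estimates CL. The load/time pairs are invented, and the exact hyperbolic and linear forms fitted by the authors may differ.

    ```python
    import numpy as np

    load_pct_1rm = np.array([60.0, 75.0, 90.0])       # % of 1RM for each workbout
    tlim_s = np.array([210.0, 95.0, 40.0])            # time to exhaustion in seconds (invented)

    # linearized model: load = CL + k * (1 / Tlim); the intercept estimates CL
    X = np.column_stack([np.ones(3), 1.0 / tlim_s])
    (intercept, slope), *_ = np.linalg.lstsq(X, load_pct_1rm, rcond=None)
    critical_load = intercept                          # % of 1RM sustainable without exhaustion
    print(round(critical_load, 1))
    ```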

  19. Evaluation of the safety performance of highway alignments based on fault tree analysis and safety boundaries.

    PubMed

    Chen, Yikai; Wang, Kai; Xu, Chengcheng; Shi, Qin; He, Jie; Li, Peiqing; Shi, Ting

    2018-05-19

    To overcome the limitations of previous highway alignment safety evaluation methods, this article presents a highway alignment safety evaluation method based on fault tree analysis (FTA) and the characteristics of vehicle safety boundaries, within the framework of dynamic modeling of the driver-vehicle-road system. Approaches for categorizing the vehicle failure modes while driving on highways and the corresponding safety boundaries were comprehensively investigated based on vehicle system dynamics theory. Then, an overall crash probability model was formulated based on FTA considering the risks of 3 failure modes: losing steering capability, losing track-holding capability, and rear-end collision. The proposed method was implemented on a highway segment between Bengbu and Nanjing in China. A driver-vehicle-road multibody dynamics model was developed based on the 3D alignments of the Bengbu to Nanjing section of Ning-Luo expressway using Carsim, and dynamics indices such as sideslip angle and yaw rate were obtained. Then, the average crash probability of each road section was calculated with a fixed-length method. Finally, the average crash probability was validated against the crash frequency per kilometer to demonstrate the accuracy of the proposed method. The results of the regression and correlation analyses indicated good consistency between the safety evaluation results and the crash data, and showed that the proposed method outperformed the safety evaluation methods used in previous studies. The proposed method has the potential to be used in practical engineering applications to identify crash-prone locations and alignment deficiencies on highways in the planning and design phases, as well as those in service.
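
    A schematic sketch of how an overall crash probability can be composed from the three failure modes named above, assuming they enter the fault tree through an OR gate and are treated as independent; the per-mode probabilities are placeholders, not values from the study.

    ```python
    def overall_crash_probability(p_steering, p_track_holding, p_rear_end):
        """P(crash) = 1 - product of per-mode survival probabilities (OR gate, independence assumed)."""
        return 1.0 - (1.0 - p_steering) * (1.0 - p_track_holding) * (1.0 - p_rear_end)

    # placeholder per-mode probabilities for one fixed-length road section
    p = overall_crash_probability(p_steering=0.002, p_track_holding=0.004, p_rear_end=0.010)
    ```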

  20. Finite Element Vibration Modeling and Experimental Validation for an Aircraft Engine Casing

    NASA Astrophysics Data System (ADS)

    Rabbitt, Christopher

    This thesis presents a procedure for the development and validation of a theoretical vibration model, applies this procedure to a pair of aircraft engine casings, and compares select parameters from experimental testing of those casings to those from a theoretical model using the Modal Assurance Criterion (MAC) and linear regression coefficients. A novel method of determining the optimal MAC between axisymmetric results is developed and employed. It is concluded that the dynamic finite element models developed as part of this research are fully capable of modelling the modal parameters within the frequency range of interest. Confidence intervals calculated in this research for correlation coefficients provide important information regarding the reliability of predictions, and it is recommended that these intervals be calculated for all comparable coefficients. The procedure outlined for aligning mode shapes around an axis of symmetry proved useful, and the results are promising for the development of further optimization techniques.
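
    A minimal sketch of the Modal Assurance Criterion used for the model-test comparison, MAC = |phi_a^H phi_e|^2 / ((phi_a^H phi_a)(phi_e^H phi_e)); the mode-shape vectors are illustrative, and the axisymmetric alignment optimization developed in the thesis is not reproduced.

    ```python
    import numpy as np

    def mac(phi_a, phi_e):
        """MAC between an analytical and an experimental mode-shape vector."""
        num = np.abs(np.vdot(phi_a, phi_e)) ** 2
        den = np.vdot(phi_a, phi_a).real * np.vdot(phi_e, phi_e).real
        return float(num / den)

    phi_fem = np.array([0.0, 0.71, 1.0, 0.71, 0.0])     # illustrative FE mode shape
    phi_test = np.array([0.02, 0.69, 0.98, 0.74, 0.01])  # illustrative measured mode shape
    print(round(mac(phi_fem, phi_test), 3))              # values near 1 indicate good correlation
    ```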

  1. Multivariate random regression analysis for body weight and main morphological traits in genetically improved farmed tilapia (Oreochromis niloticus).

    PubMed

    He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing

    2017-11-02

    Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits that are recorded at multiple time-points in life, the use of univariate and multivariate animal models is limited because of the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time-points of genetically-improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by optimizing the univariate RRM for the analyzed traits separately via penalizing adaptively the likelihood statistical criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of three orders for body weight (BWE) and body length (BL) and of two orders for body depth (BD). By using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between growth time-points exceeded 0.5 for either single or pairwise time-points. Moreover, correlations between early and late growth time-points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of the comprehensive selection for BWE and the main morphological traits.
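
    A minimal sketch of the Legendre-polynomial basis underlying such random regression models: ages are rescaled to [-1, 1] and a basis of the chosen order is evaluated at each age. The order-3 basis matches that reported for body weight and body length; the ages and coefficients are invented.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    ages = np.array([60, 80, 100, 120, 140], dtype=float)    # days of age (example)
    a_min, a_max = ages.min(), ages.max()
    x = 2.0 * (ages - a_min) / (a_max - a_min) - 1.0          # standardize to [-1, 1]

    order = 3
    basis = legendre.legvander(x, order)     # columns: P0(x) ... P3(x)

    # in an RRM, one animal's additive genetic trajectory is basis @ u, where u is
    # that animal's vector of random regression coefficients (values invented here)
    u = np.array([0.5, 0.2, -0.1, 0.05])
    trajectory = basis @ u
    ```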

  2. Symbolic dynamics marker of heart rate variability combined with clinical variables enhance obstructive sleep apnea screening

    NASA Astrophysics Data System (ADS)

    Ravelo-García, A. G.; Saavedra-Santana, P.; Juliá-Serdá, G.; Navarro-Mesa, J. L.; Navarro-Esteva, J.; Álvarez-López, X.; Gapelyuk, A.; Penzel, T.; Wessel, N.

    2014-06-01

    Many sleep centres try to perform a reduced portable test in order to decrease the number of overnight polysomnographies that are expensive, time-consuming, and disturbing. With some limitations, heart rate variability (HRV) has been useful in this task. The aim of this investigation was to evaluate whether adding symbolic dynamics variables to a logistic regression model integrating clinical and physical variables can improve the detection of subjects for further polysomnographies. To our knowledge, this is the first contribution that innovates in that strategy. A group of 133 patients had been referred to the sleep centre for suspected sleep apnea. Clinical assessment of the patients consisted of a sleep related questionnaire and a physical examination. The clinical variables related to apnea and selected in the statistical model were age (p < 10^-3), neck circumference (p < 10^-3), score on a questionnaire scale intended to quantify daytime sleepiness (p < 10^-3), and intensity of snoring (p < 10^-3). The validation of this model demonstrated an increase in classification performance when a variable based on non-linear dynamics of HRV (p < 0.01) was used in addition to the other variables. For the diagnostic rule based only on clinical and physical variables, the corresponding area under the receiver operating characteristic (ROC) curve was 0.907 (95% confidence interval (CI) = 0.848, 0.967), (sensitivity 87.10% and specificity 80%). For the model including the average of a symbolic dynamics variable, the area under the ROC curve was increased to 0.941 (95% CI = 0.897, 0.985), (sensitivity 88.71% and specificity 82.86%). In conclusion, symbolic dynamics, coupled with significant clinical and physical variables, can help to prioritize polysomnographies in patients with a high probability of apnea. In addition, the processing of the HRV is a well-established, low-cost and robust technique.

  3. Regression modeling of ground-water flow

    USGS Publications Warehouse

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

  4. The impact of global signal regression on resting state correlations: Are anti-correlated networks introduced?

    PubMed Central

    Murphy, Kevin; Birn, Rasmus M.; Handwerker, Daniel A.; Jones, Tyler B.; Bandettini, Peter A.

    2009-01-01

    Low-frequency fluctuations in fMRI signal have been used to map several consistent resting state networks in the brain. Using the posterior cingulate cortex as a seed region, functional connectivity analyses have found not only positive correlations in the default mode network but negative correlations in another resting state network related to attentional processes. The interpretation is that the human brain is intrinsically organized into dynamic, anti-correlated functional networks. Global variations of the BOLD signal are often considered nuisance effects and are commonly removed using a general linear model (GLM) technique. This global signal regression method has been shown to introduce negative activation measures in standard fMRI analyses. The topic of this paper is whether such a correction technique could be the cause of anti-correlated resting state networks in functional connectivity analyses. Here we show that, after global signal regression, correlation values to a seed voxel must sum to a negative value. Simulations also show that small phase differences between regions can lead to spurious negative correlation values. A combination breath holding and visual task demonstrates that the relative phase of global and local signals can affect connectivity measures and that, experimentally, global signal regression leads to bell-shaped correlation value distributions, centred on zero. Finally, analyses of negatively correlated networks in resting state data show that global signal regression is most likely the cause of anti-correlations. These results call into question the interpretation of negatively correlated regions in the brain when using global signal regression as an initial processing step. PMID:18976716

  5. The impact of global signal regression on resting state correlations: are anti-correlated networks introduced?

    PubMed

    Murphy, Kevin; Birn, Rasmus M; Handwerker, Daniel A; Jones, Tyler B; Bandettini, Peter A

    2009-02-01

    Low-frequency fluctuations in fMRI signal have been used to map several consistent resting state networks in the brain. Using the posterior cingulate cortex as a seed region, functional connectivity analyses have found not only positive correlations in the default mode network but negative correlations in another resting state network related to attentional processes. The interpretation is that the human brain is intrinsically organized into dynamic, anti-correlated functional networks. Global variations of the BOLD signal are often considered nuisance effects and are commonly removed using a general linear model (GLM) technique. This global signal regression method has been shown to introduce negative activation measures in standard fMRI analyses. The topic of this paper is whether such a correction technique could be the cause of anti-correlated resting state networks in functional connectivity analyses. Here we show that, after global signal regression, correlation values to a seed voxel must sum to a negative value. Simulations also show that small phase differences between regions can lead to spurious negative correlation values. A combination breath holding and visual task demonstrates that the relative phase of global and local signals can affect connectivity measures and that, experimentally, global signal regression leads to bell-shaped correlation value distributions, centred on zero. Finally, analyses of negatively correlated networks in resting state data show that global signal regression is most likely the cause of anti-correlations. These results call into question the interpretation of negatively correlated regions in the brain when using global signal regression as an initial processing step.

  6. The Application of the Cumulative Logistic Regression Model to Automated Essay Scoring

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Sinharay, Sandip

    2010-01-01

    Most automated essay scoring programs use a linear regression model to predict an essay score from several essay features. This article applied a cumulative logit model instead of the linear regression model to automated essay scoring. Comparison of the performances of the linear regression model and the cumulative logit model was performed on a…
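
    A minimal sketch of the kind of cumulative logit (proportional odds) model discussed here, fit to simulated essay features and ordinal scores with statsmodels' OrderedModel (available in statsmodels 0.12+); the features, scores and coefficients are invented.

    ```python
    import numpy as np
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(6)
    n = 400
    features = rng.normal(size=(n, 3))                  # e.g. length, vocabulary, grammar indices
    latent = features @ np.array([0.8, 0.5, 0.3]) + rng.logistic(size=n)
    scores = np.digitize(latent, bins=[-1.0, 0.0, 1.0, 2.0])   # ordinal essay scores 0-4

    model = OrderedModel(scores, features, distr="logit")
    result = model.fit(method="bfgs", disp=False)
    print(result.params)        # feature slopes followed by the cutpoint parameters
    ```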

  7. Design and Development of a Model to Simulate 0-G Treadmill Running Using the European Space Agency's Subject Loading System

    NASA Technical Reports Server (NTRS)

    Caldwell, E. C.; Cowley, M. S.; Scott-Pandorf, M. M.

    2010-01-01

    Develop a model that simulates a human running in 0 G using the European Space Agency's (ESA) Subject Loading System (SLS). The model provides ground reaction forces (GRF) based on speed and pull-down forces (PDF). DESIGN The Running Model is based on a simple spring-mass model. The dynamic properties of the spring-mass model express theoretical vertical GRF (GRFv) and shear GRF in the posterior-anterior direction (GRFsh) during running gait. ADAMS View software was used to build the model, which has a pelvis, thigh segment, shank segment, and a spring foot (see Figure 1). The model's movement simulates the joint kinematics of a human running at Earth gravity with the aim of generating GRF data. DEVELOPMENT & VERIFICATION ESA provided parabolic flight data of subjects running while using the SLS, for further characterization of the model's GRF. Peak GRF data were fit to a linear regression line dependent on PDF and speed. Interpolation and extrapolation of the regression equation provided a theoretical data matrix, which is used to drive the model's motion equations. Verification of the model was conducted by running the model at 4 different speeds, with each speed accounting for 3 different PDF. The model's GRF data fell within a 1-standard-deviation boundary derived from the empirical ESA data. CONCLUSION The Running Model aids in conducting various simulations (potential scenarios include a fatigued runner or a powerful runner generating high loads at a fast cadence) to determine limitations for the T2 vibration isolation system (VIS) aboard the International Space Station. This model can predict how running with the ESA SLS affects the T2 VIS and may be used for other exercise analyses in the future.

  8. Interpreting experimental data on egg production--applications of dynamic differential equations.

    PubMed

    France, J; Lopez, S; Kebreab, E; Dijkstra, J

    2013-09-01

    This contribution focuses on applying mathematical models based on systems of ordinary first-order differential equations to synthesize and interpret data from egg production experiments. Models based on linear systems of differential equations are contrasted with those based on nonlinear systems. Regression equations arising from analytical solutions to linear compartmental schemes are considered as candidate functions for describing egg production curves, together with aspects of parameter estimation. Extant candidate functions are reviewed, a role for growth functions such as the Gompertz equation suggested, and a function based on a simple new model outlined. Structurally, the new model comprises a single pool with an inflow and an outflow. Compartmental simulation models based on nonlinear systems of differential equations, and thus requiring numerical solution, are next discussed, and aspects of parameter estimation considered. This type of model is illustrated in relation to development and evaluation of a dynamic model of calcium and phosphorus flows in layers. The model consists of 8 state variables representing calcium and phosphorus pools in the crop, stomachs, plasma, and bone. The flow equations are described by Michaelis-Menten or mass action forms. Experiments that measure Ca and P uptake in layers fed different calcium concentrations during shell-forming days are used to evaluate the model. In addition to providing a useful management tool, such a simulation model also provides a means to evaluate feeding strategies aimed at reducing excretion of potential pollutants in poultry manure to the environment.
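
    A minimal sketch, with invented rate constants, of the model class described here: a single pool with a constant inflow and a first-order outflow, solved as an ordinary differential equation, with the outflow read as the egg-production rate. This illustrates the structure only and is not the authors' fitted model.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k_in, k_out = 1.0, 0.05          # inflow (units/week) and fractional outflow (1/week), invented

    def single_pool(t, q):
        """dQ/dt = inflow - first-order outflow."""
        return [k_in - k_out * q[0]]

    t_weeks = np.linspace(0, 60, 121)
    sol = solve_ivp(single_pool, (0, 60), y0=[0.0], t_eval=t_weeks)
    pool = sol.y[0]
    egg_rate = k_out * pool           # rises toward the asymptote k_in, like an egg-production curve
    ```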

  9. Determining the spill flow discharge of combined sewer overflows using rating curves based on computational fluid dynamics instead of the standard weir equation.

    PubMed

    Fach, S; Sitzenfrei, R; Rauch, W

    2009-01-01

    It is state of the art to evaluate and optimise sewer systems with urban drainage models. Since spill flow data is essential in the calibration process of conceptual models, it is important to enhance the quality of such data. A widespread approach is to calculate the spill flow volume by using standard weir equations together with measured water levels. However, these equations are only applicable to combined sewer overflow (CSO) structures whose weir constructions correspond with the standard weir layout. The objective of this work is to outline an alternative approach to obtain spill flow discharge data based on measurements with a sonic depth finder. The idea is to determine the relation between water level and rate of spill flow by running a detailed 3D computational fluid dynamics (CFD) model. Two real-world CSO structures were chosen due to their complex geometry, especially with respect to the weir construction. In a first step the simulation results were analysed to identify flow conditions for discrete steady states. It will be shown that the flow conditions in the CSO structure change once the spill flow pipe acts as a controlled outflow, and therefore the spill flow discharge cannot be described with a standard weir equation. In a second step the CFD results will be used to derive rating curves which can be easily applied in everyday practice. The rating curves are therefore developed on the basis of the standard weir equation and the equation for orifice-type outlets. Because the intersection of both equations is not known, the coefficients of discharge are regressed from CFD simulation results. Furthermore, the regression based on the CFD simulation results is compared with that based on the standard weir equation, using historic water levels and hydrographs generated with a hydrodynamic model. The uncertainties resulting from the widespread use of the standard weir equation are demonstrated.
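
    A minimal sketch of the rating-curve idea: take (water level, spill discharge) pairs from CFD and regress a discharge coefficient for a standard rectangular weir equation and for an orifice-type equation. The geometry and the CFD points below are invented placeholders.

    ```python
    import numpy as np

    g = 9.81
    b = 4.0                                   # weir crest width (m), assumed
    A = 1.2                                   # outlet cross-section (m^2), assumed

    def weir_q(h, cd):                        # Q = (2/3) * cd * b * sqrt(2g) * h^(3/2)
        return (2.0 / 3.0) * cd * b * np.sqrt(2 * g) * h ** 1.5

    def orifice_q(h, cd):                     # Q = cd * A * sqrt(2 g h)
        return cd * A * np.sqrt(2 * g * h)

    h_cfd = np.array([0.05, 0.10, 0.20, 0.35, 0.50])      # water level above crest (m), invented
    q_cfd = np.array([0.15, 0.42, 1.15, 2.60, 4.10])      # CFD spill discharge (m^3/s), invented

    # one-parameter least squares for each form: cd = sum(q * f(h)) / sum(f(h)^2)
    f_weir = weir_q(h_cfd, 1.0)
    f_orif = orifice_q(h_cfd, 1.0)
    cd_weir = float(f_weir @ q_cfd / (f_weir @ f_weir))
    cd_orifice = float(f_orif @ q_cfd / (f_orif @ f_orif))
    ```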

  10. The predictive power of local properties of financial networks

    NASA Astrophysics Data System (ADS)

    Caraiani, Petre

    2017-01-01

    The literature on analyzing the dynamics of financial networks has focused so far on the predictive power of global measures of networks like entropy or index cohesive force. In this paper, I show that the local network properties have similar predictive power. I focus on key network measures like average path length, average degree or clustering coefficient, and also consider the diameter and the s-metric. Using Granger causality tests, I show that some of these measures have statistically significant prediction power with respect to the dynamics of the aggregate stock market. Average path length is the most robust relative to the frequency of data used or the specification (index or growth rate). Most measures are found to have predictive power only at monthly frequency. Further evidence supporting this view is provided through a simple regression model.
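
    A minimal sketch, on synthetic data, of the two ingredients combined here: a local network measure computed per period and a Granger-causality test of whether that measure helps predict market returns. The random graphs and return series are placeholders for the estimated financial networks and the aggregate stock index.

    ```python
    import numpy as np
    import networkx as nx
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(4)
    T = 120                                            # number of monthly periods
    avg_path_len = np.empty(T)
    for t in range(T):
        G = nx.gnp_random_graph(50, 0.2, seed=int(rng.integers(1_000_000)))
        if not nx.is_connected(G):                     # keep the measure well defined
            G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
        avg_path_len[t] = nx.average_shortest_path_length(G)

    returns = rng.normal(0, 1, T)                      # stand-in for the index growth rate
    data = np.column_stack([returns, avg_path_len])    # tests whether column 2 predicts column 1
    results = grangercausalitytests(data, maxlag=2)
    ```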

  11. Prospective in-patient cohort study of moves between levels of therapeutic security: the DUNDRUM-1 triage security, DUNDRUM-3 programme completion and DUNDRUM-4 recovery scales and the HCR-20.

    PubMed

    Davoren, Mary; O'Dwyer, Sarah; Abidin, Zareena; Naughton, Leena; Gibbons, Olivia; Doyle, Elaine; McDonnell, Kim; Monks, Stephen; Kennedy, Harry G

    2012-07-13

    We examined whether new structured professional judgment instruments for assessing need for therapeutic security, treatment completion and recovery in forensic settings were related to moves from higher to lower levels of therapeutic security and added anything to assessment of risk. This was a prospective naturalistic twelve month observational study of a cohort of patients in a forensic hospital placed according to their need for therapeutic security along a pathway of moves from high to progressively less secure units in preparation for discharge. Patients were assessed using the DUNDRUM-1 triage security scale, the DUNDRUM-3 programme completion scale and the DUNDRUM-4 recovery scale and assessments of risk of violence, self harm and suicide, symptom severity and global function. Patients were subsequently observed for positive moves to less secure units and negative moves to more secure units. There were 86 male patients at baseline with mean follow-up 0.9 years, 11 positive and 9 negative moves. For positive moves, logistic regression indicated that along with location at baseline, the DUNDRUM-1, HCR-20 dynamic and PANSS general symptom scores were associated with subsequent positive moves. The receiver operating characteristic was significant for the DUNDRUM-1 while ANOVA co-varying for both location at baseline and HCR-20 dynamic score was significant for DUNDRUM-1. For negative moves, logistic regression showed DUNDRUM-1 and HCR-20 dynamic scores were associated with subsequent negative moves, along with DUNDRUM-3 and PANSS negative symptoms in some models. The receiver operating characteristic was significant for the DUNDRUM-4 recovery and HCR-20 dynamic scores with DUNDRUM-1, DUNDRUM-3, PANSS general and GAF marginal. ANOVA co-varying for both location at baseline and HCR-20 dynamic scores showed only DUNDRUM-1 and PANSS negative symptoms associated with subsequent negative moves. Clinicians appear to decide moves based on combinations of current and imminent (dynamic) risk measured by HCR-20 dynamic score and historical seriousness of risk as measured by need for therapeutic security (DUNDRUM-1) in keeping with Scott's formulation of risk and seriousness. The DUNDRUM-3 programme completion and DUNDRUM-4 recovery scales have utility as dynamic measures that can off-set perceived 'dangerousness'.

  12. Prospective in-patient cohort study of moves between levels of therapeutic security: the DUNDRUM-1 triage security, DUNDRUM-3 programme completion and DUNDRUM-4 recovery scales and the HCR-20

    PubMed Central

    2012-01-01

    Background We examined whether new structured professional judgment instruments for assessing need for therapeutic security, treatment completion and recovery in forensic settings were related to moves from higher to lower levels of therapeutic security and added anything to assessment of risk. Methods This was a prospective naturalistic twelve month observational study of a cohort of patients in a forensic hospital placed according to their need for therapeutic security along a pathway of moves from high to progressively less secure units in preparation for discharge. Patients were assessed using the DUNDRUM-1 triage security scale, the DUNDRUM-3 programme completion scale and the DUNDRUM-4 recovery scale and assessments of risk of violence, self harm and suicide, symptom severity and global function. Patients were subsequently observed for positive moves to less secure units and negative moves to more secure units. Results There were 86 male patients at baseline with mean follow-up 0.9 years, 11 positive and 9 negative moves. For positive moves, logistic regression indicated that along with location at baseline, the DUNDRUM-1, HCR-20 dynamic and PANSS general symptom scores were associated with subsequent positive moves. The receiver operating characteristic was significant for the DUNDRUM-1 while ANOVA co-varying for both location at baseline and HCR-20 dynamic score was significant for DUNDRUM-1. For negative moves, logistic regression showed DUNDRUM-1 and HCR-20 dynamic scores were associated with subsequent negative moves, along with DUNDRUM-3 and PANSS negative symptoms in some models. The receiver operating characteristic was significant for the DUNDRUM-4 recovery and HCR-20 dynamic scores with DUNDRUM-1, DUNDRUM-3, PANSS general and GAF marginal. ANOVA co-varying for both location at baseline and HCR-20 dynamic scores showed only DUNDRUM-1 and PANSS negative symptoms associated with subsequent negative moves. Conclusions Clinicians appear to decide moves based on combinations of current and imminent (dynamic) risk measured by HCR-20 dynamic score and historical seriousness of risk as measured by need for therapeutic security (DUNDRUM-1) in keeping with Scott's formulation of risk and seriousness. The DUNDRUM-3 programme completion and DUNDRUM-4 recovery scales have utility as dynamic measures that can off-set perceived 'dangerousness'. PMID:22794187

  13. Moderation analysis using a two-level regression model.

    PubMed

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
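
    For context, a minimal sketch of the conventional MMR baseline discussed above: a product term between predictor and moderator, fit by least squares on simulated data. The paper's two-level NML estimator is not reproduced here.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 300
    x = rng.normal(size=n)                        # predictor (e.g. years of education)
    z = rng.normal(size=n)                        # moderator variable
    y = 1.0 + 0.5 * x + 0.2 * z + 0.3 * x * z + rng.normal(0, 1, n)

    X = sm.add_constant(np.column_stack([x, z, x * z]))
    mmr = sm.OLS(y, X).fit()
    print(mmr.params)                             # the last coefficient is the moderation effect
    ```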

  14. Mapping Shallow Landslide Slope Instability at Large Scales Using Remote Sensing and GIS

    NASA Astrophysics Data System (ADS)

    Avalon Cullen, C.; Kashuk, S.; Temimi, M.; Suhili, R.; Khanbilvardi, R.

    2015-12-01

    Rainfall-induced landslides are one of the most frequent hazards on sloping terrain. They lead to great economic losses and fatalities worldwide. Most factors inducing shallow landslides are local and can only be mapped with high levels of uncertainty at larger scales. This work presents an attempt to determine slope instability at large scales. Buffer and threshold techniques are used to downscale areas and minimize uncertainties. Four static parameters (slope angle, soil type, land cover and elevation) for 261 shallow rainfall-induced landslides in the continental United States are examined. The ASTER GDEM is used as the basis for the topographical characterization of slope and for the buffer analysis. Slope angle thresholds at the 50th, 75th, 95th, 98th, and 99th percentiles are tested locally. Each threshold is then analyzed in relation to the other parameters in a logistic regression framework for the continental U.S. It is determined that thresholds below the 95th percentile underestimate slope angles. The best regression fit is achieved using the 99th-percentile slope angle. This model predicts the highest number of cases correctly, at 87.0% accuracy. A one-unit rise in the 99th-percentile slope angle increases landslide likelihood by 11.8%. The logistic regression model is carried over to ArcGIS, where all variables are processed based on their corresponding coefficients. A regional slope instability map for the continental United States is created and analyzed against the available landslide records and their spatial distributions. It is expected that future inclusion of dynamic parameters like precipitation and other proxies like soil moisture into the model will further improve accuracy.
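    As a rough, hypothetical illustration of the interpretation above (the reported 11.8% rise in landslide likelihood per unit of the 99th-percentile slope angle corresponds to exp(coefficient) - 1 in a logistic model), here is a minimal Python sketch on simulated data; variable names, values and the fitting library are assumptions, not the study's actual workflow.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical training data: 1 = landslide location, 0 = stable location.
    rng = np.random.default_rng(0)
    n = 400
    slope99 = rng.uniform(5, 45, size=n)     # 99th-percentile slope angle (degrees)
    soil = rng.integers(0, 3, size=n)        # coded soil type
    p = 1 / (1 + np.exp(-(-4.0 + 0.11 * slope99 + 0.3 * soil)))
    y = rng.binomial(1, p)

    X = sm.add_constant(np.column_stack([slope99, soil]).astype(float))
    fit = sm.Logit(y, X).fit(disp=0)

    # exp(beta) - 1 is the proportional change in the odds per one-unit increase,
    # analogous to the reported 11.8% increase per unit of slope angle.
    print(np.exp(fit.params[1]) - 1)
    ```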

  15. Sickness absence and psychosocial job quality: an analysis from a longitudinal survey of working Australians, 2005-2012.

    PubMed

    Milner, Allison; Butterworth, Peter; Bentley, Rebecca; Kavanagh, Anne M; LaMontagne, Anthony D

    2015-05-15

    Sickness absence is associated with adverse health, organizational, and societal outcomes. Using data from a longitudinal cohort study of working Australians (the Household, Income and Labour Dynamics in Australia (HILDA) Survey), we examined the relationship between changes in individuals' overall psychosocial job quality and variation in sickness absence. The outcome variables were paid sickness absence (yes/no) and number of days of paid sickness absence in the past year (2005-2012). The main exposure variable was psychosocial job quality, measured using a psychosocial job quality index (levels of job control, demands and complexity, insecurity, and perceptions of unfair pay). Analysis was conducted using longitudinal fixed-effects logistic regression models and negative binomial regression models. There was a dose-response relationship between the number of psychosocial job stressors reported by an individual and the odds of paid sickness absence (1 adversity: odds ratio (OR) = 1.26, 95% confidence interval (CI): 1.09, 1.45 (P = 0.002); 2 adversities: OR = 1.28, 95% CI: 1.09, 1.51 (P = 0.002); ≥3 adversities: OR = 1.58, 95% CI: 1.29, 1.94 (P < 0.001)). The negative binomial regression models also indicated that respondents reported a greater number of days of sickness absence in response to worsening psychosocial job quality. These results suggest that workplace interventions aiming to improve the quality of work could help reduce sickness absence. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
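    The analysis above pairs fixed-effects logistic models with negative binomial models for the count of absence days. A minimal Python sketch of the count-model half on simulated, hypothetical data is given below; it ignores the panel (fixed-effects) structure of HILDA and only illustrates how a rate ratio per additional adversity is read off a negative binomial fit.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical data: days of sickness absence vs. number of psychosocial job adversities.
    rng = np.random.default_rng(2)
    n = 1000
    adversities = rng.integers(0, 4, size=n)
    mu = np.exp(0.8 + 0.15 * adversities)
    days = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts

    X = sm.add_constant(adversities.astype(float))
    nb = sm.GLM(days, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
    print(np.exp(nb.params[1]))   # rate ratio per additional adversity
    ```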

  16. Estimation of the stand ages of tropical secondary forests after shifting cultivation based on the combination of WorldView-2 and time-series Landsat images

    NASA Astrophysics Data System (ADS)

    Fujiki, Shogoro; Okada, Kei-ichi; Nishio, Shogo; Kitayama, Kanehiro

    2016-09-01

    We developed a new method to estimate stand ages of secondary vegetation in the Bornean montane zone, where local people conduct traditional shifting cultivation and protected areas are surrounded by patches of recovering secondary vegetation of various ages. Identifying stand ages at the landscape level is critical to improve conservation policies. We combined a high-resolution satellite image (WorldView-2) with time-series Landsat images. We extracted stand ages (the time elapsed since the most recent slash and burn) from a change-detection analysis with Landsat time-series images and superimposed the derived stand ages on the segments classified by object-based image analysis using WorldView-2. We regarded stand ages as a response variable, and object-based metrics as independent variables, to develop regression models that explain stand ages. Subsequently, we classified the vegetation of the target area into six age units and one rubber plantation unit (1-3 yr, 3-5 yr, 5-7 yr, 7-30 yr, 30-50 yr, >50 yr and 'rubber plantation') using regression models and linear discriminant analyses. Validation demonstrated an accuracy of 84.3%. Our approach is particularly effective in classifying highly dynamic pioneer vegetation younger than 7 years into 2-yr intervals, suggesting that rapid changes in vegetation canopies can be detected with high accuracy. The combination of a spectral time-series analysis and object-based metrics based on high-resolution imagery enabled the classification of dynamic vegetation under intensive shifting cultivation and yielded an informative land cover map based on stand ages.

  17. Relationship between Auditory and Cognitive Abilities in Older Adults

    PubMed Central

    Sheft, Stanley

    2015-01-01

    Objective The objective was to evaluate the association of peripheral and central hearing abilities with cognitive function in older adults. Methods Recruited from epidemiological studies of aging and cognition at the Rush Alzheimer’s Disease Center, participants were a community-dwelling cohort of older adults (range 63–98 years) without diagnosis of dementia. The cohort contained roughly equal numbers of Black (n=61) and White (n=63) subjects with groups similar in terms of age, gender, and years of education. Auditory abilities were measured with pure-tone audiometry, speech-in-noise perception, and discrimination thresholds for both static and dynamic spectral patterns. Cognitive performance was evaluated with a 12-test battery assessing episodic, semantic, and working memory, perceptual speed, and visuospatial abilities. Results Among the auditory measures, only the static and dynamic spectral-pattern discrimination thresholds were associated with cognitive performance in a regression model that included the demographic covariates race, age, gender, and years of education. Subsequent analysis indicated substantial shared variance among the covariates race and both measures of spectral-pattern discrimination in accounting for cognitive performance. Among cognitive measures, working memory and visuospatial abilities showed the strongest interrelationship to spectral-pattern discrimination performance. Conclusions For a cohort of older adults without diagnosis of dementia, neither hearing thresholds nor speech-in-noise ability showed significant association with a summary measure of global cognition. In contrast, the two auditory metrics of spectral-pattern discrimination ability significantly contributed to a regression model prediction of cognitive performance, demonstrating association of central auditory ability to cognitive status using auditory metrics that avoided the confounding effect of speech materials. PMID:26237423

  18. The microcomputer scientific software series 2: general linear model--regression.

    Treesearch

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  19. WASP (Write a Scientific Paper) using Excel - 13: Correlation and Regression.

    PubMed

    Grech, Victor

    2018-07-01

    Correlation and regression measure the closeness of association between two continuous variables. This paper explains how to perform these tests in Microsoft Excel and their interpretation, as well as how to apply these tests dynamically using Excel's functions. Copyright © 2018 Elsevier B.V. All rights reserved.
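    The paper demonstrates these tests in Excel; as a language-neutral cross-check, the same quantities can be computed in a few lines of Python (the data here are simulated and purely illustrative).

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical paired measurements of two continuous variables.
    rng = np.random.default_rng(3)
    x = rng.normal(50, 10, size=60)
    y = 0.8 * x + rng.normal(0, 8, size=60)

    r, p_corr = stats.pearsonr(x, y)   # closeness of association
    res = stats.linregress(x, y)       # simple linear regression
    print(f"r = {r:.2f} (p = {p_corr:.3g})")
    print(f"y = {res.intercept:.2f} + {res.slope:.2f} x, R^2 = {res.rvalue**2:.2f}")
    ```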

  20. Climate variations and salmonellosis transmission in Adelaide, South Australia: a comparison between regression models

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Bi, Peng; Hiller, Janet

    2008-01-01

    This is the first study to identify appropriate regression models for the association between climate variation and salmonellosis transmission. A comparison between different regression models was conducted using surveillance data in Adelaide, South Australia. By using notified salmonellosis cases and climatic variables from the Adelaide metropolitan area over the period 1990-2003, four regression methods were examined: standard Poisson regression, autoregressive adjusted Poisson regression, multiple linear regression, and a seasonal autoregressive integrated moving average (SARIMA) model. Notified salmonellosis cases in 2004 were used to test the forecasting ability of the four models. Parameter estimation, goodness-of-fit and forecasting ability of the four regression models were compared. Temperatures occurring 2 weeks prior to cases were positively associated with cases of salmonellosis. Rainfall was also inversely related to the number of cases. The comparison of the goodness-of-fit and forecasting ability suggest that the SARIMA model is better than the other three regression models. Temperature and rainfall may be used as climatic predictors of salmonellosis cases in regions with climatic characteristics similar to those of Adelaide. The SARIMA model could, thus, be adopted to quantify the relationship between climate variations and salmonellosis transmission.
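    A minimal sketch of the SARIMA-with-covariates idea is given below in Python with statsmodels; the weekly series, the 2-week temperature lag and the model orders are all hypothetical stand-ins for the Adelaide surveillance data, not the study's fitted specification.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Hypothetical weekly case counts with yearly seasonality and a lagged temperature covariate.
    rng = np.random.default_rng(4)
    weeks = 52 * 10
    t = np.arange(weeks)
    temp = 20 + 8 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 1, weeks)
    temp_lag2 = np.roll(temp, 2)                        # temperature two weeks earlier
    cases = 30 + 0.8 * temp_lag2 + rng.normal(0, 3, weeks)

    y = pd.Series(cases)
    X = pd.DataFrame({"temp_lag2": temp_lag2})

    # Seasonal ARIMA with an exogenous climate predictor (orders are illustrative only).
    model = SARIMAX(y, exog=X, order=(1, 0, 1), seasonal_order=(1, 0, 0, 52))
    fit = model.fit(disp=False)
    print(fit.summary())
    # 4-step forecast, reusing the last covariate rows as stand-in future values.
    forecast = fit.forecast(steps=4, exog=X.iloc[-4:])
    ```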

  1. Dynamic prediction of patient outcomes during ongoing cardiopulmonary resuscitation.

    PubMed

    Kim, Joonghee; Kim, Kyuseok; Callaway, Clifton W; Doh, Kibbeum; Choi, Jungho; Park, Jongdae; Jo, You Hwan; Lee, Jae Hyuk

    2017-02-01

    The probability of the return of spontaneous circulation (ROSC) and subsequent favourable outcomes changes dynamically during advanced cardiac life support (ACLS). We sought to model these changes using time-to-event analysis in out-of-hospital cardiac arrest (OHCA) patients. Adult (≥18 years old), non-traumatic OHCA patients without prehospital ROSC were included. Utstein variables and initial arterial blood gas measurements were used as predictors. The incidence rate of ROSC during the first 30min of ACLS in the emergency department (ED) was modelled using spline-based parametric survival analysis. Conditional probabilities of subsequent outcomes after ROSC (1-week and 1-month survival and 6-month neurologic recovery) were modelled using multivariable logistic regression. The ROSC and conditional probability models were then combined to estimate the likelihood of achieving ROSC and subsequent outcomes by providing k additional minutes of effort. A total of 727 patients were analyzed. The incidence rate of ROSC increased rapidly until the 10th minute of ED ACLS, and it subsequently decreased. The conditional probabilities of subsequent outcomes after ROSC were also dependent on the duration of resuscitation with odds ratios for 1-week and 1-month survival and neurologic recovery of 0.93 (95% CI: 0.90-0.96, p<0.001), 0.93 (0.88-0.97, p=0.001) and 0.93 (0.87-0.99, p=0.031) per 1-min increase, respectively. Calibration testing of the combined models showed good correlation between mean predicted probability and actual prevalence. The probability of ROSC and favourable subsequent outcomes changed according to a multiphasic pattern over the first 30min of ACLS, and modelling of the dynamic changes was feasible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Dynamic whole body PET parametric imaging: II. Task-oriented statistical estimation

    PubMed Central

    Karakatsanis, Nicolas A.; Lodge, Martin A.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman

    2013-01-01

    In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15–20cm) of a single bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical FDG patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection. PMID:24080994
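    The Patlak step referred to above reduces, for each voxel, to an ordinary least squares fit of the transformed tissue curve; a minimal numpy sketch with a synthetic time-activity curve is shown below (frame times, input function and kinetic values are invented, and the hybrid, correlation-driven estimator of the paper is not reproduced).

    ```python
    import numpy as np

    # Hypothetical frame mid-times (min), plasma input Cp(t) and tissue curve Ct(t).
    t = np.array([10, 15, 20, 30, 40, 50, 60], dtype=float)
    cp = 10.0 * np.exp(-0.05 * t) + 1.0
    ki_true, v_true = 0.03, 0.6
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2)))
    ct = ki_true * int_cp + v_true * cp          # irreversible-uptake tissue curve

    # Patlak transform: y = Ct/Cp, x = (integral of Cp)/Cp; OLS slope = Ki, intercept = V.
    x = int_cp / cp
    y = ct / cp
    A = np.column_stack([x, np.ones_like(x)])
    (ki_hat, v_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
    print(ki_hat, v_hat)
    ```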

  3. Dynamic whole-body PET parametric imaging: II. Task-oriented statistical estimation.

    PubMed

    Karakatsanis, Nicolas A; Lodge, Martin A; Zhou, Y; Wahl, Richard L; Rahmim, Arman

    2013-10-21

    In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15-20 cm) of a single-bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical (18)F-deoxyglucose patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30 min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole-body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection.

  4. Accounting for individual differences and timing of events: estimating the effect of treatment on criminal convictions in heroin users

    PubMed Central

    2014-01-01

    Background The reduction of crime is an important outcome of opioid maintenance treatment (OMT). Criminal intensity and treatment regimes vary among OMT patients, but this is rarely adjusted for in statistical analyses, which tend to focus on cohort incidence rates and rate ratios. The purpose of this work was to estimate the relationship between treatment and criminal convictions among OMT patients, adjusting for individual covariate information and timing of events, fitting time-to-event regression models of increasing complexity. Methods National criminal records were cross-linked with treatment data on 3221 patients starting OMT in Norway 1997–2003. In addition to calculating cohort incidence rates, criminal convictions were modelled as a recurrent-event dependent variable, and treatment as a time-dependent covariate, in Cox proportional hazards, Aalen’s additive hazards, and semi-parametric additive hazards regression models. Both fixed and dynamic covariates were included. Results During OMT, the number of days with criminal convictions for the cohort as a whole was 61% lower than when not in treatment. OMT was associated with a reduced number of days with criminal convictions in all time-to-event regression models, but the hazard ratio (95% CI) was strongly attenuated when adjusting for covariates; from 0.40 (0.35, 0.45) in a univariate model to 0.79 (0.72, 0.87) in a fully adjusted model. The hazard was lower for females and decreased with older age, while it increased with high numbers of criminal convictions prior to application to OMT (all p < 0.001). The strongest predictors were the level of criminal activity prior to entering OMT and having a recent criminal conviction (both p < 0.001). The effect of several predictors was significantly time-varying, with their effects diminishing over time. Conclusions Analyzing complex observational data with respect to fixed factors only overlooks important temporal information, and naïve cohort-level incidence rates might result in biased estimates of the effect of interventions. Applying time-to-event regression models, properly adjusting for individual covariate information and timing of various events, allows for more precise and reliable effect estimates, as well as painting a more nuanced picture that can aid health care professionals and policy makers. PMID:24886472
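    A minimal Python sketch of the counting-process setup described above (recurrent convictions as events, treatment as a time-dependent covariate) is given below using the lifelines library's Cox time-varying fitter on simulated data; the data, the column names and the use of lifelines rather than the authors' software are all assumptions for illustration.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    # Hypothetical counting-process (start, stop) data: each subject contributes one row per
    # interval; in_omt is a time-dependent covariate and 'event' marks a conviction in the interval.
    rng = np.random.default_rng(5)
    rows = []
    for pid in range(80):
        age = rng.integers(20, 50)
        t, in_omt = 0.0, 0
        while t < 720:
            stop = min(t + rng.exponential(120), 720)
            rate = 0.004 * (0.5 if in_omt else 1.0)           # treatment halves the hazard
            event = int(rng.random() < 1 - np.exp(-rate * (stop - t)))
            rows.append((pid, t, stop, in_omt, age, event))
            t, in_omt = stop, 1 - in_omt                      # toggle treatment status
    df = pd.DataFrame(rows, columns=["id", "start", "stop", "in_omt", "age", "event"])

    ctv = CoxTimeVaryingFitter()
    ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
    print(ctv.summary[["coef", "exp(coef)"]])   # exp(coef) for in_omt ~ adjusted hazard ratio
    ```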

  5. Regionalization of monthly rainfall erosivity patterns in Switzerland

    NASA Astrophysics Data System (ADS)

    Schmidt, Simon; Alewell, Christine; Panagos, Panos; Meusburger, Katrin

    2016-10-01

    One major controlling factor of water erosion is rainfall erosivity, which is quantified as the product of total storm energy and the maximum 30 min intensity (I30). Rainfall erosivity is often expressed as the R-factor in soil erosion risk models like the Universal Soil Loss Equation (USLE) and its revised version (RUSLE). As rainfall erosivity is closely correlated with rainfall amount and intensity, the rainfall erosivity of Switzerland can be expected to have a regional characteristic and a seasonal dynamic throughout the year. This intra-annual variability was mapped by a monthly modeling approach to assess spatial and monthly patterns of rainfall erosivity simultaneously. So far only national seasonal means and regional annual means exist for Switzerland. We used a network of 87 precipitation gauging stations with a 10 min temporal resolution to calculate long-term monthly mean R-factors. Stepwise generalized linear regression (GLM) and leave-one-out cross-validation (LOOCV) were used to select spatial covariates which explain the spatial and temporal patterns of the R-factor for each month across Switzerland. The monthly R-factor is mapped by summing the predicted R-factor from the regression equation and the corresponding regression residuals, which are interpolated by ordinary kriging (regression-kriging). As spatial covariates, a variety of precipitation indicator data were included, such as snow depth, a combination product of hourly precipitation measurements and radar observations (CombiPrecip), daily Alpine precipitation (EURO4M-APGD), and monthly precipitation sums (RhiresM). Topographic parameters (elevation, slope) were also significant explanatory variables for single months. The comparison of the 12 monthly rainfall erosivity maps showed a distinct seasonality, with the highest rainfall erosivity in summer (June, July, and August), influenced by intense rainfall events. Winter months have the lowest rainfall erosivity. A proportion of 62 % of the total annual rainfall erosivity falls within only four months (June-September). The highest erosion risk can be expected in July, when not only rainfall erosivity but also erosivity density is high. In addition to the intra-annual temporal regime, a spatial variability of this seasonality was detectable between different regions of Switzerland. The assessment of the dynamic behavior of the R-factor is valuable for the identification of susceptible seasons and regions.
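    The covariate-selection step described above (stepwise regression scored by LOOCV) can be sketched in a few lines of Python; the station data below are simulated, and the greedy forward selection with ordinary least squares is only a stand-in for the stepwise GLM used in the study, with the kriging of residuals left as a comment.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    # Hypothetical station data: monthly R-factor plus candidate spatial covariates.
    rng = np.random.default_rng(6)
    n = 87
    covariates = {
        "precip_sum": rng.gamma(5, 20, n),
        "elevation":  rng.uniform(300, 2500, n),
        "slope":      rng.uniform(0, 30, n),
    }
    X_all = np.column_stack(list(covariates.values()))
    r_factor = 0.9 * covariates["precip_sum"] + 0.02 * covariates["elevation"] + rng.normal(0, 15, n)

    # Greedy forward selection scored by leave-one-out cross-validation.
    selected, remaining, best_score = [], list(range(X_all.shape[1])), -np.inf
    while remaining:
        scores = {j: cross_val_score(LinearRegression(), X_all[:, selected + [j]], r_factor,
                                     cv=LeaveOneOut(), scoring="neg_mean_squared_error").mean()
                  for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score:
            break
        best_score = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    print("selected covariates:", [list(covariates)[j] for j in selected])
    # Regression-kriging would then interpolate the residuals of the selected regression
    # (e.g. by ordinary kriging) and add them to the regression prediction.
    ```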

  6. Statistical Modeling Reveals the Effect of Absolute Humidity on Dengue in Singapore

    PubMed Central

    Xu, Hai-Yan; Fu, Xiuju; Lee, Lionel Kim Hock; Ma, Stefan; Goh, Kee Tai; Wong, Jiancheng; Habibullah, Mohamed Salahuddin; Lee, Gary Kee Khoon; Lim, Tian Kuay; Tambyah, Paul Anantharajah; Lim, Chin Leong; Ng, Lee Ching

    2014-01-01

    Weather factors are widely studied for their effects on indicating dengue incidence trends. However, these studies have been limited due to the complex epidemiology of dengue, which involves dynamic interplay of multiple factors such as herd immunity within a population, distinct serotypes of the virus, environmental factors and intervention programs. In this study, we investigate the impact of weather factors on dengue in Singapore, considering the disease epidemiology and profile of virus serotypes. A Poisson regression combined with Distributed Lag Non-linear Model (DLNM) was used to evaluate and compare the impact of weekly Absolute Humidity (AH) and other weather factors (mean temperature, minimum temperature, maximum temperature, rainfall, relative humidity and wind speed) on dengue incidence from 2001 to 2009. The same analysis was also performed on three sub-periods, defined by predominant circulating serotypes. The performance of DLNM regression models were then evaluated through the Akaike's Information Criterion. From the correlation and DLNM regression modeling analyses of the studied period, AH was found to be a better predictor for modeling dengue incidence than the other unique weather variables. Whilst mean temperature (MeanT) also showed significant correlation with dengue incidence, the relationship between AH or MeanT and dengue incidence, however, varied in the three sub-periods. Our results showed that AH had a more stable impact on dengue incidence than temperature when virological factors were taken into consideration. AH appeared to be the most consistent factor in modeling dengue incidence in Singapore. Considering the changes in dominant serotypes, the improvements in vector control programs and the inconsistent weather patterns observed in the sub-periods, the impact of weather on dengue is modulated by these other factors. Future studies on the impact of climate change on dengue need to take all the other contributing factors into consideration in order to make meaningful public policy recommendations. PMID:24786517
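    As a highly simplified, hypothetical sketch of the lag structure described above, the following Python snippet fits a Poisson regression with plain lagged copies of absolute humidity; the DLNM used in the study additionally smooths the exposure-lag-response surface with basis functions, which is not reproduced here.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical weekly dengue counts driven by absolute humidity acting over several lags.
    rng = np.random.default_rng(7)
    weeks = 470
    ah = 18 + 3 * np.sin(2 * np.pi * np.arange(weeks) / 52) + rng.normal(0, 0.5, weeks)
    lam = np.exp(1.5 + 0.06 * pd.Series(ah).shift(2).fillna(ah.mean())
                     + 0.04 * pd.Series(ah).shift(4).fillna(ah.mean()))
    cases = rng.poisson(lam)

    # Distributed-lag Poisson regression with unconstrained lag coefficients.
    df = pd.DataFrame({"cases": cases, "ah": ah})
    for lag in range(0, 9):
        df[f"ah_lag{lag}"] = df["ah"].shift(lag)
    df = df.dropna()
    X = sm.add_constant(df[[f"ah_lag{lag}" for lag in range(0, 9)]])
    fit = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit()
    print(fit.params.filter(like="ah_lag"))
    ```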

  7. Field Scale Spatial Modelling of Surface Soil Quality Attributes in Controlled Traffic Farming

    NASA Astrophysics Data System (ADS)

    Guenette, Kris; Hernandez-Ramirez, Guillermo

    2017-04-01

    The employment of controlled traffic farming (CTF) can yield improvements in soil quality attributes through the confinement of equipment traffic to tramlines within the field. There is a need to quantify and explain the spatial heterogeneity of soil quality attributes affected by CTF to further improve our understanding and modelling of field-scale soil dynamics. Soil properties such as available nitrogen (AN), pH, soil total nitrogen (STN), soil organic carbon (SOC), bulk density, macroporosity, soil quality S-Index, plant available water capacity (PAWC) and unsaturated hydraulic conductivity (Km) were analysed and compared among trafficked and un-trafficked areas. We contrasted standard geostatistical methods such as ordinary kriging (OK) and covariate kriging (COK) as well as the hybrid method of regression kriging (ROK) to predict the spatial distribution of soil properties across two annual cropland sites actively employing CTF in Alberta, Canada. Field-scale variability was quantified more accurately through the inclusion of covariates; however, the use of ROK was shown to improve model accuracy despite the regression model composition limiting the robustness of the ROK method. The exclusion of traffic from the un-trafficked areas led to significant improvements in bulk density, macroporosity and Km while also enhancing AN, STN and SOC. The ability of the regression models and the ROK method to account for spatial trends led to the highest goodness-of-fit and lowest error for the soil physical properties, as the rigid traffic regime of CTF altered their spatial distribution at the field scale. Conversely, the COK method produced the most optimal predictions for the soil nutrient properties and Km. The use of terrain covariates derived from light detection and ranging (LiDAR), such as elevation and topographic position index (TPI), yielded the best models in the COK method at the field scale.

  8. Statistical modeling reveals the effect of absolute humidity on dengue in Singapore.

    PubMed

    Xu, Hai-Yan; Fu, Xiuju; Lee, Lionel Kim Hock; Ma, Stefan; Goh, Kee Tai; Wong, Jiancheng; Habibullah, Mohamed Salahuddin; Lee, Gary Kee Khoon; Lim, Tian Kuay; Tambyah, Paul Anantharajah; Lim, Chin Leong; Ng, Lee Ching

    2014-05-01

    Weather factors are widely studied for their effects on indicating dengue incidence trends. However, these studies have been limited due to the complex epidemiology of dengue, which involves dynamic interplay of multiple factors such as herd immunity within a population, distinct serotypes of the virus, environmental factors and intervention programs. In this study, we investigate the impact of weather factors on dengue in Singapore, considering the disease epidemiology and profile of virus serotypes. A Poisson regression combined with Distributed Lag Non-linear Model (DLNM) was used to evaluate and compare the impact of weekly Absolute Humidity (AH) and other weather factors (mean temperature, minimum temperature, maximum temperature, rainfall, relative humidity and wind speed) on dengue incidence from 2001 to 2009. The same analysis was also performed on three sub-periods, defined by predominant circulating serotypes. The performance of DLNM regression models were then evaluated through the Akaike's Information Criterion. From the correlation and DLNM regression modeling analyses of the studied period, AH was found to be a better predictor for modeling dengue incidence than the other unique weather variables. Whilst mean temperature (MeanT) also showed significant correlation with dengue incidence, the relationship between AH or MeanT and dengue incidence, however, varied in the three sub-periods. Our results showed that AH had a more stable impact on dengue incidence than temperature when virological factors were taken into consideration. AH appeared to be the most consistent factor in modeling dengue incidence in Singapore. Considering the changes in dominant serotypes, the improvements in vector control programs and the inconsistent weather patterns observed in the sub-periods, the impact of weather on dengue is modulated by these other factors. Future studies on the impact of climate change on dengue need to take all the other contributing factors into consideration in order to make meaningful public policy recommendations.

  9. Predicting the dynamics of ascospore maturation of Venturia pirina based on environmental factors.

    PubMed

    Rossi, V; Salinari, F; Pattori, E; Giosuè, S; Bugiani, R

    2009-04-01

    Airborne ascospores of Venturia pirina were trapped at two sites in northern Italy in 2002 to 2008. The cumulative proportion of ascospores trapped at each discharge was regressed against the physiological time. The best fit (R(2) = 0.90, standard error of estimates [SEest] = 0.11) was obtained using a Gompertz equation and the degree-days (>0 °C) accumulated after the day on which the first ascospore of the season was trapped (biofix day), but only for the days with ≥0.2 mm rain or ≤4 hPa vapor pressure deficit (DDwet). This Italian model performed better than the models developed in Oregon, United States (R(2) = 0.69, SEest = 0.16) or Victoria, Australia (R(2) = 0.74, SEest = 0.18), which consider only the effect of temperature. When the Italian model was evaluated against data not used in its elaboration, it accurately predicted ascospore maturation (R(2) = 0.92, SEest = 0.10). A logistic regression model was also developed to estimate the biofix for initiating the accumulation of degree-days (biofix model). The probability of the first ascospore discharge of the season increased as DDwet (calculated from 1 January) increased. Based on this model, there is low probability of the first ascospore discharge when DDwet ≤268.5 (P = 0.03) and high probability (P = 0.83) of discharge on the first day with >0.2 mm rain after such a DDwet threshold.
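    The Gompertz regression of cumulative ascospore proportion on accumulated degree-days can be sketched with scipy's curve fitting; the data points below are invented and the parameterization (two parameters, proportion = exp(-b·exp(-k·DD))) is an assumption about the general form, not the published coefficients.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(dd, b, k):
        """Cumulative proportion of the season's ascospores at DDwet = dd."""
        return np.exp(-b * np.exp(-k * dd))

    # Hypothetical (DDwet, cumulative proportion trapped) pairs after the biofix day.
    ddwet = np.array([0, 50, 100, 150, 200, 300, 400, 500, 650, 800], dtype=float)
    prop = np.array([0.01, 0.04, 0.10, 0.22, 0.38, 0.66, 0.84, 0.93, 0.98, 1.00])

    params, _ = curve_fit(gompertz, ddwet, prop, p0=[4.0, 0.01])
    pred = gompertz(ddwet, *params)
    ss_res = np.sum((prop - pred) ** 2)
    ss_tot = np.sum((prop - prop.mean()) ** 2)
    print("b, k =", params, " R^2 =", 1 - ss_res / ss_tot)
    ```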

  10. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection

    PubMed Central

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-01-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain appropriate estimates of the key model parameters by incorporating experimental data. In this paper, a systematic procedure for immune system simulation by integrating an ABM and regression methods under the framework of history matching is developed. A novel parameter estimation method that incorporates the experimental data for the simulator ABM during this procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model using the input and output data of the ABM and plays the role of an emulator during history matching. Next, we reduce the input space of the parameters by introducing an implausibility measure to discard implausible input values. Finally, the model parameters are estimated using the particle swarm optimization (PSO) algorithm by fitting the experimental data within the non-implausible input region. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and prediction accuracy but also favorable computational efficiency. PMID:29194393
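    The implausibility screening step referred to above has a standard closed form in history matching: a candidate input is kept only if the standardized distance between the emulator prediction and the observation is small. A minimal numpy sketch, with all numbers and parameter names hypothetical, is given below.

    ```python
    import numpy as np

    def implausibility(em_mean, em_var, observed, obs_var, discrepancy_var=0.0):
        """History-matching implausibility of candidate parameter settings."""
        return np.abs(observed - em_mean) / np.sqrt(em_var + obs_var + discrepancy_var)

    # Hypothetical emulator predictions (e.g. from a fitted GAM) at candidate ABM parameters.
    rng = np.random.default_rng(8)
    candidates = rng.uniform(0, 1, size=(5000, 3))        # 3 unknown ABM parameters
    em_mean = 10 * candidates[:, 0] + 5 * candidates[:, 1] ** 2 + rng.normal(0, 0.2, 5000)
    em_var = np.full(5000, 0.3)

    observed_peak = 7.5                                   # e.g. a measured viral-load summary
    imp = implausibility(em_mean, em_var, observed_peak, obs_var=0.5)
    non_implausible = candidates[imp < 3.0]               # conventional cut-off of 3
    print(non_implausible.shape[0], "of 5000 candidates kept for the PSO fitting stage")
    ```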

  11. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection.

    PubMed

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-12-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain appropriate estimates of the key model parameters by incorporating experimental data. In this paper, a systematic procedure for immune system simulation by integrating an ABM and regression methods under the framework of history matching is developed. A novel parameter estimation method that incorporates the experimental data for the simulator ABM during this procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model using the input and output data of the ABM and plays the role of an emulator during history matching. Next, we reduce the input space of the parameters by introducing an implausibility measure to discard implausible input values. Finally, the model parameters are estimated using the particle swarm optimization (PSO) algorithm by fitting the experimental data within the non-implausible input region. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and prediction accuracy but also favorable computational efficiency.

  12. Modeling a full-scale primary sedimentation tank using artificial neural networks.

    PubMed

    Gamal El-Din, A; Smith, D W

    2002-05-01

    Modeling the performance of full-scale primary sedimentation tanks has been commonly done using regression-based models, which are empirical relationships derived strictly from observed daily average influent and effluent data. Another approach to model a sedimentation tank is using a hydraulic efficiency model that utilizes tracer studies to characterize the performance of model sedimentation tanks based on eddy diffusion. However, the use of hydraulic efficiency models to predict the dynamic behavior of a full-scale sedimentation tank is very difficult as the development of such models has been done using controlled studies of model tanks. In this paper, another type of model, namely an artificial neural network modeling approach, is used to predict the dynamic response of a full-scale primary sedimentation tank. The neural model consists of two separate networks: one uses flow and influent total suspended solids data in order to predict the effluent total suspended solids from the tank, and the other makes predictions of the effluent chemical oxygen demand using data of the flow and influent chemical oxygen demand as inputs. An extensive sampling program was conducted in order to collect a data set to be used in training and validating the networks. A systematic approach was used in the building process of the model which allowed the identification of a parsimonious neural model that is able to learn (and not memorize) from past data and generalize very well to unseen data that were used to validate the model. The results seem very promising. The potential of using the model as part of a real-time process control system is also discussed.
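    A minimal sketch of one of the two networks described above (flow + influent TSS → effluent TSS) is given below with scikit-learn's multilayer perceptron on simulated data; the network size, the units and the data are assumptions, not the plant's calibrated model.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical hourly records: influent flow and influent TSS -> effluent TSS.
    rng = np.random.default_rng(9)
    n = 800
    flow = rng.uniform(200, 600, n)        # m3/h
    tss_in = rng.uniform(100, 400, n)      # mg/L
    tss_out = 0.25 * tss_in + 0.05 * flow + 10 * np.sin(tss_in / 50) + rng.normal(0, 8, n)

    X = np.column_stack([flow, tss_in])
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0))
    model.fit(X[:600], tss_out[:600])      # train on the first part of the record
    print("validation R^2:", model.score(X[600:], tss_out[600:]))
    # A second network of the same form would map flow + influent COD to effluent COD.
    ```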

  13. Transient modeling in simulation of hospital operations for emergency response.

    PubMed

    Paul, Jomon Aliyas; George, Santhosh K; Yi, Pengfei; Lin, Li

    2006-01-01

    Rapid estimates of hospital capacity after an event that may cause a disaster can assist disaster-relief efforts. Due to the dynamics of hospitals following such an event, it is necessary to accurately model the behavior of the system. A transient modeling approach using simulation and exponential functions is presented, along with its applications in an earthquake situation. The parameters of the exponential model are regressed using outputs from designed simulation experiments. The developed model is capable of representing transient patient waiting times during a disaster. Most importantly, the modeling approach allows real-time capacity estimation of hospitals of various sizes and capabilities. Further, this research analyzes the effects of priority-based routing of patients within the hospital on patient waiting times under various patient mixes. The model guides the patients based on the severity of injuries and queues the patients requiring critical care depending on their remaining survivability time. The model also accounts for the impact of prehospital transport time on patient waiting time.

  14. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    PubMed

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of the prevalence of medical care-seeking associated with caregivers' recognition of risk signs of diarrhea in their infants by using a Bayesian log-binomial regression model in OpenBUGS software. The results showed that caregivers' recognition of infants' risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the point and interval estimates of this PR and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child age in months, based on model 2) between the Bayesian log-binomial regression model and the conventional log-binomial regression model. The results showed that all three Bayesian log-binomial regression models converged and the estimated PRs were 1.130 (95%CI: 1.005-1.265), 1.128 (95%CI: 1.001-1.264) and 1.132 (95%CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged and their PRs were 1.130 (95%CI: 1.055-1.206) and 1.126 (95%CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95%CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those of the conventional log-binomial regression model, but they had good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with fewer convergence problems and has advantages in application compared with the conventional log-binomial regression model.
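    As a rough frequentist analogue of the log-binomial model discussed above, the sketch below fits a binomial GLM with a log link in Python, so that exp(coefficient) is read directly as a prevalence ratio; the data are simulated, the paper's Bayesian (OpenBUGS) implementation and the COPY method are not reproduced, and the link spelling sm.families.links.Log() assumes a recent statsmodels release.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical data: medical care-seeking (0/1) vs. caregiver recognition of risk signs,
    # adjusting for caregiver education (years).
    rng = np.random.default_rng(10)
    n = 1200
    recognition = rng.binomial(1, 0.5, n)
    education = rng.integers(0, 13, n)
    p = np.exp(np.log(0.35) + np.log(1.13) * recognition + 0.01 * education)
    care_seeking = rng.binomial(1, p)

    # Log-binomial GLM: binomial family with a log link, so exp(coef) is a prevalence ratio.
    X = sm.add_constant(np.column_stack([recognition, education]).astype(float))
    fit = sm.GLM(care_seeking, X,
                 family=sm.families.Binomial(link=sm.families.links.Log())).fit()
    print("PR for recognition:", np.exp(fit.params[1]))
    ```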

  15. The Systems Analysis of Problems of Evaluation, Prognosis and Optimization of Ground Water Management in Reclaimed Areas

    NASA Astrophysics Data System (ADS)

    Vakhonin, N.

    Problems connected to ground water dynamics arise in various areas of economic activity. In some branches the greatest interest lies in quantitative parameters of the water regime: in some cases the level regime H(t) (land reclamation, agriculture and forestry, ecology), in others the water discharge Q(t) (water intakes for water supply or, conversely, systems protecting mines and quarries from submergence). In a number of branches, ground water quality dynamics is of particular interest (potable water supply, inflow to rivers and lakes used for fish breeding, etc.). Each of these cases requires its features to be accounted for when building ground water dynamics models and carrying out monitoring for their identification. With this in mind, the problems connected to ground water dynamics are divided in the report into three types: 1) evaluation of the system state and comparison with normative parameters, or comparison of different objects with each other; 2) prognosis of the dynamics of ground water amount and quality; 3) decision-making support (optimization of ground water management). The analysis is conducted for each of them, and the conditions for choosing the level of generality of the state variable describing ground water dynamics (soil humidity, ground water level, water volume in the chamber) and the corresponding alternative model variants are shown: physical models with distributed parameters (equation of joint filtration of the two-component water-air medium, water-transfer equation, Boussinesq equation); models with lumped parameters (chamber models with chambers of various degrees of aggregation, down to a "black box"); and non-physical (statistical, regression and neural network) models. The possibility of using each of these models for the three selected types of problems is shown. Using a reclaimed agricultural object as an example, the correlation of ground water dynamics with other subsystems (open channel streams, drains, etc.) and with processes (evapotranspiration, infiltration, snow melting, etc.) is shown. Possible ways of describing the interaction of these processes with ground water dynamics are analyzed. Boundary conditions of the first, second and third kind for the internal and external boundaries of the filtration stream are classified. The necessity of monitoring is formulated for the statistical specification of source effects on a broader boundary of the filtration stream than the optimized system itself. Model identification is carried out on the basis of 24 years of monitoring data from the reclaimed catchment of the river Yaselda.

  16. IN11B-1621: Quantifying How Climate Affects Vegetation in the Amazon Rainforest

    NASA Technical Reports Server (NTRS)

    Das, Kamalika; Kodali, Anuradha; Szubert, Marcin; Ganguly, Sangram; Bongard, Joshua

    2016-01-01

    Amazon droughts in 2005 and 2010 have raised serious concern about the future of the rainforest. Amazon forests are crucial because of their role as the world's largest carbon sink, and decreased photosynthesis activity there would affect the global warming phenomenon. Especially after a decline in plant growth across 1.68 million km2 of forest during the once-in-a-century severe drought in 2010, it is of primary importance to understand the relationship between different climatic variables and vegetation. In an earlier study, we have shown that non-linear models are better than linear models at capturing the dynamics of the relationship between vegetation and climate variables such as temperature and precipitation. In this research, we learn precise models between vegetation and climatic variables (temperature, precipitation) for normal conditions in the Amazon region using genetic programming based symbolic regression. This is done by removing high-elevation and drought-affected areas and also considering the slope of the region as one of the important factors while building the model. The model learned reveals new and interesting ways historical and current climate variables affect the vegetation at any location. MAIAC data has been used as a vegetation surrogate in our study. For temperature and precipitation, we have used TRMM and MODIS Land Surface Temperature data sets while learning the non-linear regression model. However, to generalize the model and make it independent of the data source, we perform transfer learning, using regularized least squares to learn the parameters of the non-linear model from other data sources such as the precipitation and temperature from the Climatic Research Unit (CRU). This new model is very similar in structure and performance to the original learned model and verifies the same claims about the nature of the dependency between these climate variables and the vegetation in the Amazon region. As a result of this study, we are able to learn, for the very first time, how exactly different climate factors influence vegetation at any location in the Amazon rainforest, independent of the specific sources from which the data has been obtained.
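    The genetic-programming-based symbolic regression described above can be sketched with the open-source gplearn library; the gridded climate values below are simulated, the hyperparameters are illustrative, and this is not the authors' actual pipeline or data.

    ```python
    import numpy as np
    from gplearn.genetic import SymbolicRegressor

    # Hypothetical gridded samples: a vegetation index as a function of temperature,
    # precipitation and terrain slope (units arbitrary).
    rng = np.random.default_rng(11)
    n = 2000
    temp = rng.uniform(22, 32, n)
    precip = rng.uniform(0, 300, n)
    slope = rng.uniform(0, 20, n)
    ndvi = 0.6 + 0.002 * precip - 0.01 * (temp - 26) ** 2 - 0.003 * slope + rng.normal(0, 0.02, n)

    X = np.column_stack([temp, precip, slope])
    est = SymbolicRegressor(population_size=1000, generations=10,
                            function_set=("add", "sub", "mul", "div"),
                            parsimony_coefficient=0.001, random_state=0)
    est.fit(X, ndvi)
    print(est._program)   # evolved closed-form expression relating climate to vegetation
    ```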

  17. Quantifying How Climate Affects Vegetation in the Amazon Rainforest

    NASA Astrophysics Data System (ADS)

    Das, K.; Kodali, A.; Szubert, M.; Ganguly, S.; Bongard, J.

    2016-12-01

    Amazon droughts in 2005 and 2010 have raised serious concern about the future of the rainforest. Amazon forests are crucial because of their role as the world's largest carbon sink, and decreased photosynthesis activity there would affect the global warming phenomenon. Especially after a decline in plant growth across 1.68 million km2 of forest during the once-in-a-century severe drought in 2010, it is of primary importance to understand the relationship between different climatic variables and vegetation. In an earlier study, we have shown that non-linear models are better than linear models at capturing the dynamics of the relationship between vegetation and climate variables such as temperature and precipitation. In this research, we learn precise models between vegetation and climatic variables (temperature, precipitation) for normal conditions in the Amazon region using genetic programming based symbolic regression. This is done by removing high-elevation and drought-affected areas and also considering the slope of the region as one of the important factors while building the model. The model learned reveals new and interesting ways historical and current climate variables affect the vegetation at any location. MAIAC data has been used as a vegetation surrogate in our study. For temperature and precipitation, we have used TRMM and MODIS Land Surface Temperature data sets while learning the non-linear regression model. However, to generalize the model and make it independent of the data source, we perform transfer learning, using regularized least squares to learn the parameters of the non-linear model from other data sources such as the precipitation and temperature from the Climatic Research Unit (CRU). This new model is very similar in structure and performance to the original learned model and verifies the same claims about the nature of the dependency between these climate variables and the vegetation in the Amazon region. As a result of this study, we are able to learn, for the very first time, how exactly different climate factors influence vegetation at any location in the Amazon rainforest, independent of the specific sources from which the data has been obtained.

  18. Temporal Topic Modeling to Assess Associations between News Trends and Infectious Disease Outbreaks.

    PubMed

    Ghosh, Saurav; Chakraborty, Prithwish; Nsoesie, Elaine O; Cohn, Emily; Mekaru, Sumiko R; Brownstein, John S; Ramakrishnan, Naren

    2017-01-19

    In retrospective assessments, internet news reports have been shown to capture early reports of unknown infectious disease transmission prior to official laboratory confirmation. In general, media interest and reporting peaks and wanes during the course of an outbreak. In this study, we quantify the extent to which media interest during infectious disease outbreaks is indicative of trends of reported incidence. We introduce an approach that uses supervised temporal topic models to transform large corpora of news articles into temporal topic trends. The key advantages of this approach include: applicability to a wide range of diseases and ability to capture disease dynamics, including seasonality, abrupt peaks and troughs. We evaluated the method using data from multiple infectious disease outbreaks reported in the United States of America (U.S.), China, and India. We demonstrate that temporal topic trends extracted from disease-related news reports successfully capture the dynamics of multiple outbreaks such as whooping cough in U.S. (2012), dengue outbreaks in India (2013) and China (2014). Our observations also suggest that, when news coverage is uniform, efficient modeling of temporal topic trends using time-series regression techniques can estimate disease case counts with increased precision before official reports by health organizations.
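    A much-simplified, hypothetical sketch of the pipeline above (news text → topic trends → regression against case counts) is given below in Python; it substitutes plain unsupervised LDA for the supervised temporal topic model of the paper, and the toy corpus, week labels and case counts are invented.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import LinearRegression

    # Hypothetical corpus: each article has a text body and the week it was published.
    articles = ["whooping cough cases rise in county", "pertussis outbreak closes school",
                "city reports dengue cluster", "vaccination drive against pertussis"] * 25
    weeks = np.repeat(np.arange(10), 10)          # 100 articles spread over 10 weeks

    counts = CountVectorizer(stop_words="english").fit_transform(articles)
    lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)
    doc_topics = lda.transform(counts)

    # Weekly topic trend = mean topic weight of that week's articles.
    trend = np.vstack([doc_topics[weeks == w].mean(axis=0) for w in range(10)])

    # Regress (hypothetical) weekly case counts on the lag-1 topic trend.
    cases = np.array([3, 5, 9, 14, 20, 18, 12, 7, 4, 2])
    reg = LinearRegression().fit(trend[:-1], cases[1:])
    print("R^2 of case counts on lagged topic trend:", reg.score(trend[:-1], cases[1:]))
    ```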

  19. Temporal Topic Modeling to Assess Associations between News Trends and Infectious Disease Outbreaks

    NASA Astrophysics Data System (ADS)

    Ghosh, Saurav; Chakraborty, Prithwish; Nsoesie, Elaine O.; Cohn, Emily; Mekaru, Sumiko R.; Brownstein, John S.; Ramakrishnan, Naren

    2017-01-01

    In retrospective assessments, internet news reports have been shown to capture early reports of unknown infectious disease transmission prior to official laboratory confirmation. In general, media interest and reporting peaks and wanes during the course of an outbreak. In this study, we quantify the extent to which media interest during infectious disease outbreaks is indicative of trends of reported incidence. We introduce an approach that uses supervised temporal topic models to transform large corpora of news articles into temporal topic trends. The key advantages of this approach include: applicability to a wide range of diseases and ability to capture disease dynamics, including seasonality, abrupt peaks and troughs. We evaluated the method using data from multiple infectious disease outbreaks reported in the United States of America (U.S.), China, and India. We demonstrate that temporal topic trends extracted from disease-related news reports successfully capture the dynamics of multiple outbreaks such as whooping cough in U.S. (2012), dengue outbreaks in India (2013) and China (2014). Our observations also suggest that, when news coverage is uniform, efficient modeling of temporal topic trends using time-series regression techniques can estimate disease case counts with increased precision before official reports by health organizations.

  20. Empirical analysis of storm-time energetic electron enhancements

    NASA Astrophysics Data System (ADS)

    O'Brien, Thomas Paul, III

    This Ph.D. thesis documents a program for studying the appearance of energetic electrons in the Earth's outer radiation belts that is associated with many geomagnetic storms. The dynamic evolution of the electron radiation belts is an outstanding empirical problem in both theoretical space physics and its applied sibling, space weather. The project emphasizes the development of empirical tools and their use in testing several theoretical models of the energization of the electron belts. First, I develop the Statistical Asynchronous Regression technique to provide proxy electron fluxes throughout the parts of the radiation belts explored by geosynchronous and GPS spacecraft. Next, I show that a theoretical adiabatic model can relate the local time asymmetry of the proxy geosynchronous fluxes to the asymmetry of the geomagnetic field. Then, I perform a superposed epoch analysis on the proxy fluxes at local noon to identify magnetospheric and interplanetary precursors of relativistic electron enhancements. Finally, I use statistical and neural network phase space analyses to determine the hourly evolution of flux at a virtual stationary monitor. The dynamic equation quantitatively identifies the importance of different drivers of the electron belts. This project provides empirical constraints on theoretical models of electron acceleration.

  1. Temporal Topic Modeling to Assess Associations between News Trends and Infectious Disease Outbreaks

    PubMed Central

    Ghosh, Saurav; Chakraborty, Prithwish; Nsoesie, Elaine O.; Cohn, Emily; Mekaru, Sumiko R.; Brownstein, John S.; Ramakrishnan, Naren

    2017-01-01

    In retrospective assessments, internet news reports have been shown to capture early reports of unknown infectious disease transmission prior to official laboratory confirmation. In general, media interest and reporting peaks and wanes during the course of an outbreak. In this study, we quantify the extent to which media interest during infectious disease outbreaks is indicative of trends of reported incidence. We introduce an approach that uses supervised temporal topic models to transform large corpora of news articles into temporal topic trends. The key advantages of this approach include: applicability to a wide range of diseases and ability to capture disease dynamics, including seasonality, abrupt peaks and troughs. We evaluated the method using data from multiple infectious disease outbreaks reported in the United States of America (U.S.), China, and India. We demonstrate that temporal topic trends extracted from disease-related news reports successfully capture the dynamics of multiple outbreaks such as whooping cough in U.S. (2012), dengue outbreaks in India (2013) and China (2014). Our observations also suggest that, when news coverage is uniform, efficient modeling of temporal topic trends using time-series regression techniques can estimate disease case counts with increased precision before official reports by health organizations. PMID:28102319

  2. Simulation of RCC Crack Growth Due to Carbon Oxidation in High-Temperature Gas Environments

    NASA Technical Reports Server (NTRS)

    Titov, E. V.; Levin, D. A.; Picetti, Donald J.; Anderson, Brian P.

    2009-01-01

    The carbon wall oxidation technique coupled with a CFD technique was employed to study the flow in an expanding crack channel caused by the oxidation of the channel's carbon walls. A recessing 3D surface morphing procedure was developed and tested against arcjet experimental results. Multi-block structured adaptive meshing was used to model changes in the computational domain due to wall recession. Wall regression rates for reinforced carbon-carbon (RCC) samples that were tested in a high-enthalpy arcjet environment were obtained computationally and used to assess the channel expansion. The test geometry and flow conditions render the flow regime transitional to continuum; therefore, a Navier-Stokes gas dynamic approach with temperature-jump and velocity-slip corrections to the boundary conditions was used. The modeled mechanism for wall material loss was the reaction of atomic oxygen with bare carbon. The predicted channel growth was found to agree with arcjet observations. Local gas flow field results were found to affect the oxidation rate in a manner that cannot be predicted by previous mass-loss correlations. The method holds promise for future modeling of material/gas-dynamic interactions for hypersonic flight.

  3. Evaluation of weighted regression and sample size in developing a taper model for loblolly pine

    Treesearch

    Kenneth L. Cormier; Robin M. Reich; Raymond L. Czaplewski; William A. Bechtold

    1992-01-01

    A stem profile model, fit using pseudo-likelihood weighted regression, was used to estimate merchantable volume of loblolly pine (Pinus taeda L.) in the southeast. The weighted regression increased model fit marginally, but did not substantially increase model performance. In all cases, the unweighted regression models performed as well as the...
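
    The weighted fit mentioned above can be illustrated with an ordinary weighted least squares sketch. The variance model, variable names, and data below are illustrative assumptions, not the pseudo-likelihood weighting or taper equation actually used in the study.

    ```python
    # Weighted regression of stem volume on tree size, with weights inversely
    # proportional to an assumed variance that grows with diameter. Synthetic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    dbh = rng.uniform(10, 60, 200)                                           # diameter (cm), synthetic
    volume = 0.05 * dbh + 0.004 * dbh**2 + rng.normal(0, 0.02 * dbh, 200)    # noise variance grows with size

    X = sm.add_constant(np.column_stack([dbh, dbh**2]))
    ols = sm.OLS(volume, X).fit()
    wls = sm.WLS(volume, X, weights=1.0 / dbh**2).fit()                      # down-weight noisier large trees

    print("OLS params:", ols.params.round(4))
    print("WLS params:", wls.params.round(4))
    ```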

  4. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    NASA Astrophysics Data System (ADS)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent and dependent variables. When the dependent variable is categorical, a logistic regression model is used to calculate the odds of each category; when the categories are ordered, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation is needed to determine population values from a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. The estimation uses data on the number of dengue fever patients in Semarang City; the observation units are 144 villages in Semarang City. The research yields a local GWOLR model for each village and the probabilities of the dengue fever patient count categories.
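
    As a rough illustration of the geographically weighted idea, the sketch below fits a kernel-weighted *binary* logistic regression at a single focal location. The paper's model is ordinal and estimated in R, so this Python sketch with synthetic coordinates and covariates is only a simplified stand-in, not the GWOLR estimator itself.

    ```python
    # Fit a local (binary) logistic regression at one focal site, weighting
    # observations by a Gaussian kernel of distance. Synthetic data throughout.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 144                                            # e.g. one row per village
    coords = rng.uniform(0, 10, size=(n, 2))           # synthetic village coordinates
    X = rng.normal(size=(n, 3))                        # synthetic covariates
    y = (X[:, 0] + coords[:, 0] / 10 + rng.normal(0, 1, n) > 0).astype(int)

    focal = coords[0]                                  # location where a local model is wanted
    bandwidth = 2.0
    d = np.linalg.norm(coords - focal, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)            # Gaussian kernel weights

    local_model = LogisticRegression().fit(X, y, sample_weight=w)
    print("local coefficients at focal site:", local_model.coef_.round(3))
    ```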

  5. Modelling Fourier regression for time series data - a case study: modelling inflation in the foods sector in Indonesia

    NASA Astrophysics Data System (ADS)

    Prahutama, Alan; Suparti; Wahyu Utami, Tiani

    2018-03-01

    Regression analysis models the relationship between response variables and predictor variables. The parametric approach to regression is very strict in its assumptions, whereas the nonparametric regression model does not require assumptions about the model form. Time series data are observations of a variable recorded over time, so to model a time series with regression we must first determine the response and predictor variables: the response variable is the value at time t (yt), and the predictors are its significant lags. In nonparametric regression modelling, one developing approach is the Fourier series approach, which has the advantage of handling data with trigonometric (periodic) patterns. Modelling with a Fourier series requires the parameter K (the number of basis terms), which can be determined by the Generalized Cross Validation (GCV) method. In inflation modelling for the transportation, communication, and financial services sector, the Fourier series yields an optimal K of 120 parameters with an R-squared of 99%, compared with an R-squared of 90% for multiple linear regression.
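
    A minimal sketch of Fourier-series regression with K chosen by GCV, on a synthetic periodic series. The basis construction and GCV formula are standard, but the data, the linear trend term, and the range of candidate K are illustrative assumptions.

    ```python
    # Nonparametric regression with a Fourier basis; the number of harmonics K
    # is selected by Generalized Cross Validation (GCV). Synthetic series.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 240
    t = np.arange(n)
    y = 2 * np.sin(2 * np.pi * t / 12) + 0.5 * np.cos(2 * np.pi * t / 6) + rng.normal(0, 0.3, n)
    x = (t % 12) / 12.0                                    # predictor rescaled to [0, 1)

    def fourier_design(x, K):
        cols = [np.ones_like(x), x]                        # intercept and linear term
        for k in range(1, K + 1):
            cols += [np.cos(2 * np.pi * k * x), np.sin(2 * np.pi * k * x)]
        return np.column_stack(cols)

    def gcv(x, y, K):
        X = fourier_design(x, K)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        p = X.shape[1]                                     # trace of the hat matrix for OLS
        return len(y) * rss / (len(y) - p) ** 2

    best_K = min(range(1, 11), key=lambda K: gcv(x, y, K))
    print("K selected by GCV:", best_K)
    ```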

  6. Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network.

    PubMed

    Liu, Yu-Ting; Lin, Yang-Yin; Wu, Shang-Lin; Chuang, Chun-Hsiang; Lin, Chin-Teng

    2016-02-01

    This paper proposes a generalized prediction system called a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an on-line gradient descent learning rule to address the electroencephalography (EEG) regression problem in brain dynamics for driving fatigue. The cognitive states of drivers significantly affect driving safety; in particular, fatigue driving, or drowsy driving, endangers both the individual and the public. For this reason, the development of brain-computer interfaces (BCIs) that can identify drowsy driving states is a crucial and urgent topic of study. Many EEG-based BCIs have been developed as artificial auxiliary systems for use in various practical applications because of the benefits of measuring EEG signals. In the literature, the efficacy of EEG-based BCIs in recognition tasks has been limited by low resolutions. The system proposed in this paper represents the first attempt to use the recurrent fuzzy neural network (RFNN) architecture to increase adaptability in realistic EEG applications to overcome this bottleneck. This paper further analyzes brain dynamics in a simulated car driving task in a virtual-reality environment. The proposed RSEFNN model is evaluated using the generalized cross-subject approach, and the results indicate that the RSEFNN is superior to competing models regardless of the use of recurrent or nonrecurrent structures.

  7. Challenges and opportunities for integrating lake ecosystem modelling approaches

    USGS Publications Warehouse

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative view on the functioning of lake ecosystems. We end with a set of specific recommendations that may be of help in the further development of lake ecosystem models.

  8. Effect of Interpersonal Interaction on Festinating Gait Rehabilitation in Patients with Parkinson’s Disease

    PubMed Central

    Uchitomi, Hirotaka; Ogawa, Ken-ichiro; Orimo, Satoshi; Wada, Yoshiaki; Miyake, Yoshihiro

    2016-01-01

    Although human walking gait rhythms are generated by native individual gait dynamics, these gait dynamics change during interactions between humans. A typical phenomenon is synchronization of gait rhythms during cooperative walking. Our previous research revealed that fluctuation characteristics in stride interval of subjects with Parkinson’s disease changed from random to 1/f fluctuation as fractal characteristics during cooperative walking with the gait assist system Walk-Mate, which emulates a human interaction using interactive rhythmic cues. Moreover, gait dynamics were relearned through Walk-Mate gait training. However, the system’s clinical efficacy was unclear because the previous studies did not focus on specific gait rhythm disorder symptoms. Therefore, this study aimed to evaluate the effect of Walk-Mate on festinating gait among subjects with Parkinson’s disease. Three within-subject experimental conditions were used: (1) preinteraction condition, (2) interaction condition, and (3) postinteraction condition. The only difference between conditions was the interactive rhythmic cues generated by Walk-Mate. Because subjects with festinating gait gradually and involuntarily decreased their stride interval, the regression slope of stride interval as an index of severity of preinteraction festinating gait was elevated. The regression slope in the interaction condition was more gradual than during the preinteraction condition, indicating that the interactive rhythmic cues contributed to relieving festinating gait and stabilizing gait dynamics. Moreover, the gradual regression slope was carried over to the postinteraction condition, indicating that subjects with festinating gait have the potential to relearn stable gait dynamics. These results suggest that disordered gait dynamics are clinically restored through interactive rhythmic cues and that Walk-Mate may have the potential to assist therapists in more effective rehabilitation. Trial Registration: UMIN Clinical Trials Registry UMIN000012591 PMID:27253376

  9. Effect of Interpersonal Interaction on Festinating Gait Rehabilitation in Patients with Parkinson's Disease.

    PubMed

    Uchitomi, Hirotaka; Ogawa, Ken-Ichiro; Orimo, Satoshi; Wada, Yoshiaki; Miyake, Yoshihiro

    2016-01-01

    Although human walking gait rhythms are generated by native individual gait dynamics, these gait dynamics change during interactions between humans. A typical phenomenon is synchronization of gait rhythms during cooperative walking. Our previous research revealed that fluctuation characteristics in stride interval of subjects with Parkinson's disease changed from random to 1/f fluctuation as fractal characteristics during cooperative walking with the gait assist system Walk-Mate, which emulates a human interaction using interactive rhythmic cues. Moreover, gait dynamics were relearned through Walk-Mate gait training. However, the system's clinical efficacy was unclear because the previous studies did not focus on specific gait rhythm disorder symptoms. Therefore, this study aimed to evaluate the effect of Walk-Mate on festinating gait among subjects with Parkinson's disease. Three within-subject experimental conditions were used: (1) preinteraction condition, (2) interaction condition, and (3) postinteraction condition. The only difference between conditions was the interactive rhythmic cues generated by Walk-Mate. Because subjects with festinating gait gradually and involuntarily decreased their stride interval, the regression slope of stride interval as an index of severity of preinteraction festinating gait was elevated. The regression slope in the interaction condition was more gradual than during the preinteraction condition, indicating that the interactive rhythmic cues contributed to relieving festinating gait and stabilizing gait dynamics. Moreover, the gradual regression slope was carried over to the postinteraction condition, indicating that subjects with festinating gait have the potential to relearn stable gait dynamics. These results suggest that disordered gait dynamics are clinically restored through interactive rhythmic cues and that Walk-Mate may have the potential to assist therapists in more effective rehabilitation. UMIN Clinical Trials Registry UMIN000012591.

  10. Construction and analysis of a modular model of caspase activation in apoptosis

    PubMed Central

    Harrington, Heather A; Ho, Kenneth L; Ghosh, Samik; Tung, KC

    2008-01-01

    Background A key physiological mechanism employed by multicellular organisms is apoptosis, or programmed cell death. Apoptosis is triggered by the activation of caspases in response to both extracellular (extrinsic) and intracellular (intrinsic) signals. The extrinsic and intrinsic pathways are characterized by the formation of the death-inducing signaling complex (DISC) and the apoptosome, respectively; both the DISC and the apoptosome are oligomers with complex formation dynamics. Additionally, the extrinsic and intrinsic pathways are coupled through the mitochondrial apoptosis-induced channel via the Bcl-2 family of proteins. Results A model of caspase activation is constructed and analyzed. The apoptosis signaling network is simplified through modularization methodologies and equilibrium abstractions for three functional modules. The mathematical model is composed of a system of ordinary differential equations which is numerically solved. Multiple linear regression analysis investigates the role of each module, and reduced models are constructed to identify key contributions of the extrinsic and intrinsic pathways in triggering apoptosis for different cell lines. Conclusion Through linear regression techniques, we identified the feedbacks, dissociation of complexes, and negative regulators as the key components in apoptosis. The analysis and reduced models for our model formulation reveal that the chosen cell lines predominantly exhibit strong extrinsic caspase behavior, typical of type I cells. Furthermore, under the simplified model framework, the selected cell lines exhibit different modes by which caspase activation may occur. Finally, the proposed modularized model of apoptosis may generalize behavior for additional cells and tissues, specifically identifying and predicting components responsible for the transition from type I to type II cell behavior. PMID:19077196

  11. Collaborative simulations and experiments for a novel yield model of coal devolatilization in oxy-coal combustion conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.

    Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational efforts required. The development of scale-bridging models is therefore necessary to find a good compromise between computational effort and physical-chemical modeling precision. This paper presents a procedure for scale-bridging modeling of coal devolatilization, in the presence of experimental error, that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed to improve the understanding of the uncertainty associated with devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven to be consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.

  12. Collaborative simulations and experiments for a novel yield model of coal devolatilization in oxy-coal combustion conditions

    DOE PAGES

    Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.; ...

    2017-06-03

    Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational efforts required. The development of scale-bridging models is therefore necessary to find a good compromise between computational effort and physical-chemical modeling precision. This paper presents a procedure for scale-bridging modeling of coal devolatilization, in the presence of experimental error, that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed to improve the understanding of the uncertainty associated with devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven to be consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.
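
    The GPR step can be sketched with scikit-learn on synthetic data: a Gaussian-process fit of a yield-like response to temperature that returns both a mean prediction and an uncertainty band. The kernel, variable names, and values are illustrative assumptions, not the study's consistency analysis or its actual model forms.

    ```python
    # Gaussian-Process Regression of a yield-like response on temperature,
    # with predictive uncertainty. Synthetic data and kernel choices.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(4)
    T = rng.uniform(1200, 1800, 25)[:, None]              # reactor temperature (K), synthetic
    yield_frac = 0.4 + 0.4 / (1 + np.exp(-(T[:, 0] - 1500) / 60)) + rng.normal(0, 0.02, 25)

    kernel = 1.0 * RBF(length_scale=100.0) + WhiteKernel(noise_level=1e-3)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(T, yield_frac)

    T_grid = np.linspace(1200, 1800, 7)[:, None]
    mean, std = gpr.predict(T_grid, return_std=True)
    for t, m, s in zip(T_grid[:, 0], mean, std):
        print(f"T = {t:6.0f} K   yield = {m:.3f} +/- {2 * s:.3f}")
    ```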

  13. Automated generation of quantum-accurate classical interatomic potentials for metals and semiconductors

    NASA Astrophysics Data System (ADS)

    Thompson, Aidan; Foiles, Stephen; Schultz, Peter; Swiler, Laura; Trott, Christian; Tucker, Garritt

    2013-03-01

    Molecular dynamics (MD) is a powerful condensed matter simulation tool for bridging between macroscopic continuum models and quantum models (QM) treating a few hundred atoms, but is limited by the accuracy of available interatomic potentials. Sound physical and chemical understanding of these interactions has resulted in a variety of concise potentials for certain systems, but it is difficult to extend them to new materials and properties. The growing availability of large QM data sets has made it possible to use more automated machine-learning approaches. Bartók et al. demonstrated that the bispectrum of the local neighbor density provides good regression surrogates for QM models. We adopt a similar bispectrum representation within a linear regression scheme. We have produced potentials for silicon and tantalum, and we are currently extending the method to III-V compounds. Results will be presented demonstrating the accuracy of these potentials relative to the training data, as well as their ability to accurately predict material properties not explicitly included in the training data. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Dept. of Energy Nat. Nuclear Security Admin. under Contract DE-AC04-94AL85000.
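
    The regression step alone can be sketched as a linear least-squares fit of reference energies to per-configuration descriptors standing in for summed bispectrum components. The descriptors and energies below are synthetic; no actual bispectrum calculation is performed.

    ```python
    # Linear least-squares fit of quantum reference energies to descriptor
    # columns (stand-ins for bispectrum components). Synthetic data.
    import numpy as np

    rng = np.random.default_rng(5)
    n_configs, n_desc = 200, 30
    D = rng.normal(size=(n_configs, n_desc))                  # descriptor matrix (one row per configuration)
    true_coeffs = rng.normal(size=n_desc)
    E_qm = D @ true_coeffs + rng.normal(0, 0.01, n_configs)   # reference energies with small noise

    A = np.column_stack([np.ones(n_configs), D])              # add a constant reference-energy term
    coeffs, residuals, rank, _ = np.linalg.lstsq(A, E_qm, rcond=None)

    E_fit = A @ coeffs
    rmse = np.sqrt(np.mean((E_fit - E_qm) ** 2))
    print(f"training RMSE: {rmse:.4f} (energy units)")
    ```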

  14. Estimating mono- and bi-phasic regression parameters using a mixture piecewise linear Bayesian hierarchical model

    PubMed Central

    Zhao, Rui; Catalano, Paul; DeGruttola, Victor G.; Michor, Franziska

    2017-01-01

    The dynamics of tumor burden, secreted proteins, or other biomarkers over time are often used to evaluate the effectiveness of therapy and to predict outcomes for patients. Many methods have been proposed to investigate longitudinal trends to better characterize patients and to understand disease progression. However, most approaches assume a homogeneous patient population and a uniform response trajectory over time and across patients. Here, we present a mixture piecewise linear Bayesian hierarchical model, which takes into account both population heterogeneity and nonlinear relationships between biomarkers and time. Simulation results show that our method was able to classify subjects according to their patterns of treatment response with greater than 80% accuracy in the three scenarios tested. We then applied our model to a large randomized controlled phase III clinical trial of multiple myeloma patients. Analysis results suggest that the longitudinal tumor burden trajectories in multiple myeloma patients are heterogeneous and nonlinear, even among patients assigned to the same treatment cohort. In addition, between cohorts, there are distinct differences in terms of the regression parameters and the distributions among categories in the mixture. Those results imply that longitudinal data from clinical trials may harbor unobserved subgroups and nonlinear relationships; accounting for both may be important for analyzing longitudinal data. PMID:28723910
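
    A single-subject, non-Bayesian simplification of the piecewise-linear building block: fitting a bi-phasic trajectory with one change point to a synthetic biomarker series. The paper's hierarchical mixture over many patients is not reproduced here; the data, parameterization, and starting values are assumptions.

    ```python
    # Fit a two-segment (bi-phasic) piecewise linear curve to one synthetic
    # biomarker trajectory with SciPy.
    import numpy as np
    from scipy.optimize import curve_fit

    def biphasic(t, b0, slope1, slope2, t_break):
        """Two connected line segments that change slope at t_break."""
        return b0 + slope1 * np.minimum(t, t_break) + slope2 * np.maximum(t - t_break, 0)

    rng = np.random.default_rng(13)
    t = np.linspace(0, 18, 30)                          # months on study, synthetic
    marker = biphasic(t, 10.0, -0.8, 0.3, 9.0) + rng.normal(0, 0.3, t.size)

    params, _ = curve_fit(biphasic, t, marker, p0=[8.0, -0.5, 0.1, 6.0])
    print("intercept, decline slope, regrowth slope, change point:", params.round(2))
    ```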

  15. Effects of disturbance and climate change on ecosystem performance in the Yukon River Basin boreal forest

    USGS Publications Warehouse

    Wylie, Bruce K.; Rigge, Matthew B.; Brisco, Brian; Mrnaghan, Kevin; Rover, Jennifer R.; Long, Jordan

    2014-01-01

    A warming climate influences boreal forest productivity, dynamics, and disturbance regimes. We used ecosystem models and 250 m satellite Normalized Difference Vegetation Index (NDVI) data averaged over the growing season (GSN) to model current, and estimate future, ecosystem performance. We modeled Expected Ecosystem Performance (EEP), or anticipated productivity, in undisturbed stands over the 2000–2008 period from a variety of abiotic data sources, using a rule-based piecewise regression tree. The EEP model was applied to an A1B future climate ensemble projection to quantify expected changes to mature boreal forest performance. Ecosystem Performance Anomalies (EPA) were identified as the residuals of the EEP-GSN relationship and represent departures from expected performance conditions. These performance data were used to monitor successional events following fire. Results suggested that maximum EPA occurs 30–40 years following fire, and deciduous stands generally have higher EPA than coniferous stands. Mean undisturbed EEP is projected to increase 5.6% by 2040 and 8.7% by 2070, suggesting an increased deciduous component in boreal forests. Our results contribute to the understanding of boreal forest successional dynamics and its response to climate change. This information enables informed decisions to prepare for, and adapt to, climate change in the Yukon River Basin forest.
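
    A minimal sketch of a piecewise regression-tree fit of an expected-performance index to a few abiotic drivers. The drivers, data, and tree depth are illustrative assumptions, not the rule-based model or NDVI products used in the study.

    ```python
    # Piecewise regression tree predicting a performance index from abiotic
    # drivers. Synthetic data and arbitrary tree settings.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(6)
    n = 500
    precip = rng.uniform(200, 800, n)                  # growing-season precipitation (mm)
    temp = rng.uniform(8, 18, n)                       # growing-season temperature (C)
    elev = rng.uniform(100, 1200, n)                   # elevation (m)
    gsn = 0.3 + 0.0005 * precip + 0.02 * temp - 0.0001 * elev + rng.normal(0, 0.03, n)

    X = np.column_stack([precip, temp, elev])
    tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=20).fit(X, gsn)

    print("R^2 on training data:", round(tree.score(X, gsn), 3))
    print("feature importances (precip, temp, elev):", tree.feature_importances_.round(3))
    ```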

  16. Nonlinear dynamics of team performance and adaptability in emergency response.

    PubMed

    Guastello, Stephen J

    2010-04-01

    The impact of team size and performance feedback on adaptation levels and performance of emergency response (ER) teams was examined to introduce a metric for quantifying adaptation levels based on nonlinear dynamical systems (NDS) theory. NDS principles appear in reports surrounding Hurricane Katrina, earthquakes, floods, a disease epidemic, and the Southeast Asian tsunami. They are also intrinsic to coordination within teams, adaptation levels, and performance in dynamic decision processes. Performance was measured in a dynamic decision task in which ER teams of different sizes worked against an attacker who was trying to destroy a city (total N = 225 undergraduates). The complexity of teams' and attackers' adaptation strategies and the role of the opponents' performance were assessed by nonlinear regression analysis. An optimal group size for team performance was identified. Teams were more readily influenced by the attackers' performance than vice versa. The adaptive capabilities of attackers and teams were impaired by their opponents in some conditions. ER teams should be large enough to contribute a critical mass of ideas but not so large that coordination would be compromised. ER teams used self-organized strategies that could have been more adaptive, whereas attackers used chaotic strategies. The model and results are applicable to ER processes or training maneuvers involving dynamic decisions but could be limited to nonhierarchical groups.

  17. Artificial Neural Networks: A Novel Approach to Analysing the Nutritional Ecology of a Blowfly Species, Chrysomya megacephala

    PubMed Central

    Bianconi, André; Zuben, Cláudio J. Von; Serapião, Adriane B. de S.; Govone, José S.

    2010-01-01

    Bionomic features of blowflies may be clarified and detailed by the deployment of appropriate modelling techniques such as artificial neural networks, which are mathematical tools widely applied to the resolution of complex biological problems. The principal aim of this work was to use three well-known neural networks, namely Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), and Adaptive Neural Network-Based Fuzzy Inference System (ANFIS), to ascertain whether these tools would be able to outperform a classical statistical method (multiple linear regression) in the prediction of the number of resultant adults (survivors) of experimental populations of Chrysomya megacephala (F.) (Diptera: Calliphoridae), based on initial larval density (number of larvae), amount of available food, and duration of immature stages. The coefficient of determination (R2) derived from the RBF was the lowest in the testing subset in relation to the other neural networks, even though its R2 in the training subset exhibited virtually a maximum value. The ANFIS model permitted the achievement of the best testing performance. Hence this model was deemed to be more effective in relation to MLP and RBF for predicting the number of survivors. All three networks outperformed the multiple linear regression, indicating that neural models could be taken as feasible techniques for predicting bionomic variables concerning the nutritional dynamics of blowflies. PMID:20569135
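
    The comparison can be sketched with scikit-learn: a multilayer perceptron versus multiple linear regression on a synthetic nonlinear survivorship response. The RBF and ANFIS models of the paper are omitted, and all data, network sizes, and settings are assumptions for illustration.

    ```python
    # Compare an MLP with multiple linear regression on a nonlinear response
    # (survivors vs. larval density and food). Synthetic data.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 400
    density = rng.uniform(100, 2000, n)                # initial larval density
    food = rng.uniform(10, 100, n)                     # grams of available food
    survivors = density * np.exp(-density / (40 * food)) + rng.normal(0, 20, n)

    X = np.column_stack([density, food])
    X_tr, X_te, y_tr, y_te = train_test_split(X, survivors, random_state=0)

    lin = LinearRegression().fit(X_tr, y_tr)
    mlp = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                                     random_state=0)).fit(X_tr, y_tr)

    print("linear regression test R^2:", round(lin.score(X_te, y_te), 3))
    print("MLP test R^2:              ", round(mlp.score(X_te, y_te), 3))
    ```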

  18. The effects of pressure sensor acoustics on airdata derived from a high-angle-of-attack flush airdata sensing (HI-FADS) system

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen R.; Moes, Timothy R.

    1991-01-01

    The accuracy of a prototype nonintrusive airdata system developed for high-angle-of-attack measurements was demonstrated for quasi-steady maneuvers at angles of attack as great as 55 degrees during phase one of the F-18 high alpha research vehicle flight test program. This system consists of a matrix of nine pressure ports arranged in annular rings on the aircraft nose, and estimates the complete airdata set utilizing flow modeling and nonlinear regression. Particular attention is paid to the effects of acoustical distortions within the individual pressure sensors of the HI-FADS pressure matrix. A dynamic model that quantifies and describes these acoustical distortions is developed and solved in closed form for the frequency response.

  19. A hysteretic model considering Stribeck effect for small-scale magnetorheological damper

    NASA Astrophysics Data System (ADS)

    Zhao, Yu-Liang; Xu, Zhao-Dong

    2018-06-01

    The magnetorheological (MR) damper is an ideal semi-active control device for vibration suppression. The mechanical properties of this type of device show strong nonlinear characteristics, especially for small-scale dampers. Therefore, developing a model that can accurately describe the nonlinearity of such a device is crucial to control design. In this paper, the dynamic characteristics of a small-scale MR damper developed by our research group are tested, and the Stribeck effect is observed in the low-velocity region. Then, an improved model based on the sigmoid model is proposed to describe the Stribeck effect observed in the experiment. After that, the parameters of this model are identified by genetic algorithms, and the mathematical relationship between these parameters and the input current, excitation frequency, and amplitude is regressed. Finally, the forces predicted by the proposed model are validated against the experimental data. The results show that this model can accurately predict the mechanical properties of the small-scale damper, especially the Stribeck effect in the low-velocity region.

  20. Use of MODIS Vegetation Data in Dynamic SPARROW Modeling of Reactive Nitrogen Flux

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Brakebill, J.; Schwarz, G. E.; Nolin, A. W.; Shih, J.; Blomquist, J.; Alexander, R. B.; Macauley, M.

    2012-12-01

    SPARROW models are widely used to identify and quantify the sources of contaminants in watersheds and to predict their flux and concentration at specified locations downstream. Conventional SPARROW models are steady-state in form, and describe the average relationship between sources and stream conditions based on non-linear regression of long-term water quality monitoring data on spatially-referenced explanatory information. But many watershed management issues involve intra- and inter-annual changes in contaminant sources, hydrologic forcing, or other environmental conditions which cause a temporary imbalance between watershed inputs and outputs. Dynamic behavior of the system relating to changes in watershed storage and processing then becomes important. We describe the results of dynamic statistical calibration of a SPARROW model of total reactive nitrogen flux in the Potomac River Basin based on seasonal water quality and watershed explanatory data for 80 monitoring stations over the period 2000 to 2008. One challenge in dynamic modeling of reactive nitrogen is obtaining frequently-reported, spatially-detailed input data on the phenology of agricultural production and growth of other terrestrial vegetation. In this NASA-funded research, we use the Enhanced Vegetation Index (EVI) and gross primary productivity (GPP) data from the Terra Satellite-borne MODIS sensor to parameterize seasonal uptake and release of nitrogen. The spatial reference frame of the model is a 16,000-reach, 1:100,000-scale stream network, and the computational time step is seasonal. Precipitation and temperature data are from PRISM. The model describes transient storage and transport of nitrogen from multiple nonpoint sources including fertilized cropland, pasture, urban/suburban land, and atmospheric deposition. Removal of nitrogen from watershed storage to stream channels and to "permanent" sinks (deep groundwater and the atmosphere) occurs as parallel first-order processes. Point sources of nitrogen bypass storage and flow directly to stream channels. Model results indicate that, on average, a little more than half of the reactive nitrogen flux comes from transient storage; but in some sub-watersheds a large majority of the flux comes from stored nitrogen input to the watershed in previous seasons and years.

  1. USAF (United States Air Force) Stability and Control DATCOM (Data Compendium)

    DTIC Science & Technology

    1978-04-01

    A regression analysis involves the study of a group of variables to determine their effect on a given parameter. Because of the empirical nature of this... regression analysis of mathematical statistics. ... Experiment, OSR TN 58-114, MIT Fluid Dynamics Research Group Rept. 57-5, 1957. (U) 90. Kennet, H., and Ashley, H.: Review of Unsteady Aerodynamic Studies in

  2. Dynamic Network Model for Smart City Data-Loss Resilience Case Study: City-to-City Network for Crime Analytics

    PubMed Central

    Kotevska, Olivera; Kusne, A. Gilad; Samarov, Daniel V.; Lbath, Ahmed; Battou, Abdella

    2017-01-01

    Today’s cities generate tremendous amounts of data, thanks to a boom in affordable smart devices and sensors. The resulting big data creates opportunities to develop diverse sets of context-aware services and systems, ensuring smart city services are optimized to the dynamic city environment. Critical resources in these smart cities will be more rapidly deployed to regions in need, and those regions predicted to have an imminent or prospective need. For example, crime data analytics may be used to optimize the distribution of police, medical, and emergency services. However, as smart city services become dependent on data, they also become susceptible to disruptions in data streams, such as data loss due to signal quality reduction or due to power loss during data collection. This paper presents a dynamic network model for improving service resilience to data loss. The network model identifies statistically significant shared temporal trends across multivariate spatiotemporal data streams and utilizes these trends to improve data prediction performance in the case of data loss. Dynamics also allow the system to respond to changes in the data streams such as the loss or addition of new information flows. The network model is demonstrated by city-based crime rates reported in Montgomery County, MD, USA. A resilient network is developed utilizing shared temporal trends between cities to provide improved crime rate prediction and robustness to data loss, compared with the use of single city-based auto-regression. A maximum improvement in performance of 7.8% for Silver Spring is found and an average improvement of 5.6% among cities with high crime rates. The model also correctly identifies all the optimal network connections, according to prediction error minimization. City-to-city distance is designated as a predictor of shared temporal trends in crime and weather is shown to be a strong predictor of crime in Montgomery County. PMID:29250476

  3. Dynamic Network Model for Smart City Data-Loss Resilience Case Study: City-to-City Network for Crime Analytics.

    PubMed

    Kotevska, Olivera; Kusne, A Gilad; Samarov, Daniel V; Lbath, Ahmed; Battou, Abdella

    2017-01-01

    Today's cities generate tremendous amounts of data, thanks to a boom in affordable smart devices and sensors. The resulting big data creates opportunities to develop diverse sets of context-aware services and systems, ensuring smart city services are optimized to the dynamic city environment. Critical resources in these smart cities will be more rapidly deployed to regions in need, and those regions predicted to have an imminent or prospective need. For example, crime data analytics may be used to optimize the distribution of police, medical, and emergency services. However, as smart city services become dependent on data, they also become susceptible to disruptions in data streams, such as data loss due to signal quality reduction or due to power loss during data collection. This paper presents a dynamic network model for improving service resilience to data loss. The network model identifies statistically significant shared temporal trends across multivariate spatiotemporal data streams and utilizes these trends to improve data prediction performance in the case of data loss. Dynamics also allow the system to respond to changes in the data streams such as the loss or addition of new information flows. The network model is demonstrated by city-based crime rates reported in Montgomery County, MD, USA. A resilient network is developed utilizing shared temporal trends between cities to provide improved crime rate prediction and robustness to data loss, compared with the use of single city-based auto-regression. A maximum improvement in performance of 7.8% for Silver Spring is found and an average improvement of 5.6% among cities with high crime rates. The model also correctly identifies all the optimal network connections, according to prediction error minimization. City-to-city distance is designated as a predictor of shared temporal trends in crime and weather is shown to be a strong predictor of crime in Montgomery County.
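
    The gain from shared temporal trends can be sketched by comparing a single-city auto-regression with a regression that also uses a neighbouring city's lagged values. The two synthetic series below share a common trend by construction; none of this is the Montgomery County data or the paper's network-selection procedure.

    ```python
    # Single-city auto-regression vs. a city-to-city regression that adds a
    # neighbouring city's lag. Synthetic weekly series with a shared trend.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(8)
    weeks = 200
    shared = np.cumsum(rng.normal(0, 1, weeks))        # shared temporal trend
    city_a = shared + rng.normal(0, 1, weeks)
    city_b = shared + rng.normal(0, 1, weeks)

    y = city_a[1:]                                     # next-week value of city A
    X_single = city_a[:-1, None]                       # its own previous week only
    X_network = np.column_stack([city_a[:-1], city_b[:-1]])

    split = 150
    for name, X in [("single-city AR", X_single), ("city-to-city network", X_network)]:
        m = LinearRegression().fit(X[:split], y[:split])
        print(name, "test R^2:", round(m.score(X[split:], y[split:]), 3))
    ```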

  4. Applying Kaplan-Meier to Item Response Data

    ERIC Educational Resources Information Center

    McNeish, Daniel

    2018-01-01

    Some IRT models can be equivalently modeled in alternative frameworks such as logistic regression. Logistic regression can also model time-to-event data, which concerns the probability of an event occurring over time. Using the relation between time-to-event models and logistic regression and the relation between logistic regression and IRT, this…

  5. BGFit: management and automated fitting of biological growth curves.

    PubMed

    Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana

    2013-09-25

    Existing tools to model cell growth curves do not offer a flexible integrative approach to manage large datasets and automatically estimate parameters. Due to the increase of experimental time-series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, further allowing the results to be managed efficiently in a structured and hierarchical way. The data managing system allows users to organize projects, experiments and measurement data, and also to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and users can easily add new models, thus expanding the current set. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing different data modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is also applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, and is fully scalable to a high number of projects, data and model complexity.
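
    As an illustration of the kind of curve fitting BGFit automates, the sketch below fits a Gompertz model to a synthetic optical-density series with SciPy. The parameterization, starting values, and data are assumptions, not BGFit's internal implementation.

    ```python
    # Fit a simple Gompertz growth curve to a synthetic optical-density series.
    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, A, B, k):
        """Simple Gompertz curve: asymptote A, displacement B, growth rate k."""
        return A * np.exp(-B * np.exp(-k * t))

    rng = np.random.default_rng(9)
    t = np.linspace(0, 24, 49)                          # hours
    od = gompertz(t, 1.8, 8.0, 0.35) + rng.normal(0, 0.02, t.size)

    params, cov = curve_fit(gompertz, t, od, p0=[1.5, 5.0, 0.3])
    perr = np.sqrt(np.diag(cov))
    for name, p, e in zip(["A", "B", "k"], params, perr):
        print(f"{name} = {p:.3f} +/- {e:.3f}")
    ```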

  6. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    PubMed

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

    Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to analysis of medical time series data: (1) classical statistical approaches, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach (1) is much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.

  7. Socioeconomic dynamics of water quality in the Egyptian Nile

    NASA Astrophysics Data System (ADS)

    Malik, Maheen; Nisar, Zainab; Karakatsanis, Georgios

    2016-04-01

    The Nile River remains the most important source of freshwater for Egypt, accounting for nearly all of the country's drinking and irrigation water. An estimated 95% of the total population lives along the banks of the Nile(1). Water quality deterioration, in addition to the general natural scarcity of water in the region(2), is therefore the main driver for this study; the water conflict in the Blue Nile region further aggravates the issue. The study evaluates different water quality parameters and their concentrations in the Egyptian Nile, further assessing the temporal dynamics of water quality in the area with (a) the Environmental Kuznets Curve (EKC)(3) and (b) the Jevons Paradox (JP)(4) in order to identify water quality improvements or degradations using selected socioeconomic variables(5). For this purpose, various environmental indicators including BOD, COD, DO, phosphorus and TDS were plotted against different economic variables including population, Gross Domestic Product (GDP), annual freshwater withdrawal and improved water source. Mathematically, this was expressed by 2nd- and 3rd-degree polynomial regressions generating the EKC and JP, respectively. The basic goal of the regression analysis is to model and highlight the dynamic trend of water quality indicators in relation to their established permissible limits, which will allow the identification of optimal future water quality policies. The results clearly indicate that the dependency of water quality indicators on socioeconomic variables differs for every indicator: COD was above the permissible limits in all cases despite its decreasing trend, while BOD and phosphate indicated increasing future concentrations if present trends continue. This could be an indication of a rebound effect explained by the Jevons Paradox, i.e. water quality deterioration after its improvement, due either to population growth or to intensification of the economic activities related to these indicators. Keywords: Water quality dynamics, Environmental Kuznets Curve (EKC), Jevons Paradox (JP), economic variables, polynomial regressions, environmental indicators, permissible limit. References: (1) Evans, A. (2007). River of Life River Nile. (2) Egypt's Water Crisis - Recipe for Disaster. (2016). [Blog] EcoMENA - Echoing Sustainability. (3) Alstine, J. and Neumayer, E. (2010). The Environmental Kuznets Curve. (4) Garrett, T. (2014). Rebound, Backfire, and the Jevons Paradox. [Blog] (5) Data.worldbank.org
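
    The EKC and Jevons-paradox regressions can be sketched as 2nd- and 3rd-degree polynomial fits of an indicator against an economic variable. The values below are synthetic, not the Nile indicators analysed in the study.

    ```python
    # Fit 2nd- and 3rd-degree polynomials of a water-quality indicator (e.g. BOD)
    # against an economic variable (e.g. GDP per capita). Synthetic data.
    import numpy as np

    rng = np.random.default_rng(10)
    gdp = np.linspace(1, 10, 40)                                # economic variable (arbitrary units)
    bod = 2 + 3 * gdp - 0.25 * gdp**2 + rng.normal(0, 0.5, 40)  # inverted-U (EKC-like) pattern

    for degree, label in [(2, "EKC (quadratic)"), (3, "Jevons (cubic)")]:
        coeffs = np.polyfit(gdp, bod, degree)
        fitted = np.polyval(coeffs, gdp)
        ss_res = np.sum((bod - fitted) ** 2)
        ss_tot = np.sum((bod - bod.mean()) ** 2)
        print(f"{label}: coefficients {coeffs.round(3)}, R^2 = {1 - ss_res / ss_tot:.3f}")
    ```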

  8. Determining the response of sea level to atmospheric pressure forcing using TOPEX/POSEIDON data

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Pihos, Greg

    1994-01-01

    The static response of sea level to the forcing of atmospheric pressure, the so-called inverted barometer (IB) effect, is investigated using TOPEX/POSEIDON data. This response, characterized by the rise and fall of sea level to compensate for the change of atmospheric pressure at a rate of -1 cm/mbar, is not associated with any ocean currents and hence is normally treated as an error to be removed from sea level observations. Linear regression and spectral transfer function analyses are applied to sea level and pressure to examine the validity of the IB effect. In regions outside the tropics, the regression coefficient is found to be consistently close to the theoretical value except for the regions of western boundary currents, where mesoscale variability interferes with the IB effect. The spectral transfer function shows a near-IB response at long periods; the average regression coefficient poleward of 30 degrees is -0.84 +/- 0.29 cm/mbar (1 standard deviation). The deviation from -1 cm/mbar is shown to be caused primarily by the effect of wind forcing on sea level, based on a multivariate linear regression model involving both pressure and wind forcing. The regression coefficient for pressure resulting from the multivariate analysis is -0.96 +/- 0.32 cm/mbar. In the tropics the multivariate analysis fails because sea level in the tropics is primarily responding to remote wind forcing. However, after removing from the data the wind-forced sea level estimated by a dynamic model of the tropical Pacific, the pressure regression coefficient improves from -1.22 +/- 0.69 cm/mbar to -0.99 +/- 0.46 cm/mbar, clearly revealing an IB response. The results of the study suggest that, with a proper removal of the effect of wind forcing, the IB effect is valid in most of the open ocean at periods longer than 20 days and spatial scales larger than 500 km.

  9. Impact of weather factors on hand, foot and mouth disease, and its role in short-term incidence trend forecast in Huainan City, Anhui Province.

    PubMed

    Zhao, Desheng; Wang, Lulu; Cheng, Jian; Xu, Jun; Xu, Zhiwei; Xie, Mingyu; Yang, Huihui; Li, Kesheng; Wen, Lingying; Wang, Xu; Zhang, Heng; Wang, Shusi; Su, Hong

    2017-03-01

    Hand, foot, and mouth disease (HFMD) is one of the most common communicable diseases in China, and current climate change has been recognized as a significant contributor. Nevertheless, no reliable models have been put forward to predict the dynamics of HFMD cases based on short-term weather variations. The present study aimed to examine the association between weather factors and HFMD, and to explore the accuracy of a seasonal auto-regressive integrated moving average (SARIMA) model with local weather conditions in forecasting HFMD. Weather and HFMD data from 2009 to 2014 in Huainan, China, were used. A Poisson regression model combined with a distributed lag non-linear model (DLNM) was applied to examine the relationship between weather factors and HFMD. The forecasting model for HFMD was built using the SARIMA model. The results showed that temperature rise was significantly associated with an elevated risk of HFMD. Yet, no correlations between relative humidity, barometric pressure and rainfall, and HFMD were observed. SARIMA models with a temperature variable fitted the HFMD data better than the model without it (sR² increased, while the BIC decreased), and the SARIMA(0, 1, 1)(0, 1, 0)52 model offered the best fit for the HFMD data. In addition, compared with females and nursery children, the SARIMA model may be more suitable for predicting the number of HFMD cases among males and scattered children, where it achieves high precision. In conclusion, high temperature could increase the risk of contracting HFMD. A SARIMA model with a temperature variable can effectively improve forecast accuracy, providing valuable information for policy makers and public health officials to construct a best-fitting model and optimize HFMD prevention.

  10. Impact of weather factors on hand, foot and mouth disease, and its role in short-term incidence trend forecast in Huainan City, Anhui Province

    NASA Astrophysics Data System (ADS)

    Zhao, Desheng; Wang, Lulu; Cheng, Jian; Xu, Jun; Xu, Zhiwei; Xie, Mingyu; Yang, Huihui; Li, Kesheng; Wen, Lingying; Wang, Xu; Zhang, Heng; Wang, Shusi; Su, Hong

    2017-03-01

    Hand, foot, and mouth disease (HFMD) is one of the most common communicable diseases in China, and current climate change has been recognized as a significant contributor. Nevertheless, no reliable models have been put forward to predict the dynamics of HFMD cases based on short-term weather variations. The present study aimed to examine the association between weather factors and HFMD, and to explore the accuracy of a seasonal auto-regressive integrated moving average (SARIMA) model with local weather conditions in forecasting HFMD. Weather and HFMD data from 2009 to 2014 in Huainan, China, were used. A Poisson regression model combined with a distributed lag non-linear model (DLNM) was applied to examine the relationship between weather factors and HFMD. The forecasting model for HFMD was built using the SARIMA model. The results showed that temperature rise was significantly associated with an elevated risk of HFMD. Yet, no correlations between relative humidity, barometric pressure and rainfall, and HFMD were observed. SARIMA models with a temperature variable fitted the HFMD data better than the model without it (sR² increased, while the BIC decreased), and the SARIMA(0, 1, 1)(0, 1, 0)52 model offered the best fit for the HFMD data. In addition, compared with females and nursery children, the SARIMA model may be more suitable for predicting the number of HFMD cases among males and scattered children, where it achieves high precision. In conclusion, high temperature could increase the risk of contracting HFMD. A SARIMA model with a temperature variable can effectively improve forecast accuracy, providing valuable information for policy makers and public health officials to construct a best-fitting model and optimize HFMD prevention.
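
    A minimal sketch of the forecasting step: a SARIMA(0, 1, 1)(0, 1, 0)52 model with mean temperature as an exogenous regressor, fitted to synthetic weekly data rather than the Huainan surveillance series; the forecast horizon and series lengths are illustrative assumptions.

    ```python
    # SARIMA with a weekly seasonal period of 52 and temperature as an
    # exogenous regressor. Synthetic weekly data.
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(11)
    weeks = 4 * 52
    t = np.arange(weeks)
    temperature = 15 + 10 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 1, weeks)
    cases = np.maximum(0, 50 + 3 * temperature
                       + 20 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 5, weeks))

    train = slice(0, weeks - 12)                       # hold out the last 12 weeks
    model = SARIMAX(cases[train], exog=temperature[train],
                    order=(0, 1, 1), seasonal_order=(0, 1, 0, 52))
    fit = model.fit(disp=False)

    forecast = fit.forecast(steps=12, exog=temperature[weeks - 12:])
    print(forecast.round(1))
    ```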

  11. Modelling and mapping tick dynamics using volunteered observations.

    PubMed

    Garcia-Martí, Irene; Zurita-Milla, Raúl; van Vliet, Arnold J H; Takken, Willem

    2017-11-14

    Tick populations and tick-borne infections have steadily increased since the mid-1990s, posing an ever-increasing risk to public health. Yet, modelling tick dynamics remains challenging because of the lack of data and knowledge on this complex phenomenon. Here we present an approach to model and map tick dynamics using volunteered data. This approach is illustrated with 9 years of data collected by a group of trained volunteers who sampled active questing ticks (AQT) on a monthly basis at 15 locations in the Netherlands. We aimed to find the main environmental drivers of AQT at multiple time-scales, and to devise daily AQT maps at the national level for 2014. Tick dynamics is a complex ecological problem driven by biotic (e.g. pathogens, wildlife, humans) and abiotic (e.g. weather, landscape) factors. We enriched the volunteered AQT collection with six types of weather variables (aggregated at 11 temporal scales), three types of satellite-derived vegetation indices, land cover, and mast years. Then, we applied a feature engineering process to derive a set of 101 features characterizing the conditions that yielded a particular count of AQT on a given date and location. To devise models predicting the AQT, we use a time-aware Random Forest regression method, which is suitable for finding non-linear relationships in complex ecological problems and provides an estimate of the most important features for predicting the AQT. We trained a model capable of fitting AQT with reduced statistical metrics. The multi-temporal study on feature importance indicates that variables linked to water levels in the atmosphere (i.e. evapotranspiration, relative humidity) consistently showed a higher explanatory power than the temperature variables used in previous works. As a product of this study, we are able to map daily tick dynamics at the national level. This study paves the way towards the design of new applications in the fields of environmental research, nature management, and public health. It also illustrates how Citizen Science initiatives produce geospatial data collections that can support scientific analysis, thus enabling the monitoring of complex environmental phenomena.
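
    A minimal sketch of the regression step: a Random Forest fit of monthly tick counts to a few weather features with a chronological (time-aware) train/test split and feature importances. The features, data, and settings are illustrative assumptions, not the 101-feature model of the study.

    ```python
    # Random Forest regression of monthly active questing tick (AQT) counts on
    # weather features, with a chronological train/test split. Synthetic data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(12)
    n = 9 * 12 * 15                                    # ~9 years x 12 months x 15 sites
    evapo = rng.uniform(0, 5, n)                       # evapotranspiration (mm/day)
    rel_hum = rng.uniform(40, 100, n)                  # relative humidity (%)
    temp = rng.uniform(-5, 25, n)                      # mean temperature (C)
    aqt = np.maximum(0, 5 + 8 * evapo + 0.2 * rel_hum + 0.5 * temp + rng.normal(0, 5, n))

    X = np.column_stack([evapo, rel_hum, temp])
    split = int(0.8 * n)                               # train on earlier records, test on later
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[:split], aqt[:split])

    print("test R^2:", round(rf.score(X[split:], aqt[split:]), 3))
    print("importances (evapo, rel_hum, temp):", rf.feature_importances_.round(3))
    ```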

  12. The development of video game enjoyment in a role playing game.

    PubMed

    Wirth, Werner; Ryffel, Fabian; von Pape, Thilo; Karnowski, Veronika

    2013-04-01

    This study examines the development of video game enjoyment over time. The results of a longitudinal study (N=62) show that enjoyment increases over several sessions. Moreover, results of a multilevel regression model indicate a causal link between the dependent variable video game enjoyment and the predictor variables exploratory behavior, spatial presence, competence, suspense and solution, and simulated experiences of life. These findings are important for video game research because they reveal the antecedents of video game enjoyment in a real-world longitudinal setting. Results are discussed in terms of the dynamics of video game enjoyment under real-world conditions.

  13. Dynamic Filtering Improves Attentional State Prediction with fNIRS

    NASA Technical Reports Server (NTRS)

    Harrivel, Angela R.; Weissman, Daniel H.; Noll, Douglas C.; Huppert, Theodore; Peltier, Scott J.

    2016-01-01

    Brain activity can predict a person's level of engagement in an attentional task. However, estimates of brain activity are often confounded by measurement artifacts and systemic physiological noise. The optimal method for filtering this noise - thereby increasing such state prediction accuracy - remains unclear. To investigate this, we asked study participants to perform an attentional task while we monitored their brain activity with functional near infrared spectroscopy (fNIRS). We observed higher state prediction accuracy when noise in the fNIRS hemoglobin [Hb] signals was filtered with a non-stationary (adaptive) model as compared to static regression (84% +/- 6% versus 72% +/- 15%).

  14. Lateral stability analysis for X-29A drop model using system identification methodology

    NASA Technical Reports Server (NTRS)

    Raney, David L.; Batterson, James G.

    1989-01-01

    A 22-percent dynamically scaled replica of the X-29A forward-swept-wing airplane has been flown in radio-controlled drop tests at the NASA Langley Research Center. A system identification study of the recorded data was undertaken to examine the stability and control derivatives that influence the lateral behavior of this vehicle with particular emphasis on an observed wing rock phenomenon. All major lateral stability derivatives and the damping-in-roll derivative were identified for angles of attack from 5 to 80 degrees by using a data-partitioning methodology and a modified stepwise regression algorithm.

  15. Young Women’s Dynamic Family Size Preferences in the Context of Transitioning Fertility

    PubMed Central

    Yeatman, Sara; Sennott, Christie; Culpepper, Steven

    2013-01-01

    Dynamic theories of family size preferences posit that they are not a fixed and stable goal but rather are akin to a moving target that changes within individuals over time. Nonetheless, in high-fertility contexts, changes in family size preferences tend to be attributed to low construct validity and measurement error instead of genuine revisions in preferences. To address the appropriateness of this incongruity, the present study examines evidence for the sequential model of fertility among a sample of young Malawian women living in a context of transitioning fertility. Using eight waves of closely spaced data and fixed-effects models, we find that these women frequently change their reported family size preferences and that these changes are often associated with changes in their relationship and reproductive circumstances. The predictability of change gives credence to the argument that ideal family size is a meaningful construct, even in this higher-fertility setting. Changes are not equally predictable across all women, however, and gamma regression results demonstrate that women for whom reproduction is a more distant goal change their fertility preferences in less-predictable ways. PMID:23619999

  16. A speed limit compliance model for dynamic speed display sign.

    PubMed

    Ardeshiri, Anam; Jeihani, Mansoureh

    2014-12-01

    Violating speed limits is a major cause of motor vehicle crashes. Various techniques have been adopted to ensure that posted speed limits are obeyed by drivers. This study investigates the effect of dynamic speed display signs (DSDSs) on drivers' compliance with posted speed limit. An extensive speed data collection upstream of, adjacent to, and downstream of DSDS locations on multiple road classes with different speed limits (25, 35, and 45 mph) was performed short-term and long-term after DSDS installation. Conventional statistical analysis, regression models, and a Bayesian network were developed to assess the DSDS's effectiveness. General compliance with speed limit (upstream of the DSDS location), time of day, day of week, duration of DSDS operation, and distance from the DSDS location were significantly correlated with speed limit compliance adjacent to the DSDS. While compliance with the speed limit due to the DSDS increased by 5%, speed reduction occurred in 40% of the cases. Since drivers were likely to increase their speed after passing the DSDS, it should be installed on critical points supplemented with enforcement.

  17. Stochastic seasonality and nonlinear density-dependent factors regulate population size in an African rodent

    USGS Publications Warehouse

    Leirs, H.; Stenseth, N.C.; Nichols, J.D.; Hines, J.E.; Verhagen, R.; Verheyen, W.

    1997-01-01

    Ecology has long been troubled by the controversy over how populations are regulated. Some ecologists focus on the role of environmental effects, whereas others argue that density-dependent feedback mechanisms are central. The relative importance of both processes is still hotly debated, but clear examples of both processes acting in the same population are rare. Key-factor analysis (regression of population changes on possible causal factors) and time-series analysis are often used to investigate the presence of density dependence, but such approaches may be biased and provide no information on actual demographic rates. Here we report on both density-dependent and density-independent effects in a murid rodent pest species, the multimammate rat Mastomys natalensis (Smith, 1834), using statistical capture-recapture models. Both effects occur simultaneously, but we also demonstrate that they do not affect all demographic rates in the same way. We have incorporated the obtained estimates of demographic rates into a population dynamics model and show that the observed dynamics are affected by stabilizing nonlinear density-dependent components coupled with strong deterministic and stochastic seasonal components.

  18. Young women's dynamic family size preferences in the context of transitioning fertility.

    PubMed

    Yeatman, Sara; Sennott, Christie; Culpepper, Steven

    2013-10-01

    Dynamic theories of family size preferences posit that they are not a fixed and stable goal but rather are akin to a moving target that changes within individuals over time. Nonetheless, in high-fertility contexts, changes in family size preferences tend to be attributed to low construct validity and measurement error instead of genuine revisions in preferences. To address the appropriateness of this incongruity, the present study examines evidence for the sequential model of fertility among a sample of young Malawian women living in a context of transitioning fertility. Using eight waves of closely spaced data and fixed-effects models, we find that these women frequently change their reported family size preferences and that these changes are often associated with changes in their relationship and reproductive circumstances. The predictability of change gives credence to the argument that ideal family size is a meaningful construct, even in this higher-fertility setting. Changes are not equally predictable across all women, however, and gamma regression results demonstrate that women for whom reproduction is a more distant goal change their fertility preferences in less-predictable ways.

  19. Multidimensional Ultrasound Imaging of the Wrist: Changes of Shape and Displacement of the Median Nerve and Tendons in Carpal Tunnel Syndrome

    PubMed Central

    Filius, Anika; Scheltens, Marjan; Bosch, Hans G.; van Doorn, Pieter A.; Stam, Henk J.; Hovius, Steven E.R.; Amadio, Peter C.; Selles, Ruud W.

    2015-01-01

    The dynamics of structures within the carpal tunnel may be altered in carpal tunnel syndrome (CTS) due to fibrotic changes and increased carpal tunnel pressure. Ultrasound can visualize these potential changes, making it a potentially accurate diagnostic tool. To study this, we imaged the carpal tunnel of 113 patients and 42 controls. CTS severity was classified according to validated clinical and nerve conduction study (NCS) classifications. Transverse and longitudinal displacement and shape (changes) were calculated for the median nerve, tendons, and surrounding tissue. To assess diagnostic value, binary logistic regression modeling was applied. Reduced longitudinal nerve displacement (p≤0.019), increased nerve cross-sectional area (p≤0.006) and perimeter (p≤0.007), and a trend of relatively changed tendon displacements were seen in patients. Changes were more convincing when CTS was classified as more severe. Binary logistic modeling to diagnose CTS using ultrasound showed a sensitivity of 70-71% and specificity of 80-84%. In conclusion, CTS patients have altered dynamics of structures within the carpal tunnel. PMID:25865180
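
    The following sketch shows how a binary logistic regression on ultrasound-style predictors can be turned into sensitivity and specificity figures; the feature names (nerve cross-sectional area, longitudinal displacement), effect sizes, and data are synthetic assumptions, not the study's variables or results.

```python
# Binary logistic regression on synthetic ultrasound-style predictors, with
# sensitivity/specificity computed at the default 0.5 probability cutoff.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n = 155                                   # ~113 patients + 42 controls
cts = rng.random(n) < 0.73                # 1 = CTS patient (synthetic labels)
csa = rng.normal(10 + 3 * cts, 2)         # nerve cross-sectional area, mm^2
disp = rng.normal(4 - 1.5 * cts, 1)       # longitudinal nerve displacement, mm
X = np.column_stack([csa, disp])

clf = LogisticRegression().fit(X, cts)
pred = clf.predict(X)
tn, fp, fn, tp = confusion_matrix(cts, pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```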

  20. Vibration based structural health monitoring of an arch bridge: From automated OMA to damage detection

    NASA Astrophysics Data System (ADS)

    Magalhães, F.; Cunha, A.; Caetano, E.

    2012-04-01

    In order to evaluate the usefulness of approaches based on modal parameter tracking for structural health monitoring of bridges, a dynamic monitoring system was installed in September 2007 on a concrete arch bridge in the city of Porto, Portugal. The implementation of algorithms for continuous on-line identification of modal parameters from structural responses to ambient excitation (automated Operational Modal Analysis) made it possible to build a very complete database of the time evolution of the bridge's modal characteristics over more than 2 years. This paper describes the strategy followed to minimize the effects of environmental and operational factors on the bridge's natural frequencies, enabling, in a subsequent stage, the identification of structural anomalies. Alternative static and dynamic regression models are tested and complemented by a Principal Components Analysis. Afterwards, damage identification is attempted with control charts. Finally, it is demonstrated that the adopted processing methodology permits the detection of realistic damage scenarios, associated with frequency shifts of around 0.2%, which were simulated with a numerical model.
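
    A minimal sketch of the underlying idea, under assumed variable names: identified natural frequencies are regressed on environmental/operational covariates, and the residuals are monitored with a simple control chart so that a small damage-induced frequency shift becomes visible. The covariates, noise levels, and control limits are illustrative, not those of the Porto monitoring system.

```python
# Regress natural frequency on temperature and traffic, then flag residuals
# outside 3-sigma control limits. A 0.2% frequency shift is simulated.
import numpy as np

rng = np.random.default_rng(2)
n_train, n_test = 500, 100
temp = rng.uniform(0, 35, n_train + n_test)
traffic = rng.uniform(0, 1, n_train + n_test)
freq = 0.80 - 2e-4 * temp - 5e-4 * traffic + rng.normal(0, 2e-4, n_train + n_test)
freq[n_train:] -= 0.002 * 0.80                      # simulated 0.2% damage shift

X = np.column_stack([np.ones_like(temp), temp, traffic])
beta, *_ = np.linalg.lstsq(X[:n_train], freq[:n_train], rcond=None)
resid = freq - X @ beta                             # environment-corrected frequency

mu, sigma = resid[:n_train].mean(), resid[:n_train].std()
alarms = np.abs(resid[n_train:] - mu) > 3 * sigma   # Shewhart-type 3-sigma limits
print("alarm rate after simulated damage:", alarms.mean())
```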

  1. Molecular dynamics simulations of fluid cyclopropane with MP2/CBS-fitted intermolecular interaction potentials

    NASA Astrophysics Data System (ADS)

    Ho, Yen-Ching; Wang, Yi-Siang; Chao, Sheng D.

    2017-08-01

    Modeling fluid cycloalkanes with molecular dynamics simulations has proven to be a very challenging task, partly because of the lack of a reliable force field based on quantum chemistry calculations. In this paper, we construct an ab initio force field for fluid cyclopropane using second-order Møller-Plesset perturbation theory. We consider 15 conformers of the cyclopropane dimer for the orientation sampling. Single-point energies at important geometries are calibrated by the coupled-cluster method with single, double, and perturbative triple excitations. Dunning's correlation consistent basis sets (up to aug-cc-pVTZ) are used in extrapolating the interaction energies to the complete basis set limit. The force field parameters in a 9-site Lennard-Jones model are regressed against the calculated interaction energies without using empirical data. With this ab initio force field, we perform molecular dynamics simulations of fluid cyclopropane and calculate both the structural and dynamical properties. We compare the simulation results with those using an empirical force field and obtain a quantitative agreement for the detailed atom-wise radial distribution functions. The experimentally observed gross radial distribution function (extracted from the neutron scattering measurements) is well reproduced in our simulation. Moreover, the calculated self-diffusion coefficients and shear viscosities are in good agreement with the experimental data over a wide range of thermodynamic conditions. To the best of our knowledge, this is the first ab initio force field which is capable of competing with empirical force fields for simulating fluid cyclopropane.
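
    A heavily simplified sketch of the regression step: here a single effective 12-6 Lennard-Jones pair is fitted to synthetic dimer interaction energies with nonlinear least squares, whereas the paper fits a 9-site model to MP2/CBS energies. All numbers below are invented for illustration.

```python
# Fit 12-6 Lennard-Jones parameters to (synthetic) ab initio dimer energies.
import numpy as np
from scipy.optimize import curve_fit

def lj(r, epsilon, sigma):
    """12-6 Lennard-Jones interaction energy."""
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Synthetic "ab initio" energies on a grid of centre-of-mass separations (nm, kJ/mol)
r = np.linspace(0.35, 0.9, 25)
e_ref = lj(r, 1.2, 0.42) + np.random.default_rng(3).normal(0, 0.02, r.size)

popt, pcov = curve_fit(lj, r, e_ref, p0=[1.0, 0.4])
print("epsilon = %.3f kJ/mol, sigma = %.3f nm" % tuple(popt))
```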

  2. Method of Obtaining High Resolution Intrinsic Wire Boom Damping Parameters for Multi-Body Dynamics Simulations

    NASA Technical Reports Server (NTRS)

    Yew, Alvin G.; Chai, Dean J.; Olney, David J.

    2010-01-01

    The goal of NASA's Magnetospheric MultiScale (MMS) mission is to understand magnetic reconnection with sensor measurements from four spinning satellites flown in a tight tetrahedron formation. Four of the six electric field sensors on each satellite are located at the ends of 60-meter wire booms to increase measurement sensitivity in the spin plane and to minimize motion coupling from perturbations on the main body. A propulsion burn, however, might induce boom oscillations that could impact science measurements if the oscillations do not damp to values on the order of 0.1 degree in a timely fashion. Large damping time constants could also adversely affect flight dynamics and attitude control performance. In this paper, we will discuss the implementation of a high resolution method for calculating the boom's intrinsic damping, which was used in multi-body dynamics simulations. In summary, experimental data were obtained with a scaled-down boom, which was suspended as a pendulum in vacuum. Optical techniques were designed to accurately measure the natural decay of angular position, and subsequent data processing algorithms provided excellent spatial and temporal resolution. This method was repeated in a parametric study for various lengths, root tensions, and vacuum levels. For all data sets, regression models for damping were applied, including nonlinear viscous, frequency-independent hysteretic, Coulomb, and combinations of these. Our data analysis and dynamics models have shown that the intrinsic damping of the baseline boom is insufficient, forcing project management to explore mitigation strategies.
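
    A sketch of fitting one of the candidate damping models (linear viscous) to an angular decay record with nonlinear least squares; the boom parameters and noise level are invented, and in practice the initial frequency guess would come from an FFT of the measured record.

```python
# Fit theta(t) = A * exp(-zeta*omega_n*t) * cos(omega_d*t + phi) to a
# synthetic pendulum decay record (linear viscous damping model only).
import numpy as np
from scipy.optimize import curve_fit

def viscous_decay(t, A, zeta, omega_n, phi):
    omega_d = omega_n * np.sqrt(1.0 - zeta ** 2)      # damped natural frequency
    return A * np.exp(-zeta * omega_n * t) * np.cos(omega_d * t + phi)

t = np.linspace(0, 120, 4000)                          # s
true = viscous_decay(t, 5.0, 0.002, 2 * np.pi * 0.35, 0.1)   # degrees
theta = true + np.random.default_rng(4).normal(0, 0.02, t.size)

popt, _ = curve_fit(viscous_decay, t, theta, p0=[4.0, 0.01, 2.2, 0.0])
A, zeta, omega_n, phi = popt
print("damping ratio zeta = %.4f, decay time constant = %.1f s" % (zeta, 1.0 / (zeta * omega_n)))
```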

  3. Messing Up Texas?: A Re-Analysis of the Effects of Executions on Homicides.

    PubMed

    Brandt, Patrick T; Kovandzic, Tomislav V

    2015-01-01

    Executions in Texas from 1994 to 2005 do not deter homicides, contrary to the results of Land et al. (2009). We find that using different models--based on pre-tests for unit roots that correct for earlier model misspecifications--one cannot reject the null hypothesis that executions do not lead to a change in homicides in Texas over this period. Using additional control variables, we show that variables such as the number of prisoners in Texas may drive the main drop in homicides over this period. Such conclusions, however, are highly sensitive to model specification decisions, calling into question the assumptions about fixed parameters and constant structural relationships. This means that dynamic regressions used to account for policy changes that may affect homicides need to be applied with significant care and attention.
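
    A sketch of the pre-test-then-model workflow described above, on synthetic monthly series: each series is checked for a unit root with the augmented Dickey-Fuller test, non-stationary series are differenced, and a dynamic regression of homicide changes on lagged execution changes is fitted. This is not the authors' exact specification.

```python
# ADF unit-root pre-tests followed by a simple dynamic regression in differences.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 144                                            # monthly data, 1994-2005
executions = rng.poisson(2, n).astype(float)
homicides = 200 + np.cumsum(rng.normal(0, 5, n))   # random walk: unit root by design

for name, series in [("executions", executions), ("homicides", homicides)]:
    stat, pval, *_ = adfuller(series)
    print(f"ADF {name}: p = {pval:.3f}")           # large p suggests a unit root

df = pd.DataFrame({"d_hom": np.diff(homicides), "d_exec": np.diff(executions)})
df["d_exec_lag1"] = df["d_exec"].shift(1)
df = df.dropna()
model = sm.OLS(df["d_hom"], sm.add_constant(df[["d_exec_lag1"]])).fit()
print(model.summary().tables[1])
```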

  4. Temporal Drivers of Liking Based on Functional Data Analysis and Non-Additive Models for Multi-Attribute Time-Intensity Data of Fruit Chews.

    PubMed

    Kuesten, Carla; Bi, Jian

    2018-06-03

    Conventional drivers-of-liking analysis was extended with a time dimension into temporal drivers of liking (TDOL), based on functional data analysis methodology and non-additive models for multiple-attribute time-intensity (MATI) data. The non-additive models, which consider both direct effects and interaction effects of attributes on consumer overall liking, include the Choquet integral and fuzzy measures from multi-criteria decision-making, and linear regression based on variance decomposition. The dynamics of TDOL, i.e., the derivatives of the relative-importance functional curves, were also explored. The well-established R packages 'fda', 'kappalab', and 'relaimpo' were used in the paper for developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.

  5. A Model Comparison for Count Data with a Positively Skewed Distribution with an Application to the Number of University Mathematics Courses Completed

    ERIC Educational Resources Information Center

    Liou, Pey-Yan

    2009-01-01

    The current study examines three regression models: OLS (ordinary least squares) linear regression, Poisson regression, and negative binomial regression for analyzing count data. Simulation results show that the OLS regression model performed better than the others, since it did not produce more false statistically significant relationships than…
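
    A hedged sketch of the kind of comparison involved: Poisson and negative binomial GLMs fitted to a simulated over-dispersed count outcome and compared by AIC. The simulation design is not the one used in the paper.

```python
# Compare Poisson and negative binomial GLMs on an over-dispersed count outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 1000
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.4 * x)
# Gamma-Poisson mixture draws produce over-dispersion relative to Poisson
y = rng.poisson(rng.gamma(shape=2.0, scale=mu / 2.0))

X = sm.add_constant(x)
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print("Poisson AIC:", round(poisson_fit.aic, 1), "NegBin AIC:", round(negbin_fit.aic, 1))
```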

  6. Self-consistent asset pricing models

    NASA Astrophysics Data System (ADS)

    Malevergne, Y.; Sornette, D.

    2007-08-01

    We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. When the conditions derived from internal consistency are not met, the model is necessarily incomplete, which means that some sources of risk cannot be replicated (or hedged) by a portfolio of stocks traded on the market, even for infinite economies. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value αi at the origin between an asset i's return and the proxy's return. Self-consistency also introduces “orthogonality” and “normality” conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio. Two diagnostics based on these orthogonality and normality conditions are implemented on a basket of 323 assets which have been components of the S&P500 in the period from January 1990 to February 2005. These two diagnostics show interesting departures from dynamical self-consistency starting about 2 years before the end of the Internet bubble. Assuming that the CAPM holds with the self-consistency condition, the OLS method automatically obeys the resulting orthogonality and normality conditions and therefore provides a simple way to self-consistently assess the parameters of the model by using proxy portfolios made only of the assets which are used in the CAPM regressions. Finally, the factor decomposition with the self-consistency condition derives a risk-factor decomposition in the multi-factor case which is identical to the principal component analysis (PCA), thus providing a direct link between model-driven and data-driven constructions of risk factors. This correspondence shows that PCA will therefore suffer from the same limitations as the CAPM and its multi-factor generalization, namely lack of out-of-sample explanatory power and predictability. In the multi-period context, the self-consistency conditions force the betas to be time-dependent with specific constraints.

  7. Breeding value accuracy estimates for growth traits using random regression and multi-trait models in Nelore cattle.

    PubMed

    Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G

    2011-06-28

    We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than the traditionally finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
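
    A sketch of the Legendre-polynomial basis that underlies such a random regression model, under the assumption of a fourth-order fit for direct genetic effects as in the abstract; the actual mixed-model estimation, normally done in specialised breeding-value software, is not shown.

```python
# Build a Legendre-polynomial design matrix over standardised weighing ages.
import numpy as np
from numpy.polynomial.legendre import legvander

ages_days = np.array([1, 240, 365, 550, 730, 1095, 1460, 1825, 2190])  # birth to ~6 yr
a_min, a_max = ages_days.min(), ages_days.max()
std_age = 2.0 * (ages_days - a_min) / (a_max - a_min) - 1.0   # map ages to [-1, 1]

order = 4                                     # direct genetic effects in the abstract
Phi = legvander(std_age, order)               # columns: P0(x) ... P4(x)
print(Phi.shape)                              # (9, 5): one row per weighing age
```

    In practice the columns are often normalised (e.g., by sqrt((2k+1)/2)) before entering the mixed-model equations, so the sketch above covers only the basis construction.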

  8. Evaluating differential effects using regression interactions and regression mixture models

    PubMed Central

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects, by comparing results with those from using an interaction term in linear regression. The research questions that each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects, and that regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903

  9. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…

  10. Characterizing Individual Differences in Functional Connectivity Using Dual-Regression and Seed-Based Approaches

    PubMed Central

    Smith, David V.; Utevsky, Amanda V.; Bland, Amy R.; Clement, Nathan; Clithero, John A.; Harsch, Anne E. W.; Carter, R. McKell; Huettel, Scott A.

    2014-01-01

    A central challenge for neuroscience lies in relating inter-individual variability to the functional properties of specific brain regions. Yet, considerable variability exists in the connectivity patterns between different brain areas, potentially producing reliable group differences. Using sex differences as a motivating example, we examined two separate resting-state datasets comprising a total of 188 human participants. Both datasets were decomposed into resting-state networks (RSNs) using a probabilistic spatial independent components analysis (ICA). We estimated voxelwise functional connectivity with these networks using a dual-regression analysis, which characterizes the participant-level spatiotemporal dynamics of each network while controlling for (via multiple regression) the influence of other networks and sources of variability. We found that males and females exhibit distinct patterns of connectivity with multiple RSNs, including both visual and auditory networks and the right frontal-parietal network. These results replicated across both datasets and were not explained by differences in head motion, data quality, brain volume, cortisol levels, or testosterone levels. Importantly, we also demonstrate that dual-regression functional connectivity is better at detecting inter-individual variability than traditional seed-based functional connectivity approaches. Our findings characterize robust—yet frequently ignored—neural differences between males and females, pointing to the necessity of controlling for sex in neuroscience studies of individual differences. Moreover, our results highlight the importance of employing network-based models to study variability in functional connectivity. PMID:24662574
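
    A minimal numpy sketch of the two regression stages that give dual regression its name, with tiny synthetic dimensions; real analyses operate on group ICA maps and 4D fMRI data with dedicated neuroimaging tools.

```python
# Dual regression sketch: (1) spatial regression of a subject's time series on
# group maps to get network time courses, (2) temporal regression of the time
# series on those time courses to get subject-specific spatial maps.
import numpy as np

rng = np.random.default_rng(7)
n_vox, n_time, n_nets = 500, 120, 10
group_maps = rng.normal(size=(n_nets, n_vox))        # group ICA spatial maps
subject_data = rng.normal(size=(n_time, n_vox))      # one subject's data (time x voxels)

# Stage 1: spatial regression -> network time courses (time x networks)
timecourses, *_ = np.linalg.lstsq(group_maps.T, subject_data.T, rcond=None)
timecourses = timecourses.T

# Stage 2: temporal regression -> subject-specific spatial maps (networks x voxels)
subject_maps, *_ = np.linalg.lstsq(timecourses, subject_data, rcond=None)
print(subject_maps.shape)                             # (10, 500)
```

    Group comparisons (e.g., males versus females) would then be run voxelwise on the stage-2 maps across subjects; that step is omitted here.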

  11. Dynamic networks of PTSD symptoms during conflict.

    PubMed

    Greene, Talya; Gelkopf, Marc; Epskamp, Sacha; Fried, Eiko

    2018-02-28

    Conceptualizing posttraumatic stress disorder (PTSD) symptoms as a dynamic system of causal elements could provide valuable insights into the way that PTSD develops and is maintained in traumatized individuals. We present the first study to apply a multilevel network model to produce an exploratory empirical conceptualization of dynamic networks of PTSD symptoms, using data collected during a period of conflict. Intensive longitudinal assessment data were collected during the Israel-Gaza War in July-August 2014. The final sample (n = 96) comprised a general population sample of Israeli adult civilians exposed to rocket fire. Participants completed twice-daily reports of PTSD symptoms via smartphone for 30 days. We used a multilevel vector auto-regression model to produce contemporaneous and temporal networks, and a partial correlation network model to obtain a between-subjects network. Multilevel network analysis found strong positive contemporaneous associations between hypervigilance and startle response, avoidance of thoughts and avoidance of reminders, and between flashbacks and emotional reactivity. The temporal network indicated the central role of startle response as a predictor of future PTSD symptomatology, together with restricted affect, blame, negative emotions, and avoidance of thoughts. There were some notable differences between the temporal and contemporaneous networks, including the presence of a number of negative associations, particularly from blame. The between-person network indicated flashbacks and emotional reactivity to be the most central symptoms. This study suggests various symptoms that could potentially be driving the development of PTSD. We discuss clinical implications such as identifying particular symptoms as targets for interventions.
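
    A single-subject sketch of the temporal-network idea: a lag-1 vector autoregression in which each symptom at time t is regressed on all symptoms at t-1, with the matrix of lagged coefficients read as a directed network. The multilevel (random-effects) structure used in the paper is omitted, and the symptom labels, coefficients, and data are invented.

```python
# Lag-1 VAR via least squares; the coefficient matrix is the temporal network.
import numpy as np

rng = np.random.default_rng(8)
symptoms = ["startle", "hypervigilance", "avoid_thoughts", "flashbacks"]
k, T = len(symptoms), 60                      # 60 twice-daily assessments
A_true = 0.25 * np.eye(k)
A_true[1, 0] = 0.4                            # startle(t-1) -> hypervigilance(t)
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_true.T + rng.normal(0, 1, k)

X = np.column_stack([np.ones(T - 1), Y[:-1]])        # intercept + lagged symptoms
B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
temporal_network = B[1:].T                            # [i, j]: symptom j(t-1) -> symptom i(t)
print(np.round(temporal_network, 2))
```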

  12. Weather Variability Associated with Aedes (Stegomyia) aegypti (Dengue Vector) Oviposition Dynamics in Northwestern Argentina

    PubMed Central

    Estallo, Elizabet L.; Ludueña-Almeida, Francisco F.; Introini, María V.; Zaidenberg, Mario; Almirón, Walter R.

    2015-01-01

    This study aims to develop a forecasting model by assessing the weather variability associated with the seasonal fluctuation of Aedes aegypti oviposition dynamics at the city level in Orán, northwestern Argentina. Oviposition dynamics were assessed by weekly monitoring of 90 ovitraps in the urban area during 2005-2007. Correlations were computed between the number of eggs collected weekly and weather variables (rainfall, photoperiod, vapor pressure of water, temperature, and relative humidity) with and without time lags (1 to 6 weeks). A stepwise multiple linear regression analysis was performed using the meteorological variables from the first year of study at the time lags that best correlated with oviposition. Model validation was conducted using the data from the second year of study (October 2006-2007). Minimum temperature and rainfall were the most important variables. No eggs were found at temperatures below 10°C. The most significant time lags were 3 weeks for minimum temperature and rainfall, 3 weeks for water vapor pressure, and 6 weeks for maximum temperature. Aedes aegypti could be expected in Orán three weeks after rains with adequate minimum temperatures. The best-fit forecasting model for the combined meteorological variables explained 70% of the variance (adj. R2). The correlation between Ae. aegypti oviposition observed and estimated by the forecasting model resulted in rs = 0.80 (P < 0.05). The forecasting model developed would allow prediction of increases and decreases in Ae. aegypti oviposition activity based on meteorological data for Orán city and, according to the meteorological variables, vector activity can be predicted three or four weeks in advance. PMID:25993415
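
    A sketch of a lagged-predictor regression in the spirit of the forecasting model: weekly egg counts regressed on minimum temperature and rainfall lagged by three weeks, the lags reported as most significant in the abstract. The data frame is synthetic and the coefficients are not those of the Orán model.

```python
# Lagged weather regression for weekly oviposition counts (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(9)
weeks = 104
df = pd.DataFrame({
    "tmin": 15 + 8 * np.sin(2 * np.pi * np.arange(weeks) / 52) + rng.normal(0, 1, weeks),
    "rain": rng.gamma(2.0, 10.0, weeks),
})
df["eggs"] = np.maximum(
    0, 5 * (df["tmin"].shift(3) - 10) + 0.8 * df["rain"].shift(3) + rng.normal(0, 10, weeks)
)

df["tmin_lag3"] = df["tmin"].shift(3)
df["rain_lag3"] = df["rain"].shift(3)
model = sm.OLS(df["eggs"], sm.add_constant(df[["tmin_lag3", "rain_lag3"]]),
               missing="drop").fit()
print("adjusted R2:", round(model.rsquared_adj, 2))
```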

  13. Estimating future temperature maxima in lakes across the United States using a surrogate modeling approach

    PubMed Central

    Zi, Tan; Schmidt, Michelle; Johnson, Thomas E.; Nover, Daniel M.; Clark, Christopher M.

    2017-01-01

    A warming climate increases thermal inputs to lakes with potential implications for water quality and aquatic ecosystems. In a previous study, we used a dynamic water column temperature and mixing simulation model to simulate chronic (7-day average) maximum temperatures under a range of potential future climate projections at selected sites representative of different U.S. regions. Here, to extend results to lakes where dynamic models have not been developed, we apply a novel machine learning approach that uses Gaussian Process regression to describe the model response surface as a function of simplified lake characteristics (depth, surface area, water clarity) and climate forcing (winter and summer air temperatures and potential evapotranspiration). We use this approach to extrapolate predictions from the simulation model to the statistical sample of U.S. lakes in the National Lakes Assessment (NLA) database. Results provide a national-scale scoping assessment of the potential thermal risk to lake water quality and ecosystems across the U.S. We suggest a small fraction of lakes will experience less risk of summer thermal stress events due to changes in stratification and mixing dynamics, but most will experience increases. The percentage of lakes in the NLA with simulated 7-day average maximum water temperatures in excess of 30°C is projected to increase from less than 2% to approximately 22% by the end of the 21st century, which could significantly reduce the number of lakes that can support cold water fisheries. Site-specific analysis of the full range of factors that influence thermal profiles in individual lakes is needed to develop appropriate adaptation strategies. PMID:29121058
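
    A hedged sketch of the surrogate-modelling step: a Gaussian Process regression maps simple lake descriptors and climate forcing to a simulated 7-day maximum water temperature, so the expensive dynamic model need not be run for every lake. The inputs, kernel, and toy response function below are assumptions for illustration only.

```python
# Gaussian Process surrogate of a (stand-in) lake temperature simulation model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(10)
n = 200
X = np.column_stack([
    rng.uniform(2, 40, n),       # mean depth (m)
    rng.uniform(0.5, 8, n),      # water clarity / Secchi depth (m)
    rng.uniform(20, 32, n),      # summer air temperature (deg C)
])
# Toy stand-in for the dynamic simulation model's 7-day max water temperature
y = 0.85 * X[:, 2] - 0.08 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.3, n)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[10, 2, 3]) + WhiteKernel(),
                              normalize_y=True).fit(X, y)
new_lake = np.array([[12.0, 3.0, 29.0]])
mean, std = gp.predict(new_lake, return_std=True)
print(f"predicted 7-day max: {mean[0]:.1f} +/- {std[0]:.1f} deg C")
```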

  14. Transient combustion in hybrid rockets

    NASA Astrophysics Data System (ADS)

    Karabeyoglu, Mustafa Arif

    1998-09-01

    Hybrid rockets have recently regained interest as an alternative chemical propulsion system due to their advantages over the solid and liquid systems that are currently in use. Development efforts on hybrids revealed two important problem areas: (1) low frequency instabilities and (2) slow transient response. Both of these are closely related to transient behavior, which is a poorly understood aspect of hybrid operation. This thesis is mainly involved with a theoretical study of transient combustion in hybrid rockets. We follow the methodology of identifying and modeling the subsystems of the motor, such as the thermal lags in the solid, boundary layer combustion, and chamber gasdynamics, from a dynamic point of view. We begin with the thermal lag in the solid, which yields the regression rate for any given wall heat flux variation. Interesting phenomena such as overshooting during throttling and the amplification and phase lead regions in the frequency domain are discovered. Later we develop a quasi-steady transient hybrid combustion model supported with time delays for the boundary layer processes. This is integrated with the thermal lag system to obtain the thermal combustion (TC) coupled response. The TC coupled system with positive delays generated low frequency instabilities. The scaling of the instabilities is in good agreement with actual motor test data. Finally, we formulate a gasdynamic model for the hybrid chamber which successfully resolves the filling/emptying and longitudinal acoustic behavior of the motor. The TC coupled system is later integrated into the gasdynamic model to obtain the overall response (TCG coupled system) of gaseous oxidizer motors with stiff feed systems. Low frequency instabilities were also encountered for the TCG coupled system. Apart from the transient investigations, the regression rate behavior of liquefying hybrid propellants, such as solid cryogenic materials, is also studied. The theory is based on the possibility of enhancement of the regression rate by entrainment mass transfer from a liquid layer formed on the fuel surface. The predicted regression rates are in good agreement with the cryogenic experimental findings obtained recently at Edwards Air Force Base with a frozen pentane and gaseous oxygen system.

  15. Combined chamber-tower approach: Using eddy covariance measurements to cross-validate carbon fluxes modeled from manual chamber campaigns

    NASA Astrophysics Data System (ADS)

    Brümmer, C.; Moffat, A. M.; Huth, V.; Augustin, J.; Herbst, M.; Kutsch, W. L.

    2016-12-01

    Manual carbon dioxide flux measurements with closed chambers at scheduled campaigns are a versatile method to study management effects at small scales in multiple-plot experiments. The eddy covariance technique has the advantage of quasi-continuous measurements but requires large homogeneous areas of a few hectares. To evaluate the uncertainties associated with interpolating from individual campaigns to the whole vegetation period, we installed both techniques at an agricultural site in Northern Germany. The presented comparison covers two cropping seasons, winter oilseed rape in 2012/13 and winter wheat in 2013/14. Modeling half-hourly carbon fluxes from campaigns is commonly performed based on non-linear regressions for the light response and respiration. The daily averages of net CO2 modeled from chamber data deviated from eddy covariance measurements in the range of ±5 g C m-2 day-1. To understand the observed differences and to disentangle the effects, we performed four additional setups (expert versus default settings of the non-linear-regression-based algorithm, purely empirical modeling with artificial neural networks versus non-linear regressions, cross-validating using eddy covariance measurements as campaign fluxes, weekly versus monthly scheduling of campaigns) to model the half-hourly carbon fluxes for the whole vegetation period. The good agreement of the seasonal course of net CO2 at plot and field scale for our agricultural site demonstrates that both techniques are robust and yield consistent results at the seasonal time scale, even for a managed ecosystem with high temporal dynamics in the fluxes. This allows combining the respective advantages of factorial experiments at plot scale with dense time series data at field scale. Furthermore, the information from the quasi-continuous eddy covariance measurements can be used to derive vegetation proxies to support the interpolation of carbon fluxes in between the manual chamber campaigns.
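
    A sketch of the non-linear regression step used to interpolate chamber fluxes: a rectangular-hyperbola light-response curve with a constant respiration term is fitted to campaign NEE measurements and then evaluated at half-hourly PAR values. Temperature-dependent respiration, as used in full campaign algorithms, is omitted here and the data are synthetic.

```python
# Fit a rectangular-hyperbola light-response model to campaign NEE data.
import numpy as np
from scipy.optimize import curve_fit

def nee_model(par, alpha, gpmax, reco):
    """NEE = -GPP(PAR) + Reco, with a rectangular-hyperbola GPP term."""
    return -(alpha * par * gpmax) / (alpha * par + gpmax) + reco

rng = np.random.default_rng(11)
par = rng.uniform(0, 1800, 40)                       # campaign PAR, umol m-2 s-1
nee = nee_model(par, 0.05, 25.0, 4.0) + rng.normal(0, 1.0, par.size)

popt, _ = curve_fit(nee_model, par, nee, p0=[0.03, 20.0, 3.0])
par_halfhourly = np.linspace(0, 1800, 48)
nee_interp = nee_model(par_halfhourly, *popt)        # gap-filled half-hourly NEE
print("alpha=%.3f, GPmax=%.1f, Reco=%.1f" % tuple(popt))
```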

  16. Physiological complexity of acute traumatic brain injury in patients treated with a brain oxygen protocol: utility of symbolic regression in predictive modeling of a dynamical system.

    PubMed

    Narotam, Pradeep K; Morrison, John F; Schmidt, Michael D; Nathoo, Narendra

    2014-04-01

    Predictive modeling of emergent behavior, inherent to complex physiological systems, requires the analysis of large complex clinical data streams currently being generated in the intensive care unit. Brain tissue oxygen protocols have yielded outcome benefits in traumatic brain injury (TBI), but the critical physiological thresholds for low brain oxygen have not been established for a dynamical patho-physiological system. High frequency, multi-modal clinical data sets from 29 patients with severe TBI who underwent multi-modality neuro-clinical care monitoring and treatment with a brain oxygen protocol were analyzed. The inter-relationship between acute physiological parameters was determined using symbolic regression (SR) as the computational framework. The mean patient age was 44.4±15 years, with a mean admission GCS of 6.6±3.9. Sixty-three percent had been injured in motor vehicle accidents, and the most common pathology was intra-cerebral hemorrhage (50%). Hospital discharge mortality was 21%, poor outcome occurred in 24% of patients, and good outcome occurred in 56% of patients. The criticality threshold for low brain oxygen was an intracranial pressure (ICP) ≥22.8 mm Hg, and for mortality, ICP ≥37.1 mm Hg. The upper therapeutic threshold for cerebral perfusion pressure (CPP) was 75 mm Hg. Eubaric hyperoxia significantly impacted partial pressure of oxygen in brain tissue (PbtO2) at all ICP levels. Optimal brain temperature (Tbr) was 34-35°C, with an adverse effect when Tbr≥38°C. Survivors clustered at [Formula: see text] Hg vs. non-survivors [Formula: see text] 18 mm Hg. There were two mortality clusters for ICP: high ICP/low PbtO2 and low ICP/low PbtO2. Survivors maintained PbtO2 at all ranges of mean arterial pressure, in contrast to non-survivors. The final SR equation for cerebral oxygenation is: [Formula: see text]. The SR model of acute TBI advances new physiological thresholds or boundary conditions for acute TBI management: PbtO2≥25 mm Hg; ICP≤22 mm Hg; CPP≈60-75 mm Hg; and Tbr≈34-37°C. SR is congruous with the emerging field of complexity science in the modeling of dynamical physiological systems, especially during pathophysiological states. The SR model of TBI is generalizable to known physical laws. This increase in entropy reduces uncertainty and improves predictive capacity. SR is an appropriate computational framework to enable future smart monitoring devices.
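
    A hedged illustration of symbolic regression as a computational framework, using the third-party gplearn package on synthetic stand-ins for ICP, CPP, and brain tissue oxygen; this is only one possible tool and does not reproduce the study's own SR implementation or its clinical equation.

```python
# Evolve an analytic expression for PbtO2 from ICP and CPP (synthetic data).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(12)
icp = rng.uniform(5, 40, 300)                    # mm Hg (synthetic)
cpp = rng.uniform(50, 90, 300)                   # mm Hg (synthetic)
pbto2 = 40 - 0.6 * icp + 0.1 * cpp + rng.normal(0, 1, 300)

X = np.column_stack([icp, cpp])
sr = SymbolicRegressor(population_size=1000, generations=10,
                       function_set=("add", "sub", "mul", "div"),
                       random_state=0, verbose=0)
sr.fit(X, pbto2)
print(sr._program)                               # best evolved expression
```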

  17. Advances in Parameter and Uncertainty Quantification Using Bayesian Hierarchical Techniques with a Spatially Referenced Watershed Model (Invited)

    NASA Astrophysics Data System (ADS)

    Alexander, R. B.; Boyer, E. W.; Schwarz, G. E.; Smith, R. A.

    2013-12-01

    Estimating water and material stores and fluxes in watershed studies is frequently complicated by uncertainties in quantifying hydrological and biogeochemical effects of factors such as land use, soils, and climate. Although these process-related effects are commonly measured and modeled in separate catchments, researchers are especially challenged by their complexity across catchments and diverse environmental settings, leading to a poor understanding of how model parameters and prediction uncertainties vary spatially. To address these concerns, we illustrate the use of Bayesian hierarchical modeling techniques with a dynamic version of the spatially referenced watershed model SPARROW (SPAtially Referenced Regression On Watershed attributes). The dynamic SPARROW model is designed to predict streamflow and other water cycle components (e.g., evapotranspiration, soil and groundwater storage) for monthly varying hydrological regimes, using mechanistic functions, mass conservation constraints, and statistically estimated parameters. In this application, the model domain includes nearly 30,000 NHD (National Hydrologic Data) stream reaches and their associated catchments in the Susquehanna River Basin. We report the results of our comparisons of alternative models of varying complexity, including models with different explanatory variables as well as hierarchical models that account for spatial and temporal variability in model parameters and variance (error) components. The model errors are evaluated for changes with season and catchment size and correlations in time and space. The hierarchical models consist of a two-tiered structure in which climate forcing parameters are modeled as random variables, conditioned on watershed properties. Quantification of spatial and temporal variations in the hydrological parameters and model uncertainties in this approach leads to more efficient (lower variance) and less biased model predictions throughout the river network. Moreover, predictions of water-balance components are reported according to probabilistic metrics (e.g., percentiles, prediction intervals) that include both parameter and model uncertainties. These improvements in predictions of streamflow dynamics can inform the development of more accurate predictions of spatial and temporal variations in biogeochemical stores and fluxes (e.g., nutrients and carbon) in watersheds.

  18. Wheat yield dynamics: a structural econometric analysis.

    PubMed

    Sahin, Afsin; Akdi, Yilmaz; Arslan, Fahrettin

    2007-10-15

    In this study we first explore the wheat situation in Turkey, a small open economy, and in the member countries of the European Union (EU). We observe that increasing wheat yield is fundamental to obtaining a comparative advantage among countries by depressing domestic prices. The changing structure of support schemes in Turkey also makes it necessary to increase its wheat yield. For this purpose, we use the available data to determine the dynamics of wheat yield with Ordinary Least Squares regression methods. In order to find out whether there is a linear relationship among these series, we check whether each series is integrated of the same order. We find that fertilizer usage and precipitation level are substantial inputs for producing high wheat yield. Furthermore, according to our model, fertilizer usage affects wheat yield more than precipitation level.

  19. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying.

    PubMed

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2018-02-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature T_s and chamber pressure P_c. Mechanistic modelling of the primary drying step leads to the optimal dynamic combination of these adaptable process variables as a function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and a variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front T_i and the sublimation rate ṁ_sub. T_s was identified as the most influential parameter on both T_i and ṁ_sub, followed by P_c and the dried-product mass transfer resistance α_Rp for T_i and ṁ_sub, respectively. The GSA findings were experimentally validated for ṁ_sub via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for evaluating the impact of different process variables on the model outcome, leading to essential process knowledge without the need for time-consuming experiments (e.g., DoE). Copyright © 2017 Elsevier B.V. All rights reserved.
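
    A minimal sketch of a regression-based GSA under assumed parameter ranges: the input space is sampled, a stand-in response is computed, and inputs are ranked by standardised regression coefficients. The toy response below is not the validated mechanistic primary drying model, and variance-based (Sobol) indices would normally be computed with dedicated tooling.

```python
# Regression-based GSA: rank inputs by standardised regression coefficients.
import numpy as np

rng = np.random.default_rng(13)
n = 2000
Ts = rng.uniform(-30, 10, n)          # shelf temperature, deg C
Pc = rng.uniform(5, 30, n)            # chamber pressure, Pa
Rp = rng.uniform(1e4, 1e5, n)         # dried-layer resistance (arbitrary units)

# Stand-in for the mechanistic model output (sublimation rate)
m_sub = 0.8 * Ts - 0.3 * Pc - 2e-5 * Rp + rng.normal(0, 0.5, n)

X = np.column_stack([Ts, Pc, Rp])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)            # standardise inputs
ys = (m_sub - m_sub.mean()) / m_sub.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)        # standardised regression coefficients
for name, c in zip(["T_s", "P_c", "R_p"], src):
    print(f"SRC({name}) = {c:+.2f}")
```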

  20. Modeling absolute differences in life expectancy with a censored skew-normal regression approach

    PubMed Central

    Clough-Gorr, Kerri; Zwahlen, Marcel

    2015-01-01

    Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate in modeling life expectancy, because in many situations time to death has a negative skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use the skew-normal regression so that censored and left-truncated observations are accounted for. With this we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544
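
    A sketch of the core likelihood idea under simplifying assumptions: a skew-normal regression in which right-censored observations contribute the log-survival function instead of the log-density. Left truncation, richer covariates, and the cohort data themselves are not handled here; the group indicator, parameter values, and data are invented.

```python
# Maximum likelihood for a censored skew-normal regression on synthetic data.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(14)
n = 1500
group = rng.integers(0, 2, n)                        # e.g., exposure group
age_at_death = stats.skewnorm.rvs(-4, loc=84 - 2 * group, scale=9, size=n)
censor_age = rng.uniform(70, 100, n)
time = np.minimum(age_at_death, censor_age)
event = (age_at_death <= censor_age).astype(float)   # 1 = death observed

def negloglik(params):
    b0, b1, log_scale, shape = params
    loc = b0 + b1 * group
    scale = np.exp(log_scale)
    ll_event = stats.skewnorm.logpdf(time, shape, loc, scale)   # observed deaths
    ll_cens = stats.skewnorm.logsf(time, shape, loc, scale)     # right-censored
    return -np.sum(event * ll_event + (1 - event) * ll_cens)

res = optimize.minimize(negloglik, x0=[80.0, 0.0, np.log(10.0), -1.0],
                        method="Nelder-Mead")
b0, b1, log_scale, shape = res.x
print("estimated group difference in location (years):", round(b1, 2))
```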
