Sample records for linear additive model

  1. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    ERIC Educational Resources Information Center

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
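
    A sketch of the idea (parameter values invented for illustration, not taken from the paper): in this family the logit of a correct item response is log-linear in binary latent attributes, and constraining particular terms to zero recovers specific cognitive diagnosis models as special cases.

```python
import numpy as np

def lcdm_prob(alpha, lam0, lam1, lam2, lam12):
    """Correct-response probability for a two-attribute item under a
    log-linear (LCDM-style) parameterization: the logit is additive in
    attribute main effects plus an interaction term."""
    z = lam0 + lam1 * alpha[0] + lam2 * alpha[1] + lam12 * alpha[0] * alpha[1]
    return 1.0 / (1.0 + np.exp(-z))

# Setting the interaction to zero gives an additive (compensatory) model;
# suppressing the main effects and loading on the interaction approximates
# a conjunctive (DINA-like) model -- this nesting defines the model family.
for alpha in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    p_add = lcdm_prob(alpha, -2.0, 2.0, 2.0, 0.0)    # additive special case
    p_dina = lcdm_prob(alpha, -2.0, 0.0, 0.0, 4.0)   # conjunctive special case
    print(alpha, round(p_add, 3), round(p_dina, 3))
```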

  2. Effects of Stochastic Traffic Flow Model on Expected System Performance

    DTIC Science & Technology

    2012-12-01

    NSWC-PCD has made considerable improvements to their pedestrian flow modeling. In addition to the linear paths, the 2011 version now includes... using stochastic paths. 2.2 Linear Paths vs. Stochastic Paths 2.2.1 Linear Paths and Direct Maximum Pd Calculation. Modeling pedestrian traffic flow... as a stochastic process begins with the linear path model. Let the detection area be R x C voxels. This creates C² total linear paths…

  3. Genetic parameters for direct and maternal calving ease in Walloon dairy cattle based on linear and threshold models.

    PubMed

    Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N

    2014-12-01

    Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, herd × year of calving, maternal permanent environment and animal direct and maternal additive genetic as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models. Maternal heritabilities were approximately 2 and 4%, respectively. Genetic correlation between direct and maternal additive effects was found to be not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison such as mean squared error, correlation between observed and predicted calving ease scores as well as between estimated breeding values were estimated from 85,118 calving records. The results revealed few differences between linear and threshold models, even though correlations between estimated breeding values from subsets of data for sires with progeny were 17 and 23% greater for direct and maternal genetic effects, respectively, under the linear model than under the threshold model. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice. © 2014 Blackwell Verlag GmbH.

  4. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
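
    The record's central claim is easy to reproduce numerically: under a simple linear error model y = a + b·x + e with Gaussian noise, bias, MSE and correlation are functions of (a, b, σ) and the moments of the reference. A minimal sketch with invented parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(10.0, 2.0, n)           # reference ("truth")
a, b, s = 1.5, 0.8, 0.5                # error-model parameters
y = a + b * x + rng.normal(0.0, s, n)  # measurement with linear, additive, Gaussian error

# Conventional metrics computed directly from the data
bias = np.mean(y - x)
mse = np.mean((y - x) ** 2)
corr = np.corrcoef(x, y)[0, 1]

# The same metrics derived from the error-model parameters (a, b, s)
# and the first two moments of the reference alone.
mx, vx = np.mean(x), np.var(x)
bias_m = a + (b - 1) * mx
mse_m = bias_m ** 2 + (b - 1) ** 2 * vx + s ** 2
corr_m = b * np.sqrt(vx) / np.sqrt(b ** 2 * vx + s ** 2)

print(bias, bias_m)   # both close to -0.5
print(mse, mse_m)
print(corr, corr_m)
```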

  5. Time Delay Estimation

    DTIC Science & Technology

    1976-04-09

    (Extraction of the excerpt is garbled; the recoverable fragments are table-of-contents and figure entries: Figure 3-2, Symmetric Impulse Response for Two FIR Linear Phase Filters; 2-1, Inputs x, y and Outputs; 2-2, Linear System with Impulse Response h(τ); 2-3, Model of Error Resulting from Linearly Filtering x(t); a model of a signal corrupted with additive noise; 2-6, Model of Directional Signal Corrupted with Additive Noise and Processed; 2-7, Source Driving Two…)

  6. Latent log-linear models for handwritten digit classification.

    PubMed

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.

  7. An improved null model for assessing the net effects of multiple stressors on communities.

    PubMed

    Thompson, Patrick L; MacLennan, Megan M; Vinebrooke, Rolf D

    2018-01-01

    Ecological stressors (i.e., environmental factors outside their normal range of variation) can mediate each other through their interactions, leading to unexpected combined effects on communities. Determining whether the net effect of stressors is ecologically surprising requires comparing their cumulative impact to a null model that represents the linear combination of their individual effects (i.e., an additive expectation). However, we show that standard additive and multiplicative null models that base their predictions on the effects of single stressors on community properties (e.g., species richness or biomass) do not provide this linear expectation, leading to incorrect interpretations of antagonistic and synergistic responses by communities. We present an alternative, the compositional null model, which instead bases its predictions on the effects of stressors on individual species, and then aggregates them to the community level. Simulations demonstrate the improved ability of the compositional null model to accurately provide a linear expectation of the net effect of stressors. We simulate the response of communities to paired stressors that affect species in a purely additive fashion and compare the relative abilities of the compositional null model and two standard community property null models (additive and multiplicative) to predict these linear changes in species richness and community biomass across different combinations (both positive, negative, or opposite) and intensities of stressors. The compositional model predicts the linear effects of multiple stressors under almost all scenarios, allowing for proper classification of net effects, whereas the standard null models do not. 
Our findings suggest that current estimates of the prevalence of ecological surprises on communities based on community property null models are unreliable, and should be improved by integrating the responses of individual species to the community level as does our compositional null model. © 2017 John Wiley & Sons Ltd.
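
    A toy numerical illustration of the argument (three hypothetical species with invented effect sizes): the species-level effects are purely additive, but abundances are floored at zero, so the community-property additive null mispredicts the joint outcome while the compositional null recovers it.

```python
import numpy as np

n0 = np.array([10., 10., 10.])        # control abundances
d_a = np.array([-6.,  2.,  0.])       # species-level effect of stressor A
d_b = np.array([-6.,  0.,  2.])       # species-level effect of stressor B

obs = lambda v: np.maximum(v, 0.0)    # abundances cannot fall below zero
n_a, n_b = obs(n0 + d_a), obs(n0 + d_b)
n_ab = obs(n0 + d_a + d_b)            # both stressors, purely additive at species level

B = np.sum  # community property: total abundance/biomass

# Standard additive null on the community property
pred_community = B(n_a) + B(n_b) - B(n0)            # 26 + 26 - 30 = 22

# Compositional null: add effects species by species, then aggregate
pred_comp = B(obs(n0 + (n_a - n0) + (n_b - n0)))

print(B(n_ab), pred_community, pred_comp)  # 24.0 22.0 24.0
```

The community-property null predicts 22 (it implicitly allows a species to go below zero), whereas the compositional null matches the true combined outcome of 24.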

  8. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets don't take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). According to the Daubert standard and the need for improvements in forensic science, new statistical tools like smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.

  9. Population response to climate change: linear vs. non-linear modeling approaches.

    PubMed

    Ellis, Alicia M; Post, Eric

    2004-03-31

    Research on the ecological consequences of global climate change has elicited a growing interest in the use of time series analysis to investigate population dynamics in a changing climate. Here, we compare linear and non-linear models describing the contribution of climate to the density fluctuations of the population of wolves on Isle Royale, Michigan from 1959 to 1999. The non-linear self-excitatory threshold autoregressive (SETAR) model revealed that, due to differences in the strength and nature of density dependence, relatively small and large populations may be differentially affected by future changes in climate. Both linear and non-linear models predict a decrease in the population of wolves with predicted changes in climate. Because specific predictions differed between linear and non-linear models, our study highlights the importance of using non-linear methods that allow the detection of non-linearity in the strength and nature of density dependence. Failure to adopt a non-linear approach to modelling population response to climate change, either exclusively or in addition to linear approaches, may compromise efforts to quantify ecological consequences of future warming.
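
    A minimal sketch of a SETAR-type fit (simulated data and invented coefficients, not the Isle Royale series; real analyses would use dedicated time-series tooling): the threshold is chosen by grid search over candidate values, with ordinary least squares fitted within each regime.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a two-regime SETAR(1) series: the intercept jumps by 2
# when the previous value crosses the (true) threshold c = 0.
n, c_true = 2000, 0.0
x = np.zeros(n)
for t in range(1, n):
    if x[t - 1] <= c_true:
        x[t] = 1.0 + 0.4 * x[t - 1] + rng.normal(0, 0.5)
    else:
        x[t] = -1.0 + 0.4 * x[t - 1] + rng.normal(0, 0.5)

y, lag = x[1:], x[:-1]

def ols_sse(z, w):
    """Sum of squared residuals of an OLS fit of w on [1, z]."""
    X = np.column_stack([np.ones_like(z), z])
    beta, *_ = np.linalg.lstsq(X, w, rcond=None)
    r = w - X @ beta
    return float(r @ r)

# Conditional least squares: grid-search the threshold over interior
# quantiles of the lagged series, fitting OLS within each regime.
cands = np.quantile(lag, np.linspace(0.15, 0.85, 71))
sse = [ols_sse(lag[lag <= c], y[lag <= c]) + ols_sse(lag[lag > c], y[lag > c])
      for c in cands]
c_hat = cands[int(np.argmin(sse))]

print(round(float(c_hat), 3))  # recovered threshold, near the true value 0
```

Because the two-regime fit partitions the data, its pooled SSE is never worse than a single linear AR(1) fit; the interesting question is how much better, which is what comparisons like those in this record assess.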

  10. A green vehicle routing problem with customer satisfaction criteria

    NASA Astrophysics Data System (ADS)

    Afshar-Bakeshloo, M.; Mehrabi, A.; Safari, H.; Maleki, M.; Jolai, F.

    2016-12-01

    This paper develops an MILP model, named the Satisfactory-Green Vehicle Routing Problem (S-GVRP). It consists of routing a heterogeneous fleet of vehicles in order to serve a set of customers within predefined time windows. In this model, in addition to the traditional objective of the VRP, both pollution and customers' satisfaction are taken into account. Meanwhile, the introduced model provides an effective dashboard for decision-makers that determines appropriate routes, the best mixed fleet, and the speed and idle time of vehicles. Additionally, some new factors evaluate the greenness of each decision based on three criteria. This model applies piecewise linear functions (PLFs) to linearize a nonlinear fuzzy interval for incorporating customers' satisfaction into other linear objectives. We present a mixed integer linear programming formulation for the S-GVRP. This model enriches managerial insights by providing trade-offs between customers' satisfaction, total costs and emission levels. Finally, we provide a numerical study showing the applicability of the model.
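
    The PLF idea can be sketched independently of the full MILP: approximate a nonlinear satisfaction curve by linear interpolation between breakpoints, which is the form MILP solvers can accept (via SOS2 or lambda formulations). The satisfaction function below is hypothetical, not the paper's:

```python
import numpy as np

# Hypothetical non-linear "satisfaction" of a delivery as a function of
# arrival time t within a soft time window [0, 1]: concave, 1 at the ideal time.
f = lambda t: 1.0 - (2.0 * t - 1.0) ** 2          # peaks at t = 0.5

# A MILP cannot contain f directly; a piecewise linear function (PLF)
# over k breakpoints can. Here we only evaluate the PLF to check the
# approximation quality against the true curve.
def plf(t, k):
    bp = np.linspace(0.0, 1.0, k)                 # breakpoints
    return np.interp(t, bp, f(bp))

t = np.linspace(0.0, 1.0, 1001)
for k in (3, 5, 9):
    print(k, round(float(np.max(np.abs(f(t) - plf(t, k)))), 4))
    # max error: 0.25, 0.0625, 0.0156 -- quartered each time the mesh halves
```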

  11. A penalized framework for distributed lag non-linear models.

    PubMed

    Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G

    2017-09-01

    Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
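
    A stripped-down version of the penalty idea (simulated data; the actual framework uses penalized splines over a cross-basis in both the predictor and lag dimensions): estimate a distributed-lag curve by ridge-type least squares with a penalty on the second differences of the lag coefficients, which encodes the assumption of a smooth lag structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# True (smooth) distributed-lag curve over lags 0..20: a Gaussian bump.
L = 20
lags = np.arange(L + 1)
w_true = np.exp(-((lags - 8.0) / 3.0) ** 2)

# Simulate an exposure series and a lagged outcome.
n = 300
x = rng.normal(size=n + L)
X = np.column_stack([x[L - l : n + L - l] for l in lags])  # X[t, l] = exposure at lag l
y = X @ w_true + rng.normal(0.0, 2.0, n)

# Unpenalized DLM (OLS) vs. a penalized DLM: ridge-type penalty on the
# second differences of the lag coefficients, favouring smooth lag curves.
D = np.diff(np.eye(L + 1), n=2, axis=0)          # second-difference operator
w_ols = np.linalg.solve(X.T @ X, X.T @ y)
lam = 100.0
w_pen = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)

err = lambda w: float(np.sum((w - w_true) ** 2))
print(round(err(w_ols), 3), round(err(w_pen), 3))  # penalized fit tracks the true curve more closely
```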

  12. Linearized aerodynamic and control law models of the X-29A airplane and comparison with flight data

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.

    1992-01-01

    Flight control system design and analysis for aircraft rely on mathematical models of the vehicle dynamics. In addition to a six degree of freedom nonlinear simulation, the X-29A flight controls group developed a set of programs that calculate linear perturbation models throughout the X-29A flight envelope. The models include the aerodynamics as well as flight control system dynamics and were used for stability, controllability, and handling qualities analysis. These linear models were compared to flight test results to help provide a safe flight envelope expansion. A description is given of the linear models at three flight conditions and two flight control system modes. The models are presented with a level of detail that would allow the reader to reproduce the linear results if desired. Comparisons between the responses of the linear models and flight-measured responses are presented to demonstrate the strengths and weaknesses of the linear models' ability to predict flight dynamics.
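
    The perturbation-model extraction can be sketched in miniature: numerically differentiate a nonlinear state-derivative function about a trim point to obtain the A matrix of the linear model. The dynamics below are a toy stand-in (a damped pendulum), not the X-29A model.

```python
import numpy as np

# Toy nonlinear dynamics standing in for a full simulation: a damped pendulum.
def f(x):
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1]])

def linearize(f, x0, eps=1e-6):
    """Central-difference Jacobian of f about the trim point x0, giving
    the A matrix of the linear perturbation model d(dx)/dt = A dx."""
    n = len(x0)
    A = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        A[:, j] = (f(x0 + e) - f(x0 - e)) / (2.0 * eps)
    return A

A = linearize(f, np.zeros(2))
print(np.round(A, 6))                         # [[0, 1], [-1, -0.1]] at this trim point
print(bool(np.all(np.linalg.eigvals(A).real < 0)))  # True: the trim point is stable
```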

  13. Fast and local non-linear evolution of steep wave-groups on deep water: A comparison of approximate models to fully non-linear simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adcock, T. A. A.; Taylor, P. H.

    2016-01-15

    The non-linear Schrödinger equation and its higher order extensions are routinely used for analysis of extreme ocean waves. This paper compares the evolution of individual wave-packets modelled using non-linear Schrödinger type equations with packets modelled using fully non-linear potential flow models. The modified non-linear Schrödinger Equation accurately models the relatively large scale non-linear changes to the shape of wave-groups, with a dramatic contraction of the group along the mean propagation direction and a corresponding extension of the width of the wave-crests. In addition, as the extreme wave forms, there is a local non-linear contraction of the wave-group around the crest which leads to a localised broadening of the wave spectrum which the bandwidth limited non-linear Schrödinger Equations struggle to capture. This limitation occurs for waves of moderate steepness and a narrow underlying spectrum.

  14. Finite linear diffusion model for design of overcharge protection for rechargeable lithium batteries

    NASA Technical Reports Server (NTRS)

    Narayanan, S. R.; Surampudi, S.; Attia, A. I.

    1991-01-01

    The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, standard reduction potential of the redox couple, and interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. The model has been experimentally verified using 1,1′-dimethylferrocene as a redox additive. The theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.
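
    The key design quantity in such a model is the steady-state diffusion-limited current density, i_lim = nFDC/d, which bounds the permissible overcharge rate. A back-of-envelope evaluation with illustrative values (not the paper's):

```python
# Steady-state limiting current density for a redox shuttle under the
# finite linear diffusion model: i_lim = n F D C / d. Parameter values
# below are illustrative, not taken from the paper.
F = 96485.0   # Faraday constant, C/mol
n = 1         # electrons transferred per shuttle molecule
D = 5e-6      # diffusion coefficient, cm^2/s
C = 5e-5      # shuttle concentration, mol/cm^3 (= 0.05 M)
d = 0.025     # interelectrode distance, cm

i_lim = n * F * D * C / d          # A/cm^2: maximum sustainable overcharge rate
print(round(i_lim * 1000, 3))      # -> 0.965 (mA/cm^2)
```

Doubling the shuttle concentration or halving the interelectrode gap doubles the protected overcharge rate, which is the kind of trade-off the model's design expressions capture.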

  15. A comparison of linear and nonlinear statistical techniques in performance attribution.

    PubMed

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and three nonlinear techniques--model selection, additive models, and neural networks--are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

  16. The NASA High Speed ASE Project: Computational Analyses of a Low-Boom Supersonic Configuration

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; DeLaGarza, Antonio; Zink, Scott; Bounajem, Elias G.; Johnson, Christopher; Buonanno, Michael; Sanetrik, Mark D.; Yoo, Seung Y.; Kopasakis, George; Christhilf, David M.

    2014-01-01

    A summary of NASA's High Speed Aeroservoelasticity (ASE) project is provided with a focus on a low-boom supersonic configuration developed by Lockheed-Martin and referred to as the N+2 configuration. The summary includes details of the computational models developed to date including a linear finite element model (FEM), linear unsteady aerodynamic models, structured and unstructured CFD grids, and discussion of the FEM development including sizing and structural constraints applied to the N+2 configuration. Linear results obtained to date include linear mode shapes and linear flutter boundaries. In addition to the tasks associated with the N+2 configuration, a summary of the work involving the development of AeroPropulsoServoElasticity (APSE) models is also discussed.

  17. [Application of SAS macros to evaluate multiplicative and additive interaction in logistic and Cox regression in clinical practice].

    PubMed

    Nie, Z Q; Ou, Y Q; Zhuang, J; Qu, Y J; Mai, J Z; Chen, J M; Liu, X Q

    2016-05-01

    Conditional and unconditional logistic regression analyses are commonly used in case-control studies, whereas the Cox proportional hazards model is often used in survival analysis. Most of the literature refers only to main-effect models; however, the generalized linear model differs from the general linear model, and interaction comprises both multiplicative and additive interaction. The former is only of statistical significance, but the latter has biological significance. In this paper, macros were written in SAS 9.4 to calculate the contrast ratio, the attributable proportion due to interaction and the synergy index alongside the interaction terms of logistic and Cox regressions, and Wald, delta and profile-likelihood confidence intervals were used to evaluate additive interaction, for reference in big-data analysis in clinical epidemiology and in analyses of genetic multiplicative and additive interactions.
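
    The additive-interaction measures such macros compute follow directly from the fitted coefficients. A minimal sketch with hypothetical coefficients (the SAS macros additionally provide the Wald, delta and profile-likelihood confidence intervals, which are omitted here):

```python
import math

def additive_interaction(b1, b2, b12):
    """Measures of additive interaction from logistic (or Cox) regression
    coefficients: b1, b2 are the main effects, b12 the product-term
    coefficient. Returns RERI, the attributable proportion due to
    interaction (AP), and the synergy index (S)."""
    or10 = math.exp(b1)
    or01 = math.exp(b2)
    or11 = math.exp(b1 + b2 + b12)
    reri = or11 - or10 - or01 + 1.0            # relative excess risk due to interaction
    ap = reri / or11                           # attributable proportion
    s = (or11 - 1.0) / ((or10 - 1.0) + (or01 - 1.0))  # synergy index
    return reri, ap, s

# Hypothetical fitted coefficients giving OR10 = 2, OR01 = 3, OR11 = 9.
reri, ap, s = additive_interaction(math.log(2), math.log(3), math.log(1.5))
print(round(reri, 3), round(ap, 3), round(s, 3))   # -> 5.0 0.556 2.667
```

RERI > 0 (equivalently S > 1) indicates super-additive joint effects even though the multiplicative interaction term here is modest (OR of 1.5).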

  18. Modelling female fertility traits in beef cattle using linear and non-linear models.

    PubMed

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somehow incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are kept for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models on three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better adjustment than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² < 0.08 and r < 0.13 for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.
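
    The heritabilities and repeatabilities reported above are simple ratios of the variance components estimated by the animal model. A one-line sketch with illustrative components (not the paper's estimates):

```python
# Heritability and repeatability from the variance components of an animal
# model with additive-genetic (a), permanent-environment (pe) and residual
# (e) effects. The component values are illustrative, not the paper's.
var_a, var_pe, var_e = 0.25, 0.05, 0.70
total = var_a + var_pe + var_e

h2 = var_a / total                 # narrow-sense heritability
rep = (var_a + var_pe) / total     # repeatability (an upper bound for h2)

print(round(h2, 3), round(rep, 3))   # -> 0.25 0.3
```

On non-linear (e.g. liability or Poisson) scales the components live on a transformed scale, which is one reason the record's threshold-type models report systematically higher h² than the linear ones.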

  19. A novel heuristic for optimization aggregate production problem: Evidence from flat panel display in Malaysia

    NASA Astrophysics Data System (ADS)

    Al-Kuhali, K.; Hussain M., I.; Zain Z., M.; Mullenix, P.

    2015-05-01

    Aim: This paper contributes to the flat panel display industry in terms of aggregate production planning. Methodology: Linear programming was applied to minimize the total production cost of LCD manufacturing. The decision variables are general production costs, additional costs incurred for overtime production, additional costs incurred for subcontracting, inventory carrying costs, backorder costs and adjustments for changes in labour levels. The model was developed for a manufacturer with several product types, up to a maximum of N, over a total time period of T. Results: An industrial case study based in Malaysia is presented to test and validate the developed linear programming model for aggregate production planning. Conclusion: The developed model fits under stable environment conditions. Overall, the proven linear programming model can be recommended for adaptation to production planning in the Malaysian flat panel display industry.
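
    A toy version of such an aggregate production LP, with invented costs, capacities and demands (not the paper's Malaysian case data), can be solved directly with scipy.optimize.linprog:

```python
from scipy.optimize import linprog

# Two-period toy aggregate production plan: regular production, overtime,
# and inventory carry-over, minimizing total cost.
# Variables: x = [r1, o1, i1, r2, o2]
c = [10, 15, 2, 10, 15]              # unit costs: regular, overtime, holding
A_eq = [[1, 1, -1, 0, 0],            # period 1 balance: r1 + o1 - i1 = d1
        [0, 0,  1, 1, 1]]            # period 2 balance: i1 + r2 + o2 = d2
b_eq = [100, 150]
bounds = [(0, 120), (0, 30), (0, None), (0, 120), (0, 30)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x.round(1), round(res.fun, 1))
# Optimum fills period-1 regular capacity and carries 20 units (12/unit:
# production plus holding) instead of extra period-2 overtime (15/unit);
# total cost 2590.
```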

  20. Constructing an Efficient Self-Tuning Aircraft Engine Model for Control and Health Management Applications

    NASA Technical Reports Server (NTRS)

    Armstrong, Jeffrey B.; Simon, Donald L.

    2012-01-01

    Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. 
    The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
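
    A scalar caricature of the gain-scheduling idea (all numbers invented, not from the NASA model): precompute steady-state Kalman gains at a few trim points, interpolate the gain at the current operating point, and run the fixed-gain filter.

```python
import numpy as np

# Scalar stand-in for a piecewise linear engine model: the state dynamics
# coefficient `a` varies with the operating point, and steady-state Kalman
# gains are precomputed at trim points, then interpolated between them.
Q, R = 0.05, 1.0                      # process / measurement noise variances

def steady_state_gain(a, iters=200):
    """Iterate the scalar Riccati recursion to the steady-state gain."""
    P = 1.0
    for _ in range(iters):
        P = a * a * P * R / (P + R) + Q
    return P / (P + R)

trim_a = np.array([0.5, 0.7, 0.9])                 # scheduled trim points
trim_K = np.array([steady_state_gain(a) for a in trim_a])

a_op = 0.8                                         # current operating point
K = float(np.interp(a_op, trim_a, trim_K))         # scheduled (interpolated) gain

# Run the fixed-gain filter on data generated at the operating point.
rng = np.random.default_rng(7)
n, x, xh = 2000, 0.0, 0.0
err_f = err_m = 0.0
for _ in range(n):
    x = a_op * x + rng.normal(0, np.sqrt(Q))       # true state
    y = x + rng.normal(0, np.sqrt(R))              # noisy measurement
    xh = a_op * xh + K * (y - a_op * xh)           # predict + correct
    err_f += (xh - x) ** 2
    err_m += (y - x) ** 2

print(round(err_f / n, 3), round(err_m / n, 3))  # filtered error well below raw measurement error
```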

  1. Structured functional additive regression in reproducing kernel Hilbert spaces.

    PubMed

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2014-06-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application.

  2. Iterative algorithms for large sparse linear systems on parallel computers

    NASA Technical Reports Server (NTRS)

    Adams, L. M.

    1982-01-01

    Algorithms for assembling in parallel the sparse system of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering are developed. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.
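
    Jacobi iteration is the archetype of the parallel stationary methods this record describes: every unknown is updated from the previous iterate only, so all updates within a sweep are independent and map naturally onto array architectures. A serial sketch on the 1D Poisson model problem:

```python
import numpy as np

# Jacobi iteration on the sparse system from a 1D Poisson finite difference
# discretization (-u'' = f on (0,1), u(0) = u(1) = 0). For the tridiagonal
# matrix with stencil (-1, 2, -1), the sweep is u_i <- (b_i + u_{i-1} + u_{i+1}) / 2,
# and every component can be updated in parallel from the old iterate.
n = 50
h = 1.0 / (n + 1)
b = (h * h) * np.ones(n)               # right-hand side for f = 1

u = np.zeros(n)
for _ in range(20000):
    left = np.concatenate(([0.0], u[:-1]))     # u_{i-1}, with boundary value 0
    right = np.concatenate((u[1:], [0.0]))     # u_{i+1}, with boundary value 0
    u_new = 0.5 * (b + left + right)
    if np.max(np.abs(u_new - u)) < 1e-12:
        u = u_new
        break
    u = u_new

# The exact solution of -u'' = 1 with zero boundary values is u(x) = x(1-x)/2,
# and the standard second-difference stencil is exact for quadratics.
xg = h * np.arange(1, n + 1)
print(float(np.max(np.abs(u - xg * (1 - xg) / 2))))  # tiny residual error
```

Jacobi converges slowly on this problem (which is why preconditioned conjugate gradients, as in the record, are preferred), but its fully independent updates make it the natural baseline for parallel comparisons.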

  3. Difference-based ridge-type estimator of parameters in restricted partial linear model with correlated errors.

    PubMed

    Wu, Jibo

    2016-01-01

    In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints hold on the whole parameter space. The estimator's mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.

  4. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape

    PubMed Central

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. 
Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables. PMID:29713298

  5. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape.

    PubMed

    Coupé, Christophe

    2018-01-01

As statistical approaches are increasingly used in linguistics, attention must be paid to the choice of methods and algorithms. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimate of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships.
Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables.

  6. Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling.

    PubMed

    Wu, Hulin; Lu, Tao; Xue, Hongqi; Liang, Hua

    2014-04-02

The gene regulatory network (GRN) is a high-dimensional complex system, which can be represented by various mathematical or statistical models. The ordinary differential equation (ODE) model is one of the popular dynamic GRN models. High-dimensional linear ODE models have been proposed to identify GRNs, but are limited by the assumption of linear regulation effects. In this article, we propose a sparse additive ODE (SA-ODE) model, coupled with ODE estimation methods and adaptive group LASSO techniques, to model dynamic GRNs that can flexibly deal with nonlinear regulation effects. The asymptotic properties of the proposed method are established and simulation studies are performed to validate the proposed approach. An application example for identifying the nonlinear dynamic GRN of T-cell activation is used to illustrate the usefulness of the proposed method.

  7. Comparison of all atom, continuum, and linear fitting empirical models for charge screening effect of aqueous medium surrounding a protein molecule

    NASA Astrophysics Data System (ADS)

Takahashi, Takuya; Sugiura, Junnosuke; Nagayama, Kuniaki

    2002-05-01

To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in an aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly consider every atom (i.e., an all atom model). This all atom calculated potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well formulated with an effective relative dielectric constant which increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. Correlation between the all atom model and the continuum models was found to be better than the respective correlation calculated for linear fitting to the two models. This confirms that the continuum model is better at treating the complicated shapes of protein conformations than the simple linear fitting empirical model. We have tried a sigmoid fitting empirical model in addition to the linear one. When the weights of all data were treated equally, the sigmoid model, which requires two fitting parameters, fits the results of both the all atom and the continuum models less accurately than the linear model, which requires only one fitting parameter. When potential values are chosen as weighting factors, the fitting error of the sigmoid model became smaller, and the slope of both linear fitting curves became smaller. This suggests the screening effect of an aqueous medium within a short range, where potential values are relatively large, is smaller than that expected from the linear fitting curve, whose slope is almost 4.
To investigate the linear increase of the effective relative dielectric constant, the Poisson equation of a low-dielectric sphere in a high-dielectric medium was solved, and charges distributed near the molecular surface were shown to give rise to the apparent linearity.
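
The linearly increasing effective dielectric constant described above implies that the screened interaction falls off as 1/r² rather than the vacuum 1/r. A toy sketch in reduced units (the slope of 4 is taken from the abstract; the charges and the folded-in physical constants are illustrative):

```python
import numpy as np

# Effective relative dielectric rising linearly with distance:
# eps(r) = a * r, with slope a ~ 4 as reported in the abstract.
# Charges and the 1/(4*pi*eps0) prefactor are folded into reduced units.
a = 4.0

def screened_potential(q1, q2, r):
    """V = q1*q2 / (eps(r) * r); with eps(r) = a*r this decays as 1/r**2."""
    return q1 * q2 / (a * r * r)

r = np.linspace(2.0, 12.0, 6)               # charge-charge distances
v = screened_potential(1.0, -1.0, r)
ratio = v[0] / v[-1]                        # (12/2)**2 = 36
print(ratio)
```

Going from r = 2 to r = 12, the screened potential shrinks by a factor of 36 rather than the factor of 6 a fixed dielectric would give, which is the practical content of the linear screening law.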

  8. A Bivariate Generalized Linear Item Response Theory Modeling Framework to the Analysis of Responses and Response Times.

    PubMed

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-01-01

    A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.

  9. Investigation of the flight mechanics simulation of a hovering helicopter

    NASA Technical Reports Server (NTRS)

    Chaimovich, M.; Rosen, A.; Rand, O.; Mansur, M. H.; Tischler, M. B.

    1992-01-01

The flight mechanics simulation of a hovering helicopter is investigated by comparing the results of two different numerical models with flight test data for a hovering AH-64 Apache. The two models are the U.S. Army BEMAP and the Technion model. These nonlinear models are linearized by applying a numerical linearization procedure. The results of the linear models are compared with identification results in terms of eigenvalues, stability and control derivatives, and frequency responses. Detailed time histories of the responses of the complete nonlinear models, as a result of various pilots' inputs, are compared with flight test results. In addition, the sensitivity of the models to various effects is also investigated. The results are discussed and problematic aspects of the simulation are identified.
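
The numerical linearization step can be sketched generically: perturb each state in turn, difference the nonlinear dynamics, and read off the system matrices and their eigenvalues. The toy dynamics below are a damped pendulum, not a helicopter model; only the procedure is the point:

```python
import numpy as np

def f(x, u):
    """Toy nonlinear dynamics (a damped, driven pendulum -- illustrative)."""
    theta, omega = x
    return np.array([omega, -np.sin(theta) - 0.2 * omega + u])

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians A = df/dx and B = df/du at (x0, u0)."""
    n = len(x0)
    A = np.zeros((n, n))
    for j in range(n):
        d = np.zeros(n)
        d[j] = eps
        A[:, j] = (f(x0 + d, u0) - f(x0 - d, u0)) / (2 * eps)
    B = ((f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)).reshape(n, 1)
    return A, B

A, B = linearize(f, np.zeros(2), 0.0)
eigvals = np.linalg.eigvals(A)       # poles of the linearized model
print(A)
print(eigvals)
```

The eigenvalues of `A` play the role of the modes compared against identification results in the abstract; here they come out lightly damped and stable.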

  10. A new adaptive multiple modelling approach for non-linear and non-stationary systems

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Gong, Yu; Hong, Xia

    2016-07-01

This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least square (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window, and apply the sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever performs best. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
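
The RLS adaptation applied to each selected sub-model follows the standard recursion. A minimal numpy sketch with a forgetting factor (the toy regression data and all parameter values are illustrative, not the paper's setup):

```python
import numpy as np

def rls_update(w, P, x, y, lam=0.98):
    """One recursive least squares step with forgetting factor lam."""
    x = x.reshape(-1, 1)
    k = P @ x / (lam + (x.T @ P @ x).item())   # gain vector
    e = y - (w.T @ x).item()                   # a priori prediction error
    w = w + k * e
    P = (P - k @ x.T @ P) / lam
    return w, P

rng = np.random.default_rng(1)
w_true = np.array([[2.0], [-1.0]])
w = np.zeros((2, 1))
P = 100.0 * np.eye(2)                          # large initial covariance
for _ in range(300):
    x = rng.standard_normal(2)
    y = (w_true.T @ x.reshape(-1, 1)).item() + 0.01 * rng.standard_normal()
    w, P = rls_update(w, P, x, y)
print(w.ravel())
```

The forgetting factor below 1 is what lets the weights track a non-stationary system, at the cost of slightly noisier estimates.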

  11. Random regression models using Legendre polynomials or linear splines for test-day milk yield of dairy Gyr (Bos indicus) cattle.

    PubMed

    Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G

    2013-01-01

Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and Bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and that applying third-order Legendre polynomials for both additive genetic and permanent environmental effects indicates that, in practice, the same bulls would be selected by both models. The latter model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
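
The Legendre covariates used in such random regression models are built by mapping days in milk onto [-1, 1] and evaluating the polynomials there. A numpy sketch (the lactation range 5-305 and the third-order fit are illustrative; the normalization constants often applied in practice are omitted):

```python
import numpy as np

# Days in milk mapped to [-1, 1], the domain of the Legendre polynomials
# used as random regression covariates.
dim = np.arange(5, 306)
t = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0

order = 3                      # e.g., third-order for the additive genetic effect
# Column j of Phi holds the j-th Legendre polynomial P_j evaluated at t;
# np.eye(order + 1)[j] selects the coefficient vector of P_j.
Phi = np.column_stack([
    np.polynomial.legendre.legval(t, np.eye(order + 1)[j])
    for j in range(order + 1)
])
print(Phi.shape)               # one row per day in milk, one column per P_j
```

Each animal's curve is then a linear combination of these columns with animal-specific random coefficients.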

  12. Structured functional additive regression in reproducing kernel Hilbert spaces

    PubMed Central

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2013-01-01

Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The use of a data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting the nonlinear additive components has been less studied. In this work, we propose a new regularization framework for structure estimation in the context of reproducing kernel Hilbert spaces. The proposed approach takes advantage of functional principal components, which greatly facilitates implementation and theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application. PMID:25013362

  13. A simplified approach to quasi-linear viscoelastic modeling

    PubMed Central

    Nekouzadeh, Ali; Pryse, Kenneth M.; Elson, Elliot L.; Genin, Guy M.

    2007-01-01

    The fitting of quasi-linear viscoelastic (QLV) constitutive models to material data often involves somewhat cumbersome numerical convolution. A new approach to treating quasi-linearity in one dimension is described and applied to characterize the behavior of reconstituted collagen. This approach is based on a new principle for including nonlinearity and requires considerably less computation than other comparable models for both model calibration and response prediction, especially for smoothly applied stretching. Additionally, the approach allows relaxation to adapt with the strain history. The modeling approach is demonstrated through tests on pure reconstituted collagen. Sequences of “ramp-and-hold” stretching tests were applied to rectangular collagen specimens. The relaxation force data from the “hold” was used to calibrate a new “adaptive QLV model” and several models from literature, and the force data from the “ramp” was used to check the accuracy of model predictions. Additionally, the ability of the models to predict the force response on a reloading of the specimen was assessed. The “adaptive QLV model” based on this new approach predicts collagen behavior comparably to or better than existing models, with much less computation. PMID:17499254
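
The "cumbersome numerical convolution" of classical QLV refers to the hereditary integral σ(t) = ∫₀ᵗ G(t−s) dσᵉ/ds ds. A direct discretization, with an illustrative one-term reduced relaxation function and exponential elastic law (a generic QLV form, not the authors' adaptive model), looks like:

```python
import numpy as np

def G(t, g=0.5, tau=1.0):
    """One-term reduced relaxation function (illustrative parameters)."""
    return (1.0 - g) + g * np.exp(-t / tau)

def elastic_stress(strain, A=1.0, B=2.0):
    """Instantaneous elastic response, the usual QLV exponential form."""
    return A * (np.exp(B * strain) - 1.0)

dt = 0.01
t = np.arange(0.0, 5.0, dt)
strain = np.minimum(t, 0.1)            # ramp to 10% strain, then hold

dse = np.gradient(elastic_stress(strain), dt)      # d(sigma_e)/dt
# Hereditary integral: sigma(t_i) ~= sum_j G(t_i - t_j) * dse_j * dt
sigma = np.array([np.sum(G(ti - t[:i + 1]) * dse[:i + 1]) * dt
                  for i, ti in enumerate(t)])

peak, final = sigma.max(), sigma[-1]
print(peak, final)                     # stress relaxes during the hold
```

The cost is the point: every output sample re-sums the whole history (O(n²) overall), which is exactly the burden the adaptive QLV formulation above is designed to avoid.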

  14. Ambient temperature and coronary heart disease mortality in Beijing, China: a time series study

    PubMed Central

    2012-01-01

Background: Many studies have examined the association between ambient temperature and mortality. However, less evidence is available on the temperature effects on coronary heart disease (CHD) mortality, especially in China. In this study, we examined the relationship between ambient temperature and CHD mortality in Beijing, China during 2000 to 2011. In addition, we compared time series and time-stratified case-crossover models for the non-linear effects of temperature. Methods: We examined the effects of temperature on CHD mortality using both time series and time-stratified case-crossover models. We also assessed the effects of temperature on CHD mortality by subgroups: gender (female and male) and age (age ≥ 65 and age < 65). We used a distributed lag non-linear model to examine the non-linear effects of temperature on CHD mortality up to 15 lag days. We used the Akaike information criterion to assess the model fit for the two designs. Results: The time series models had a better model fit than the time-stratified case-crossover models. Both designs showed that the relationships between temperature and group-specific CHD mortality were non-linear. Extreme cold and hot temperatures significantly increased the risk of CHD mortality. Hot effects were acute and short-term, while cold effects were delayed by two days and lasted for five days. Older people and women were more sensitive to extreme cold and hot temperatures than younger people and men. Conclusions: This study suggests that time series models performed better than time-stratified case-crossover models according to the model fit, even though they produced similar non-linear effects of temperature on CHD mortality. In addition, our findings indicate that extreme cold and hot temperatures increase the risk of CHD mortality in Beijing, China, particularly for women and older people. PMID:22909034

  15. Propagating synchrony in feed-forward networks

    PubMed Central

    Jahnke, Sven; Memmesheimer, Raoul-Martin; Timme, Marc

    2013-01-01

    Coordinated patterns of precisely timed action potentials (spikes) emerge in a variety of neural circuits but their dynamical origin is still not well understood. One hypothesis states that synchronous activity propagating through feed-forward chains of groups of neurons (synfire chains) may dynamically generate such spike patterns. Additionally, synfire chains offer the possibility to enable reliable signal transmission. So far, mostly densely connected chains, often with all-to-all connectivity between groups, have been theoretically and computationally studied. Yet, such prominent feed-forward structures have not been observed experimentally. Here we analytically and numerically investigate under which conditions diluted feed-forward chains may exhibit synchrony propagation. In addition to conventional linear input summation, we study the impact of non-linear, non-additive summation accounting for the effect of fast dendritic spikes. The non-linearities promote synchronous inputs to generate precisely timed spikes. We identify how non-additive coupling relaxes the conditions on connectivity such that it enables synchrony propagation at connectivities substantially lower than required for linearly coupled chains. Although the analytical treatment is based on a simple leaky integrate-and-fire neuron model, we show how to generalize our methods to biologically more detailed neuron models and verify our results by numerical simulations with, e.g., Hodgkin Huxley type neurons. PMID:24298251

  16. Approximate reduction of linear population models governed by stochastic differential equations: application to multiregional models.

    PubMed

    Sanz, Luis; Alonso, Juan Antonio

    2017-12-01

In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system, involving many coupled variables and processes with different time scales, into a simpler reduced model with a smaller number of 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we contemplate a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with fewer variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced system. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the populations in each patch is affected by additive noise.

  17. Modeling the effects of AADT on predicting multiple-vehicle crashes at urban and suburban signalized intersections.

    PubMed

    Chen, Chen; Xie, Yuanchang

    2016-06-01

Annual Average Daily Traffic (AADT) is often considered as a main covariate for predicting crash frequencies at urban and suburban intersections. A linear functional form is typically assumed for the Safety Performance Function (SPF) to describe the relationship between the natural logarithm of expected crash frequency and covariates derived from AADTs. Such a linearity assumption has been questioned by many researchers. This study applies Generalized Additive Models (GAMs) and Piecewise Linear Negative Binomial (PLNB) regression models to fit intersection crash data. Various covariates derived from minor- and major-approach AADTs are considered. Three different dependent variables are modeled, which are total multiple-vehicle crashes, rear-end crashes, and angle crashes. The modeling results suggest that a nonlinear functional form may be more appropriate. Also, the results show that it is important to take into consideration the joint safety effects of multiple covariates. Additionally, it is found that the ratio of minor- to major-approach AADT has a varying impact on intersection safety and deserves further investigation. Copyright © 2016 Elsevier Ltd. All rights reserved.
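
The piecewise linear term in log(AADT) at the core of a PLNB specification is just a hinge basis whose slope can change at each knot. A small sketch (knot locations and AADT values are illustrative):

```python
import numpy as np

def hinge_features(aadt, knots):
    """Piecewise linear basis in log(AADT): intercept, a main slope, and
    one hinge max(0, x - log k) per knot so the fitted slope can change
    there. Knot values are illustrative, not from the study."""
    x = np.log(aadt)
    cols = [np.ones_like(x), x]
    cols += [np.maximum(0.0, x - np.log(k)) for k in knots]
    return np.column_stack(cols)

aadt = np.array([500.0, 2000.0, 8000.0, 20000.0, 45000.0])
X = hinge_features(aadt, knots=[5000.0, 15000.0])
print(X.shape)                 # (5, 4): intercept, log-AADT, two hinges
print(X[:, 2])                 # hinge is zero below the first knot
```

These columns would enter the negative binomial log-link linear predictor, so the AADT elasticity is allowed to differ on either side of each knot.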

  18. Analysis of propulsion system dynamics in the validation of a high-order state space model of the UH-60

    NASA Technical Reports Server (NTRS)

    Kim, Frederick D.

    1992-01-01

Frequency responses generated from a high-order linear model of the UH-60 Black Hawk have shown that the propulsion system significantly influences the vertical and yaw dynamics of the aircraft at frequencies important to high-bandwidth control law designs. The inclusion of the propulsion system comprises the latest step in the development of a high-order linear model of the UH-60 that additionally models the dynamics of the fuselage, rotor, and inflow. A complete validation study of the linear model is presented in the frequency domain for both on-axis and off-axis coupled responses in the hover flight condition, and on-axis responses for forward speeds of 80 and 120 knots.

  19. Heat capacities and volumetric changes in the glass transition range: a constitutive approach based on the standard linear solid

    NASA Astrophysics Data System (ADS)

    Lion, Alexander; Mittermeier, Christoph; Johlitz, Michael

    2017-09-01

    A novel approach to represent the glass transition is proposed. It is based on a physically motivated extension of the linear viscoelastic Poynting-Thomson model. In addition to a temperature-dependent damping element and two linear springs, two thermal strain elements are introduced. In order to take the process dependence of the specific heat into account and to model its characteristic behaviour below and above the glass transition, the Helmholtz free energy contains an additional contribution which depends on the temperature history and on the current temperature. The model describes the process-dependent volumetric and caloric behaviour of glass-forming materials, and defines a functional relationship between pressure, volumetric strain, and temperature. If a model for the isochoric part of the material behaviour is already available, for example a model of finite viscoelasticity, the caloric and volumetric behaviour can be represented with the current approach. The proposed model allows computing the isobaric and isochoric heat capacities in closed form. The difference c_p -c_v is process-dependent and tends towards the classical expression in the glassy and equilibrium ranges. Simulations and theoretical studies demonstrate the physical significance of the model.

  20. Numerical solution of non-linear dual-phase-lag bioheat transfer equation within skin tissues.

    PubMed

    Kumar, Dinesh; Kumar, P; Rai, K N

    2017-11-01

This paper deals with numerical modeling and simulation of heat transfer in skin tissues using a non-linear dual-phase-lag (DPL) bioheat transfer model under a periodic heat flux boundary condition. The blood perfusion is assumed temperature-dependent, which results in a non-linear DPL bioheat transfer model in order to predict more accurate results. A numerical method of lines, which is based on finite difference and Runge-Kutta (4,5) schemes, is used to solve the present non-linear problem. For a specific case, the exact solution has been obtained and compared with the present numerical scheme, and the two are in good agreement. A comparison based on a model selection criterion (AIC) has been made among non-linear DPL models in which the variation of blood perfusion rate with temperature is of constant, linear, or exponential type, against the experimental data, and it has been found that the non-linear DPL model with exponential variation of blood perfusion rate is closest to the experimental data. In addition, it is found that, due to the absence of phase-lag phenomena, the Pennes bioheat transfer model achieves steady state more quickly and always predicts higher temperatures than the non-linear thermal and DPL models. The effect of the coefficient of blood perfusion rate, dimensionless heating frequency, and Kirchhoff number on the dimensionless temperature distribution has also been analyzed. The whole analysis is presented in dimensionless form. Copyright © 2017 Elsevier Inc. All rights reserved.
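
The method of lines reduces a PDE to an ODE system by discretizing space only. A stripped-down sketch for the linear 1-D heat equation (forward Euler in time and a fixed boundary temperature; the paper's non-linear DPL terms, periodic flux boundary, and RK45 stepping are all omitted):

```python
import numpy as np

# Method of lines for u_t = alpha * u_xx on [0, 1]:
# finite differences in space turn the PDE into du/dt = rhs(u),
# which is then stepped as an ODE system.
alpha, nx = 0.1, 51
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx / alpha          # below the explicit stability limit of 0.5
u = np.zeros(nx)
u[0] = 1.0                          # heated surface; far boundary held at 0

def rhs(u):
    du = np.zeros_like(u)
    du[1:-1] = alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    return du                       # boundary nodes stay fixed

for _ in range(2000):
    u = u + dt * rhs(u)

print(u[1], u[-2])                  # temperature falls off with depth
```

Swapping `rhs` for the non-linear DPL right-hand side and the explicit loop for an adaptive RK45 integrator recovers the structure of the scheme the abstract describes.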

  1. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    PubMed

    Lu, Tao

    2016-01-01

We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by asymmetric distributions to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset and comparisons with alternative models are performed.

  2. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
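
A fixed-knot truncated power basis of the kind such models reparameterize can be built in a few lines; the knot locations, polynomial order, and test curve below are illustrative:

```python
import numpy as np

def spline_design(x, knots, order=2):
    """Truncated power basis: global polynomial terms plus one
    max(0, x - k)**order term per knot, giving a piecewise polynomial
    that is continuous with order-1 continuous derivatives at each knot."""
    cols = [x ** d for d in range(order + 1)]
    cols += [np.maximum(0.0, x - k) ** order for k in knots]
    return np.column_stack(cols)

x = np.linspace(0.0, 10.0, 200)
X = spline_design(x, knots=[3.0, 7.0])
print(X.shape)                 # (200, 5): 1, x, x^2, (x-3)+^2, (x-7)+^2

# Fixed-knot least-squares fit of a curved trend (illustrative data)
y = np.sin(x / 2.0)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
print(resid.std())
```

In the mixed-model setting, columns of such a design matrix are assigned to the fixed and/or random effects; varying the `order` argument per segment is the generalization the abstract describes.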

  3. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    PubMed

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  4. Learning epistatic interactions from sequence-activity data to predict enantioselectivity

    NASA Astrophysics Data System (ADS)

    Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K.; Bodén, Mikael

    2017-12-01

Enzymes with a high selectivity are desirable for improving the economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity, mutations are often introduced near the catalytic active site. In this compact environment epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree = 2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91, respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvements from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree = 2 models, with values increasing from r = 0.51 to r = 0.87, respectively. The study demonstrates that linear models perform well; however, the representation of epistatic interactions in predictive models improves identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.
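
The gain from modeling epistasis can be reproduced on synthetic data: when the true activity contains a pairwise product term, adding degree-2 interaction features raises the in-sample fit, mirroring the linear-versus-polynomial comparison above (ordinary least squares stands in for SVM regression; all data are simulated):

```python
import numpy as np

rng = np.random.default_rng(2)

# Binary "mutation" indicators at four hypothetical sites (simulated).
X = rng.integers(0, 2, size=(300, 4)).astype(float)
# True activity includes a pairwise (epistatic) product of sites 0 and 1.
y = X @ np.array([1.0, -0.5, 0.3, 0.2]) + 1.5 * X[:, 0] * X[:, 1]
y = y + 0.05 * rng.standard_normal(len(y))

def fit_pearson_r(F, y):
    """OLS fit with intercept; returns the in-sample Pearson r."""
    F1 = np.column_stack([np.ones(len(F)), F])
    beta, *_ = np.linalg.lstsq(F1, y, rcond=None)
    return np.corrcoef(y, F1 @ beta)[0, 1]

r_linear = fit_pearson_r(X, y)                    # additive terms only
pairs = np.column_stack([X[:, i] * X[:, j]
                         for i in range(4) for j in range(i + 1, 4)])
r_inter = fit_pearson_r(np.column_stack([X, pairs]), y)
print(r_linear, r_inter)    # interaction model fits markedly better
```

The pairwise product columns play the same role as the degree-2 polynomial kernel in the study: they let the model express non-additive residue contributions.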

  6. Learning epistatic interactions from sequence-activity data to predict enantioselectivity.

    PubMed

    Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K; Bodén, Mikael

    2017-12-01

    Enzymes with high selectivity are desirable for improving the economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity, mutations are often introduced near the catalytic active site. In this compact environment, epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models, we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree = 2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91, respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvements from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree = 2 models, with values increasing from r = 0.51 to r = 0.87, respectively. The study demonstrates that linear models perform well; however, representing epistatic interactions in predictive models improves identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.
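
    The kernel comparison described in the record above can be illustrated with a toy kernel ridge regression (a stand-in for the authors' support vector regression; the sites, effect sizes, and data below are synthetic, not the AnEH variant set). A degree-2 polynomial kernel can represent a pairwise epistatic term that a linear kernel cannot:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sequence-activity data: 6 binary mutation sites, additive effects
# plus one strong pairwise (epistatic) interaction between sites 0 and 1.
n, L = 120, 6
X = rng.integers(0, 2, size=(n, L)).astype(float)
beta = rng.normal(size=L)
y = X @ beta + 1.5 * X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=n)
Xtr, Xte, ytr, yte = X[:80], X[80:], y[:80], y[80:]

def kernel_ridge_predict(Ktr, Kte, ytr, lam=1e-3):
    """Kernel ridge: alpha = (K + lam I)^-1 y; predictions are K* alpha."""
    alpha = np.linalg.solve(Ktr + lam * np.eye(len(ytr)), ytr)
    return Kte @ alpha

lin_tr, lin_te = Xtr @ Xtr.T, Xte @ Xtr.T      # linear kernel
poly_tr = (Xtr @ Xtr.T + 1.0) ** 2             # degree-2 polynomial kernel
poly_te = (Xte @ Xtr.T + 1.0) ** 2

r_lin = np.corrcoef(kernel_ridge_predict(lin_tr, lin_te, ytr), yte)[0, 1]
r_poly = np.corrcoef(kernel_ridge_predict(poly_tr, poly_te, ytr), yte)[0, 1]
```

    As in the study, any gain of the degree-2 model over the linear one here comes entirely from the interaction term built into the simulated activities.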

  7. Cole-Cole, linear and multivariate modeling of capacitance data for on-line monitoring of biomass.

    PubMed

    Dabros, Michal; Dennewald, Danielle; Currie, David J; Lee, Mark H; Todd, Robert W; Marison, Ian W; von Stockar, Urs

    2009-02-01

    This work evaluates three techniques of calibrating capacitance (dielectric) spectrometers used for on-line monitoring of biomass: modeling of cell properties using the theoretical Cole-Cole equation, linear regression of dual-frequency capacitance measurements on biomass concentration, and multivariate (PLS) modeling of scanning dielectric spectra. The performance and robustness of each technique is assessed during a sequence of validation batches in two experimental settings of differing signal noise. In more noisy conditions, the Cole-Cole model had significantly higher biomass concentration prediction errors than the linear and multivariate models. The PLS model was the most robust in handling signal noise. In less noisy conditions, the three models performed similarly. Estimates of the mean cell size were done additionally using the Cole-Cole and PLS models, the latter technique giving more satisfactory results.

  8. Low dose radiation risks for women surviving the a-bombs in Japan: generalized additive model.

    PubMed

    Dropkin, Greg

    2016-11-24

    Analyses of cancer mortality and incidence in Japanese A-bomb survivors have been used to estimate radiation risks, which are generally higher for women. Relative Risk (RR) is usually modelled as a linear function of dose. Extrapolation from data including high doses predicts small risks at low doses. Generalized Additive Models (GAMs) are flexible methods for modelling non-linear behaviour. GAMs are applied to cancer incidence in female low dose subcohorts, using anonymous public data for the 1958 - 1998 Life Span Study, to test for linearity, explore interactions, adjust for the skewed dose distribution, examine significance below 100 mGy, and estimate risks at 10 mGy. For all solid cancer incidence, RR estimated from 0 - 100 mGy and 0 - 20 mGy subcohorts is significantly raised. The response tapers above 150 mGy. At low doses, RR increases with age-at-exposure and decreases with time-since-exposure, the preferred covariate. Using the empirical cumulative distribution of dose improves model fit, and capacity to detect non-linear responses. RR is elevated over wide ranges of covariate values. Results are stable under simulation, or when removing exceptional data cells, or adjusting neutron RBE. Estimates of Excess RR at 10 mGy using the cumulative dose distribution are 10 - 45 times higher than extrapolations from a linear model fitted to the full cohort. Below 100 mGy, quasi-Poisson models find significant effects for all solid, squamous, uterus, corpus, and thyroid cancers, and for respiratory cancers when age-at-exposure > 35 yrs. Results for the thyroid are compatible with studies of children treated for tinea capitis, and Chernobyl survivors. Results for the uterus are compatible with studies of UK nuclear workers and the Techa River cohort. Non-linear models find large, significant cancer risks for Japanese women exposed to low dose radiation from the atomic bombings. The risks should be reflected in protection standards.
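
    The device mentioned above of replacing raw dose by its empirical cumulative distribution can be sketched in a few lines; the lognormal dose distribution below is a hypothetical stand-in for the skewed cohort dose data, not the LSS data itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical skewed dose distribution: most survivors at low dose.
dose = rng.lognormal(mean=-2.0, sigma=1.2, size=500)

def ecdf_transform(x):
    """Replace each value by its rank fraction (empirical CDF), spreading
    a skewed covariate evenly over (0, 1] while preserving order."""
    ranks = np.argsort(np.argsort(x))
    return (ranks + 1) / len(x)

u = ecdf_transform(dose)
```

    A smoother fitted on u rather than on dose then spends its flexibility evenly across the cohort instead of concentrating it in the sparse high-dose tail.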

  9. Probing kinematics and fate of the Universe with linearly time-varying deceleration parameter

    NASA Astrophysics Data System (ADS)

    Akarsu, Özgür; Dereli, Tekin; Kumar, Suresh; Xu, Lixin

    2014-02-01

    The parametrizations q = q0 + q1 z and q = q0 + q1(1 - a/a0) (Chevallier-Polarski-Linder parametrization) of the deceleration parameter, which are linear in cosmic redshift z and scale factor a, have been frequently utilized in the literature to study the kinematics of the Universe. In this paper, we follow a strategy that leads to these two well-known parametrizations of the deceleration parameter as well as an additional new parametrization, q = q0 + q1(1 - t/t0), which is linear in cosmic time t. We study the features of this linearly time-varying deceleration parameter in contrast with the other two linear parametrizations. We investigate in detail the kinematics of the Universe by confronting the three models with the latest observational data. We further study the dynamics of the Universe by considering the linearly time-varying deceleration parameter model in comparison with the standard ΛCDM model. We also discuss the future of the Universe in the context of the models under consideration.

  10. Testcross additive and dominance effects in best linear unbiased prediction of maize single-cross performance.

    PubMed

    Bernardo, R

    1996-11-01

    Best linear unbiased prediction (BLUP) has been found to be useful in maize (Zea mays L.) breeding. The advantage of including both testcross additive and dominance effects (Intralocus Model) in BLUP, rather than only testcross additive effects (Additive Model), has not been clearly demonstrated. The objective of this study was to compare the usefulness of Intralocus and Additive Models for BLUP of maize single-cross performance. Multilocation data from 1990 to 1995 were obtained from the hybrid testing program of Limagrain Genetics. Grain yield, moisture, stalk lodging, and root lodging of untested single crosses were predicted from (1) the performance of tested single crosses and (2) known genetic relationships among the parental inbreds. Correlations between predicted and observed performance were obtained with a delete-one cross-validation procedure. For the Intralocus Model, the correlations ranged from 0.50 to 0.66 for yield, 0.88 to 0.94 for moisture, 0.47 to 0.69 for stalk lodging, and 0.31 to 0.45 for root lodging. The BLUP procedure was consistently more effective with the Intralocus Model than with the Additive Model. When the Additive Model was used instead of the Intralocus Model, the reductions in the correlation were largest for root lodging (0.06-0.35), smallest for moisture (0.00-0.02), and intermediate for yield (0.02-0.06) and stalk lodging (0.02-0.08). The ratio of dominance variance (VD) to total genetic variance (VG) was highest for root lodging (0.47) and lowest for moisture (0.10). The Additive Model may be used if prior information indicates that VD for a given trait has little contribution to VG. Otherwise, the continued use of the Intralocus Model for BLUP of single-cross performance is recommended.
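
    BLUP of random genetic effects alongside fixed effects, as used in the study above, can be sketched with Henderson's mixed-model equations on a toy data set (unrelated parents, intercept-only fixed effect; all numbers below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 12 records on progeny of 3 unrelated parents; one fixed
# effect (the overall mean) and a random additive genetic effect per parent.
n_u, n_rec = 3, 12
parent = rng.integers(0, n_u, n_rec)
Z = np.zeros((n_rec, n_u))
Z[np.arange(n_rec), parent] = 1.0
X = np.ones((n_rec, 1))
u_true = rng.normal(0.0, 2.0, n_u)
y = X @ np.array([10.0]) + Z @ u_true + rng.normal(0.0, 1.0, n_rec)

lam = 1.0 / 4.0            # variance ratio sigma_e^2 / sigma_u^2
A_inv = np.eye(n_u)        # inverse relationship matrix (A = I: unrelated)

# Henderson's mixed-model equations give BLUE of b and BLUP of u jointly.
lhs = np.block([[X.T @ X, X.T @ Z],
                [Z.T @ X, Z.T @ Z + lam * A_inv]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)
b_hat, u_blup = sol[:1], sol[1:]
```

    Adding dominance effects (the Intralocus Model) would append a second random-effect block to the same system; dropping it recovers the Additive Model.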

  11. Noise limitations in optical linear algebra processors.

    PubMed

    Batsell, S G; Jong, T L; Walkup, J F; Krile, T F

    1990-05-10

    A general statistical noise model is presented for optical linear algebra processors. A statistical analysis which includes device noise, the multiplication process, and the addition operation is undertaken. We focus on those processes which are architecturally independent. Finally, experimental results which verify the analytical predictions are also presented.

  12. Kelvin-Voigt model of wave propagation in fragmented geomaterials with impact damping

    NASA Astrophysics Data System (ADS)

    Khudyakov, Maxim; Pasternak, Elena; Dyskin, Arcady

    2017-04-01

    When a wave propagates through real materials, energy dissipation occurs. The effect of energy loss in homogeneous materials can be accounted for by using simple viscous models. However, a reliable model representing the effect in fragmented geomaterials has not been established yet. The main reason is the mechanism by which vibrations are transmitted between the elements (fragments) of these materials. It is hypothesised that the fragments strike against each other in the process of oscillation, and the impacts lead to the energy loss. We assume that the energy loss is well represented by the restitution coefficient. The principal element of this concept is the interaction of two adjacent blocks. We model it by a simple linear oscillator (a mass on an elastic spring) with an additional condition: each time the system travels through the neutral point, where the displacement is equal to zero, the velocity is reduced by multiplying it by the restitution coefficient, which characterises an impact of the fragments. This additional condition renders the system non-linear. We show that the behaviour of such a model, averaged over times much larger than the system period, can approximately be represented by a conventional linear oscillator with linear damping characterised by a damping coefficient expressible through the restitution coefficient. Based on this, wave propagation at times considerably greater than the resonance period of oscillations of the neighbouring blocks can be modelled using the Kelvin-Voigt model. The wave velocities and the dispersion relations are obtained.
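
    The impact-damping mechanism described above is easy to reproduce numerically: an undamped oscillator whose speed drops by the restitution coefficient at each neutral-point crossing decays geometrically, just like a linearly damped oscillator. This is a sketch under assumed parameters, not the authors' code:

```python
import numpy as np

# Undamped mass-spring oscillator, except that each pass through the
# neutral point x = 0 multiplies the speed by a restitution coefficient
# e_r < 1, mimicking the impact of adjacent fragments.
w0, e_r, dt = 2.0 * np.pi, 0.9, 1e-4
x, v = 1.0, 0.0
speeds = []                      # |v| recorded just after each impact
for _ in range(200000):          # 20 time units, ~20 oscillation periods
    prev_x = x
    v -= w0 ** 2 * x * dt        # semi-implicit Euler step
    x += v * dt
    if prev_x * x < 0.0:         # crossed the neutral point: impact
        v *= e_r
        speeds.append(abs(v))

# Successive impact speeds decay geometrically by e_r, so the averaged
# motion looks like a linearly damped (Kelvin-Voigt-like) oscillator with
# an equivalent damping ratio of roughly -ln(e_r) / pi.
zeta_eq = -np.log(e_r) / np.pi
```

    The geometric decay of `speeds` is the discrete analogue of the exponential envelope of a viscously damped oscillator, which is what justifies the Kelvin-Voigt representation at long times.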

  13. A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output

    PubMed Central

    Stevanovic, Stefan; Pervan, Boris

    2018-01-01

    We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS PLL linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable and robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
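
    The flavour of the proposed metric can be seen in a small Monte-Carlo sketch of the arctangent discriminator under AWGN (the C/N0 values and 1 ms coherent integration time below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(3)

def discriminator_std(cn0_dbhz, t_coh=0.001, n_trials=20000):
    """Monte-Carlo standard deviation (rad) of the arctangent
    discriminator output atan(Q/I) at zero true phase error."""
    cn0 = 10.0 ** (cn0_dbhz / 10.0)
    a = np.sqrt(2.0 * cn0 * t_coh)          # signal amplitude, unit-variance noise
    i_arm = a + rng.normal(size=n_trials)   # in-phase correlator output
    q_arm = rng.normal(size=n_trials)       # quadrature correlator output
    return float(np.arctan(q_arm / i_arm).std())

sigma_strong = discriminator_std(45.0)      # comfortable C/N0
sigma_weak = discriminator_std(15.0)        # heavily degraded C/N0
threshold = np.pi / 4.0                     # half the atan pull-in region
```

    A strong signal keeps the tracking-error standard deviation well below the π/4 threshold; a heavily degraded signal pushes it past the threshold, which is the regime the paper flags as loss of reliable tracking.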

  14. Application of linear regression analysis in accuracy assessment of rolling force calculations

    NASA Astrophysics Data System (ADS)

    Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.

    1998-10-01

    Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows separating systematic and random prediction errors from those related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application. However, the outlined approach can be used to assess the performance of any computational model.
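
    The separation of systematic from random prediction error described above amounts to regressing measured values on model predictions: a slope different from 1 or a nonzero intercept flags systematic error, while the residual scatter estimates the random part. A minimal sketch with invented rolling-force numbers:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical force data: the model under-predicts by 5% with a
# constant 50 kN offset, plus random measurement scatter.
predicted = rng.uniform(800.0, 2000.0, 200)                 # kN, model output
measured = 50.0 + 1.05 * predicted + rng.normal(0.0, 20.0, 200)

# Regress measured on predicted: slope != 1 or intercept != 0 indicates
# systematic model error; the residual std estimates the random part.
slope, intercept = np.polyfit(predicted, measured, 1)
residual_std = float((measured - (intercept + slope * predicted)).std())
```

    Recovering slope ≈ 1.05 and intercept ≈ 50 identifies the built-in systematic bias, while `residual_std` ≈ 20 kN quantifies the irreducible scatter.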

  15. Incorporating additional tree and environmental variables in a lodgepole pine stem profile model

    Treesearch

    John C. Byrne

    1993-01-01

    A new variable-form segmented stem profile model is developed for lodgepole pine (Pinus contorta) trees from the northern Rocky Mountains of the United States. I improved estimates of stem diameter by predicting two of the model coefficients with linear equations using a measure of tree form, defined as a ratio of dbh and total height. Additional improvements were...

  16. Aircraft adaptive learning control

    NASA Technical Reports Server (NTRS)

    Lee, P. S. T.; Vanlandingham, H. F.

    1979-01-01

    The optimal control theory of stochastic linear systems is discussed in terms of the advantages of distributed-control systems, and the control of randomly-sampled systems. An optimal solution to longitudinal control is derived and applied to the F-8 DFBW aircraft. A randomly-sampled linear process model with additive process and measurement noise is developed.

  17. Modeling the frequency of opposing left-turn conflicts at signalized intersections using generalized linear regression models.

    PubMed

    Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei

    2014-01-01

    The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. The opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios. The prediction performance of different models was compared. The frequency of traffic conflicts follows a negative binomial distribution. The linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions. Accordingly, the effects of conflicting traffic volumes on conflict frequency vary across different traffic conditions. The occurrences of traffic conflicts at signalized intersections can be modeled using generalized linear regression models. The use of conflict predictive models has potential to expand the uses of surrogate safety measures in safety estimation and evaluation.
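
    The finding above that conflict counts are negative-binomial rather than Poisson rests on overdispersion: the sample variance exceeds the mean. A minimal sketch with simulated counts (a gamma-mixed Poisson, which is exactly negative-binomial; all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical conflict counts: Poisson rates mixed over a gamma
# distribution, i.e. negative-binomial counts (variance exceeds mean).
rate = rng.gamma(shape=2.0, scale=3.0, size=5000)   # mean 6, variance 18
counts = rng.poisson(rate)

m, v = counts.mean(), counts.var()
# Method-of-moments dispersion for the NB2 model: var = m + alpha * m^2
alpha = (v - m) / m ** 2
```

    A clearly positive `alpha` is the diagnostic that motivates a negative binomial model over both the Poisson and the linear regression alternatives.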

  18. Computational Aeroelastic Analyses of a Low-Boom Supersonic Configuration

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Sanetrik, Mark D.; Chwalowski, Pawel; Connolly, Joseph

    2015-01-01

    An overview of NASA's Commercial Supersonic Technology (CST) Aeroservoelasticity (ASE) element is provided with a focus on recent computational aeroelastic analyses of a low-boom supersonic configuration developed by Lockheed-Martin and referred to as the N+2 configuration. The overview includes details of the computational models developed to date including a linear finite element model (FEM), linear unsteady aerodynamic models, unstructured CFD grids, and CFD-based aeroelastic analyses. In addition, a summary of the work involving the development of aeroelastic reduced-order models (ROMs) and the development of an aero-propulso-servo-elastic (APSE) model is provided.

  19. Integrated Control Modeling for Propulsion Systems Using NPSS

    NASA Technical Reports Server (NTRS)

    Parker, Khary I.; Felder, James L.; Lavelle, Thomas M.; Withrow, Colleen A.; Yu, Albert Y.; Lehmann, William V. A.

    2004-01-01

    The Numerical Propulsion System Simulation (NPSS), an advanced engineering simulation environment used to design and analyze aircraft engines, has been enhanced by integrating control development tools into it. One of these tools is a generic controller interface that allows NPSS to communicate with control development software environments such as MATLAB and EASY5. The other tool is a linear model generator (LMG) that gives NPSS the ability to generate linear, time-invariant state-space models. Integrating these tools into NPSS enables it to be used for control system development. This paper will discuss the development and integration of these tools into NPSS. In addition, it will show a comparison of transient model results of a generic, dual-spool, military-type engine model that has been implemented in NPSS and Simulink. It will also show the linear model generator's ability to approximate the dynamics of a nonlinear NPSS engine model.

  20. Evaluation of airborne lidar data to predict vegetation Presence/Absence

    USGS Publications Warehouse

    Palaseanu-Lovejoy, M.; Nayegandhi, A.; Brock, J.; Woodman, R.; Wright, C.W.

    2009-01-01

    This study evaluates the capabilities of the Experimental Advanced Airborne Research Lidar (EAARL) in delineating vegetation assemblages in Jean Lafitte National Park, Louisiana. Five-meter-resolution grids of bare earth, canopy height, canopy-reflection ratio, and height of median energy were derived from EAARL data acquired in September 2006. Ground-truth data were collected along transects to assess species composition, canopy cover, and ground cover. To decide which model is more accurate, comparisons of general linear models and generalized additive models were conducted using conventional evaluation methods (i.e., sensitivity, specificity, Kappa statistics, and area under the curve) and two new indexes, net reclassification improvement and integrated discrimination improvement. Generalized additive models were superior to general linear models in modeling presence/absence in training vegetation categories, but no statistically significant differences between the two models were achieved in determining the classification accuracy at validation locations using conventional evaluation methods, although statistically significant improvements in net reclassifications were observed. © 2009 Coastal Education and Research Foundation.
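
    The net reclassification improvement (NRI) index used above compares two models' predicted probabilities: it credits predictions that move up for events and down for non-events. A self-contained sketch of the continuous (category-free) NRI, with toy probabilities rather than the study's lidar-derived ones:

```python
import numpy as np

def nri(p_old, p_new, y):
    """Continuous net reclassification improvement of model `new` over
    model `old` for binary outcomes y (1 = event)."""
    y = np.asarray(y, dtype=bool)
    up = np.asarray(p_new) > np.asarray(p_old)
    down = np.asarray(p_new) < np.asarray(p_old)
    nri_events = up[y].mean() - down[y].mean()
    nri_nonevents = down[~y].mean() - up[~y].mean()
    return float(nri_events + nri_nonevents)

# Toy check: the "new" model moves every prediction toward the truth,
# so the NRI attains its maximum value of 2.
y = np.array([1, 1, 1, 0, 0, 0])
p_old = np.array([0.6, 0.5, 0.4, 0.5, 0.4, 0.3])
p_new = np.array([0.7, 0.6, 0.5, 0.4, 0.3, 0.2])
nri_value = nri(p_old, p_new, y)
```

    Because NRI looks at the direction of reclassification rather than a single ranking statistic, it can register improvements that AUC-based comparisons miss, which matches the study's finding.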

  1. Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Yousaf; Mittnik, Stefan

    2018-01-01

    In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies that typically consider threshold model specifications using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with external threshold variable specification produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
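
    The benchmark AR model and the out-of-sample protocol above can be sketched on synthetic data (an AR(1) series with known coefficient; not the Hindu Kush catalogue): fit on a training window, then issue one-step-ahead forecasts on the held-out tail.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stationary series with AR(1) structure, phi = 0.7.
n = 600
x = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + eps[t]
train, test = x[:500], x[500:]

# Least-squares AR(1) estimate on the training window.
phi = float(np.dot(train[:-1], train[1:]) / np.dot(train[:-1], train[:-1]))

# One-step-ahead out-of-sample forecasts, re-anchored on each observation.
history = np.concatenate([train[-1:], test[:-1]])
rmse_ar = float(np.sqrt(np.mean((test - phi * history) ** 2)))
rmse_mean = float(np.sqrt(np.mean(test ** 2)))   # unconditional-mean forecast
```

    Comparing `rmse_ar` against a naive benchmark is the same point-forecast comparison the study runs between the AR model and its nonlinear competitors.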

  2. Linear Friction Welding Process Model for Carpenter Custom 465 Precipitation-Hardened Martensitic Stainless Steel

    DTIC Science & Technology

    2014-04-11

    Fig. 9(a) and (b). In addition, the temperature dependencies of the true and room-temperature-based mean values of the linear thermal expansion ...Variation of (a) thermal conductivity, (b) specific heat, (c) true linear thermal expansion coefficient, and (d) room-temperature-based mean thermal ...defined as follows: (a) alloy-grade and thermal-mechanical treatment of the workpiece materials to be joined, (b) frequency of reciprocating motion

  3. Spatial Assessment of Model Errors from Four Regression Techniques

    Treesearch

    Lianjun Zhang; Jeffrey H. Gove; Jeffrey H. Gove

    2005-01-01

    Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...

  4. Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing

    PubMed Central

    Yang, Changju; Kim, Hyongsuk

    2016-01-01

    A linearized programming method of memristor-based neural weights is proposed. The memristor is known as an ideal element to implement a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation with a voltage input is generally a nonlinear function of time. Linearization of the memristance variation over time is very important for the ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities. It linearizes the variation of memristance due to the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming of an anti-serial architecture is investigated, and the memristor bridge synapse, which is built with two sets of the anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model. PMID:27548186

  5. Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing.

    PubMed

    Yang, Changju; Kim, Hyongsuk

    2016-08-19

    A linearized programming method of memristor-based neural weights is proposed. The memristor is known as an ideal element to implement a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation with a voltage input is generally a nonlinear function of time. Linearization of the memristance variation over time is very important for the ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities. It linearizes the variation of memristance due to the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming of an anti-serial architecture is investigated, and the memristor bridge synapse, which is built with two sets of the anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model.

  6. General linear methods and friends: Toward efficient solutions of multiphysics problems

    NASA Astrophysics Data System (ADS)

    Sandu, Adrian

    2017-07-01

    Time dependent multiphysics partial differential equations are of great practical importance as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta Methods.

  7. Optimization of the time series NDVI-rainfall relationship using linear mixed-effects modeling for the anti-desertification area in the Beijing and Tianjin sandstorm source region

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie

    2018-05-01

    Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM) and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling and single-level modeling of soil units and sample points, respectively. Additionally, three functions, including the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heterogeneity, and three correlation structures, including the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. It was concluded that the nested two-level model considering both heteroscedasticity with CPP and spatiotemporal correlation with ARMA(1,1) showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R^2 = 0.9593). Variations between soil units and sample points that may have an effect on the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.

  8. Controller design approach based on linear programming.

    PubMed

    Tanaka, Ryo; Shibasaki, Hiroki; Ogawa, Hiromitsu; Murakami, Takahiro; Ishida, Yoshihisa

    2013-11-01

    This study explains and demonstrates the design method for a control system with a load disturbance observer. Observer gains are determined by linear programming (LP) in terms of the Routh-Hurwitz stability criterion and the final-value theorem. In addition, the control model has a feedback structure, and feedback gains are determined to be the linear quadratic regulator. The simulation results confirmed that compared with the conventional method, the output estimated by our proposed method converges to a reference input faster when a load disturbance is added to a control system. In addition, we also confirmed the effectiveness of the proposed method by performing an experiment with a DC motor. © 2013 ISA. Published by ISA. All rights reserved.

  9. Dynamical heterogeneities and mechanical non-linearities: Modeling the onset of plasticity in polymer in the glass transition.

    PubMed

    Masurel, R J; Gelineau, P; Lequeux, F; Cantournet, S; Montes, H

    2017-12-27

    In this paper we focus on the role of dynamical heterogeneities in the non-linear response of polymers in the glass transition domain. We start from a simple coarse-grained model that assumes a random distribution of the initial local relaxation times and that quantitatively describes the linear viscoelasticity of a polymer in the glass transition regime. We extend this model to non-linear mechanics assuming a local Eyring stress dependence of the relaxation times. Implementing the model in a finite element mechanics code, we derive the mechanical properties and the local mechanical fields at the beginning of the non-linear regime. The model predicts a narrowing of the distribution of relaxation times and the storage of a part of the mechanical energy (internal stress) transferred to the material during stretching in this temperature range. We show that the stress field is not spatially correlated under and after loading and follows a Gaussian distribution. In addition, the strain field exhibits shear bands, but the strain distribution is narrow. Hence, most of the mechanical quantities can be calculated analytically, in a very good approximation, with the simple assumption that the strain rate is constant.

  10. Non-Linear Concentration-Response Relationships between Ambient Ozone and Daily Mortality.

    PubMed

    Bae, Sanghyuk; Lim, Youn-Hee; Kashima, Saori; Yorifuji, Takashi; Honda, Yasushi; Kim, Ho; Hong, Yun-Chul

    2015-01-01

    Ambient ozone (O3) concentration has been reported to be significantly associated with mortality. However, the linearity of the relationships and the presence of a threshold have been controversial. The aim of the present study was to examine the concentration-response relationship and threshold of the association between ambient O3 concentration and non-accidental mortality in 13 Japanese and Korean cities from 2000 to 2009. We selected Japanese and Korean cities which have populations of over 1 million. We constructed Poisson regression models adjusting for daily mean temperature, daily mean PM10, humidity, time trend, season, year, day of the week, holidays and yearly population. The association between O3 concentration and mortality was examined using linear, spline and linear-threshold models. The thresholds were estimated for each city by constructing linear-threshold models. We also examined the city-combined association using a generalized additive mixed model. The mean O3 concentration did not differ greatly between Korea and Japan, at 26.2 ppb and 24.2 ppb, respectively. Seven out of 13 cities showed better fits for the spline model compared with the linear model, supporting a non-linear relationship between O3 concentration and mortality. All of the 7 cities showed J- or U-shaped associations suggesting the existence of thresholds. The range of city-specific thresholds was from 11 to 34 ppb. The city-combined analysis also showed a non-linear association with a threshold around 30-40 ppb. We observed a non-linear concentration-response relationship with thresholds between daily mean ambient O3 concentration and the daily number of non-accidental deaths in Japanese and Korean cities.
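
    A linear-threshold (hockey-stick) model of the kind estimated per city above can be fitted by profiling the threshold: for each candidate threshold, fit an ordinary least-squares line in max(x - c, 0) and keep the threshold with the smallest residual sum of squares. The exposure-response data below are synthetic, with a built-in threshold of 30:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical J-shaped exposure-response: flat below a true threshold
# of 30 (arbitrary units), linear above it.
x = rng.uniform(0.0, 80.0, 400)
y = 1.0 + 0.05 * np.clip(x - 30.0, 0.0, None) + rng.normal(0.0, 0.05, 400)

def fit_threshold(x, y, grid):
    """Fit y = a + b * max(x - c, 0) by least squares for each candidate
    threshold c in grid; return the c (and coefficients) minimising RSS."""
    best = None
    for c in grid:
        design = np.column_stack([np.ones_like(x), np.clip(x - c, 0.0, None)])
        coef, resid, *_ = np.linalg.lstsq(design, y, rcond=None)
        rss = float(resid[0]) if resid.size else float(np.sum((y - design @ coef) ** 2))
        if best is None or rss < best[0]:
            best = (rss, float(c), coef)
    return best

rss, c_hat, coef = fit_threshold(x, y, np.arange(5.0, 70.0, 1.0))
```

    Profiling the threshold this way recovers both the breakpoint and the above-threshold slope, the two quantities the study reports per city.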

  11. Modulation of Additive and Interactive Effects in Lexical Decision by Trial History

    ERIC Educational Resources Information Center

    Masson, Michael E. J.; Kliegl, Reinhold

    2013-01-01

    Additive and interactive effects of word frequency, stimulus quality, and semantic priming have been used to test theoretical claims about the cognitive architecture of word-reading processes. Additive effects among these factors have been taken as evidence for discrete-stage models of word reading. We present evidence from linear mixed-model…

  12. Lattice Boltzmann methods for global linear instability analysis

    NASA Astrophysics Data System (ADS)

    Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis

    2017-12-01

    Modal global linear instability analysis is performed using, for the first time ever, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time (SRT) and have been proposed previously in the literature as linearizations of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flows and flow in the wake of the circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point to potential limitations particular to the LBM approach. The known issue of the appearance of numerical instabilities when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvement to make the proposed methodology competitive with established approaches for global instability analysis are discussed.

  13. Estimating PM2.5 Concentrations in Xi'an City Using a Generalized Additive Model with Multi-Source Monitoring Data

    PubMed Central

    Song, Yong-Ze; Yang, Hong-Lei; Peng, Jun-Huan; Song, Yi-Rong; Sun, Qian; Li, Yuan

    2015-01-01

    Particulate matter with an aerodynamic diameter <2.5 μm (PM2.5) represents a severe environmental problem and has a negative impact on human health. Xi'an City, with a population of 6.5 million, has among the highest PM2.5 concentrations in China. In 2013, there were in total 191 days in Xi'an City on which PM2.5 concentrations were greater than 100 μg/m3. Recently, a few studies have explored the potential causes of high PM2.5 concentrations using remote sensing data such as the MODIS aerosol optical thickness (AOT) product. Linear regression is a commonly used method to find statistical relationships between PM2.5 concentrations and other pollutants, including CO, NO2, SO2, and O3, which can be indicative of emission sources. The relationships among these variables, however, are usually complicated and non-linear. Therefore, a generalized additive model (GAM) is used to estimate the statistical relationships between potential variables and PM2.5 concentrations. This model contains linear functions of SO2 and CO, univariate smoothing non-linear functions of NO2, O3, AOT and temperature, and bivariate smoothing non-linear functions of location and wind variables. The model can explain 69.50% of PM2.5 concentrations, with R2 = 0.691, which improves on the result of a stepwise linear regression (R2 = 0.582) by 18.73%. The two most significant variables, CO concentration and AOT, represent 20.65% and 19.54% of the deviance, respectively, while the three other gas-phase concentrations, SO2, NO2, and O3, account for 10.88% of the total deviance. These results show that in Xi'an City, traffic and other industrial emissions are the primary sources of PM2.5. Temperature, location, and wind variables are also non-linearly related to PM2.5. PMID:26540446
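
    The additive structure behind a GAM (linear terms plus smooth non-linear terms) can be illustrated with a minimal backfitting loop. The variables and the crude binned smoother below are toy stand-ins, not the paper's spline-based model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for two smooth predictors (think NO2, temperature)
# and one linear predictor (think CO); the paper's GAM is far richer.
n = 3000
x1, x2, x3 = rng.uniform(-1, 1, (3, n))
y = np.sin(3 * x1) + x2 ** 2 + 0.5 * x3 + rng.normal(0, 0.1, n)

def smooth(x, r, bins=30):
    """Crude smoother: bin x and average the partial residual r per bin."""
    edges = np.linspace(x.min(), x.max(), bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    means = np.array([r[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(bins)])
    return means[idx]

# Backfitting: cycle through the components, each fitted to the residual
# of the others; x3 enters linearly, x1 and x2 through the smoother.
f1 = f2 = f3 = np.zeros(n)
alpha = y.mean()
for _ in range(20):
    f1 = smooth(x1, y - alpha - f2 - f3); f1 -= f1.mean()
    f2 = smooth(x2, y - alpha - f1 - f3); f2 -= f2.mean()
    c = np.dot(x3, y - alpha - f1 - f2) / np.dot(x3, x3)
    f3 = c * x3

resid = y - alpha - f1 - f2 - f3
r2 = 1 - resid.var() / y.var()   # additive fit explains most of the variance
```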

  14. Circuit models and three-dimensional electromagnetic simulations of a 1-MA linear transformer driver stage

    NASA Astrophysics Data System (ADS)

    Rose, D. V.; Miller, C. L.; Welch, D. R.; Clark, R. E.; Madrid, E. A.; Mostrom, C. B.; Stygar, W. A.; Lechien, K. R.; Mazarakis, M. A.; Langston, W. L.; Porter, J. L.; Woodworth, J. R.

    2010-09-01

    A 3D fully electromagnetic (EM) model of the principal pulsed-power components of a high-current linear transformer driver (LTD) has been developed. LTD systems are a relatively new modular and compact pulsed-power technology based on high-energy density capacitors and low-inductance switches located within a linear-induction cavity. We model 1-MA, 100-kV, 100-ns rise-time LTD cavities [A. A. Kim et al., Phys. Rev. ST Accel. Beams 12, 050402 (2009)] which can be used to drive z-pinch and material dynamics experiments. The model simulates the generation and propagation of electromagnetic power from individual capacitors and triggered gas switches to a radially symmetric output line. Multiple cavities, combined to provide voltage addition, drive a water-filled coaxial transmission line. A 3D fully EM model of a single 1-MA 100-kV LTD cavity driving a simple resistive load is presented and compared to electrical measurements. A new model of the current loss through the ferromagnetic cores is developed for use both in circuit representations of an LTD cavity and in the 3D EM simulations. Good agreement between the measured core current, a simple circuit model, and the 3D simulation model is obtained. A 3D EM model of an idealized ten-cavity LTD accelerator is also developed. The model results demonstrate efficient voltage addition when driving a matched impedance load, in good agreement with an idealized circuit model.

  15. Predonation Volume of Future Remnant Cortical Kidney Helps Predict Postdonation Renal Function in Live Kidney Donors.

    PubMed

    Fananapazir, Ghaneh; Benzl, Robert; Corwin, Michael T; Chen, Ling-Xin; Sageshima, Junichiro; Stewart, Susan L; Troppmann, Christoph

    2018-07-01

    Purpose To determine whether the predonation computed tomography (CT)-based volume of the future remnant kidney is predictive of postdonation renal function in living kidney donors. Materials and Methods This institutional review board-approved, retrospective, HIPAA-compliant study included 126 live kidney donors who had undergone predonation renal CT between January 2007 and December 2014 as well as 2-year postdonation measurement of estimated glomerular filtration rate (eGFR). The whole kidney volume and cortical volume of the future remnant kidney were measured and standardized for body surface area (BSA). Bivariate linear associations between the ratios of whole kidney volume to BSA and cortical volume to BSA were obtained. A linear regression model for 2-year postdonation eGFR that incorporated donor age, sex, and either whole kidney volume-to-BSA ratio or cortical volume-to-BSA ratio was created, and the coefficient of determination (R2) for the model was calculated. Factors not statistically additive in assessing 2-year eGFR were removed by using backward elimination, and the coefficient of determination for this parsimonious model was calculated. Results Correlation was slightly better for cortical volume-to-BSA ratio than for whole kidney volume-to-BSA ratio (r = 0.48 vs r = 0.44, respectively). The linear regression model incorporating all donor factors had an R2 of 0.66. The only factors that were significantly additive to the equation were cortical volume-to-BSA ratio and predonation eGFR (P = .01 and P < .01, respectively), and the final parsimonious linear regression model incorporating these two variables explained almost the same amount of variance (R2 = 0.65) as did the full model. Conclusion The cortical volume of the future remnant kidney helped predict postdonation eGFR at 2 years. The cortical volume-to-BSA ratio should thus be considered for addition as an important variable to living kidney donor evaluation and selection guidelines.
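
    The modelling step (a full linear model versus a parsimonious one that keeps only the informative predictors) can be sketched on synthetic data. The variable names, coefficients, and sample values below are invented for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented donors: 2-year eGFR driven mainly by predonation eGFR and the
# cortical-volume/BSA ratio; age and sex add essentially nothing.
n = 126
age = rng.uniform(20, 65, n)
sex = rng.integers(0, 2, n).astype(float)
cortical_bsa = rng.normal(100, 15, n)
egfr_pre = rng.normal(95, 12, n)
egfr_2y = 0.45 * egfr_pre + 0.15 * cortical_bsa + rng.normal(0, 6, n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_full = r_squared(np.column_stack([age, sex, cortical_bsa, egfr_pre]), egfr_2y)
r2_parsimonious = r_squared(np.column_stack([cortical_bsa, egfr_pre]), egfr_2y)
# Dropping the uninformative predictors barely changes the explained variance.
```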
© RSNA, 2018.

  16. Generation of High Frequency Response in a Dynamically Loaded, Nonlinear Soil Column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, Robert Edward; Coleman, Justin Leigh

    2015-08-01

    Detailed guidance on linear seismic analysis of soil columns is provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998),” which is currently under revision. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain analysis which includes evaluation of soil columns. When performing linear analysis, a given soil column is typically evaluated with a linear, viscous damped constitutive model. When submitted to a sine wave motion, this constitutive model produces a smooth hysteresis loop. For nonlinear analysis, the soil column can be modelled with an appropriate nonlinear hysteretic soil model. For the model in this paper, the stiffness and energy absorption result from a defined post yielding shear stress versus shear strain curve. This curve is input with tabular data points. When submitted to a sine wave motion, this constitutive model produces a hysteresis loop that looks similar in shape to the input tabular data points on the sides with discontinuous, pointed ends. This paper compares linear and nonlinear soil column results. The results show that the nonlinear analysis produces additional high frequency response. The paper provides additional study to establish what portion of the high frequency response is due to numerical noise associated with the tabular input curve and what portion is accurately caused by the pointed ends of the hysteresis loop. Finally, the paper shows how the results are changed when a significant structural mass is added to the top of the soil column.

  17. A SIGNIFICANCE TEST FOR THE LASSO

    PubMed Central

    Lockhart, Richard; Taylor, Jonathan; Tibshirani, Ryan J.; Tibshirani, Robert

    2014-01-01

    In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables. Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a χ₁² distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than χ₁² under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the l1 penalty. Therefore, the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties—adaptivity and shrinkage—and its null distribution is tractable and asymptotically Exp(1). PMID:25574062
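
    The point about adaptive selection is easy to check by simulation: under the global null, the drop in RSS from adding a prespecified predictor behaves like a χ₁² variable (mean about 1), while the drop from adding the greedily chosen best predictor is stochastically much larger. A minimal sketch (this illustrates the inflation, not the covariance test itself):

```python
import numpy as np

rng = np.random.default_rng(4)

n, p, reps = 100, 10, 500
fixed_drops, adaptive_drops = [], []
for _ in range(reps):
    X = rng.normal(size=(n, p))
    X -= X.mean(axis=0)                 # center the predictors
    y = rng.normal(size=n)              # global null: y is pure noise
    yc = y - y.mean()
    # Drop in RSS from adding predictor j to the intercept-only model:
    drops = (X.T @ yc) ** 2 / (X ** 2).sum(axis=0)
    fixed_drops.append(drops[0])        # a prespecified variable
    adaptive_drops.append(drops.max())  # the adaptively chosen variable

mean_fixed = np.mean(fixed_drops)       # close to E[chi2_1] = 1
mean_adaptive = np.mean(adaptive_drops) # clearly inflated above 1
```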

  18. A comparative robustness evaluation of feedforward neurofilters

    NASA Technical Reports Server (NTRS)

    Troudet, Terry; Merrill, Walter

    1993-01-01

    A comparative performance and robustness analysis is provided for feedforward neurofilters trained with back propagation to filter additive white noise. The signals used in this analysis are simulated pitch rate responses to typical pilot command inputs for a modern fighter aircraft model. Various configurations of nonlinear and linear neurofilters are trained to estimate exact signal values from input sequences of noisy sampled signal values. In this application, nonlinear neurofiltering is found to be more efficient than linear neurofiltering in removing the noise from responses of the nominal vehicle model, whereas linear neurofiltering is found to be more robust in the presence of changes in the vehicle dynamics. The possibility of enhancing neurofiltering through hybrid architectures based on linear and nonlinear neuroprocessing is therefore suggested as a way of taking advantage of the robustness of linear neurofiltering, while maintaining the nominal performance advantage of nonlinear neurofiltering.

  19. Does Nonlinear Modeling Play a Role in Plasmid Bioprocess Monitoring Using Fourier Transform Infrared Spectra?

    PubMed

    Lopes, Marta B; Calado, Cecília R C; Figueiredo, Mário A T; Bioucas-Dias, José M

    2017-06-01

    The monitoring of biopharmaceutical products using Fourier transform infrared (FT-IR) spectroscopy relies on calibration techniques involving the acquisition of spectra of bioprocess samples along the process. The most commonly used method for this purpose is partial least squares (PLS) regression, under the assumption that a linear model is valid. Despite being successful in the presence of small nonlinearities, linear methods may fail in the presence of strong nonlinearities. This paper studies the potential usefulness of nonlinear regression methods for predicting, from in situ near-infrared (NIR) and mid-infrared (MIR) spectra acquired in high-throughput mode, biomass and plasmid concentrations in Escherichia coli DH5-α cultures producing the model plasmid pVAX-LacZ. The linear methods PLS and ridge regression (RR) are compared with their kernel (nonlinear) versions, kPLS and kRR, as well as with the (also nonlinear) relevance vector machine (RVM) and Gaussian process regression (GPR). For the systems studied, RR provided better predictive performance than the remaining methods. Moreover, the results point to further investigation based on larger data sets whenever differences in predictive accuracy between a linear method and its kernelized version cannot be established. The use of nonlinear methods, however, must be weighed against the additional computational cost of tuning their extra parameters, especially when the less computationally demanding linear methods studied here are able to monitor the variables of interest successfully.
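
    The linear-versus-kernel comparison can be sketched with closed-form ridge regression (RR) and RBF-kernel ridge regression (kRR) on a toy non-linear problem. The data and hyperparameters below are invented, not the paper's FT-IR calibration sets; the point is only that kRR can capture a non-linear response that RR cannot.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in: a non-linear response of a few latent variables; the real
# task regresses biomass/plasmid concentration on NIR/MIR spectra.
n, d = 300, 3
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)
Xtr, Xte, ytr, yte = X[:200], X[200:], y[:200], y[200:]

def rr_predict(Xtr, ytr, Xte, lam=1.0):
    """Linear ridge regression (RR), closed form."""
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1]), Xtr.T @ ytr)
    return Xte @ w

def krr_predict(Xtr, ytr, Xte, lam=0.1, gamma=0.2):
    """Kernel ridge regression (kRR) with an RBF kernel, in the dual."""
    def K(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    alpha = np.linalg.solve(K(Xtr, Xtr) + lam * np.eye(len(ytr)), ytr)
    return K(Xte, Xtr) @ alpha

mse_rr = np.mean((yte - rr_predict(Xtr, ytr, Xte)) ** 2)
mse_krr = np.mean((yte - krr_predict(Xtr, ytr, Xte)) ** 2)
# Here the kernel version wins; on the paper's data the linear RR did.
```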

  20. Preprocessing Inconsistent Linear System for a Meaningful Least Squares Solution

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions made to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent as to render it unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum-norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine why unacceptable contradiction has occurred, thus prompting us to make the necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
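
    The spirit of steps (ii)-(iv) can be sketched with a crude stand-in: iteratively drop the equation with the largest absolute residual (a naive inconsistency index) and refit, then return the least-squares solution of the reduced system. This is an illustration, not the authors' Matlab algorithm, and it omits their redundancy-pruning step (i).

```python
import numpy as np

def prune_and_solve(A, b, tol=1e-8):
    """Iteratively remove the most contradictory equation (largest absolute
    residual) until the remaining system is nearly consistent, then return
    the least-squares solution of the reduced system."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    while True:
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        r = np.abs(A @ x - b)
        if r.max() < tol or len(b) <= A.shape[1]:
            return x, A, b
        worst = int(np.argmax(r))
        A = np.delete(A, worst, axis=0)
        b = np.delete(b, worst)

# Rows 1-3 are mutually consistent (x + y = 2, x - y = 0, and a redundant
# copy of the first); row 4 contradicts them strongly (x + y = 10).
A = np.array([[1.0, 1.0], [1.0, -1.0], [1.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 0.0, 2.0, 10.0])
x_pruned, A_kept, b_kept = prune_and_solve(A, b)
# The contradictory fourth equation is dropped and the solution is (1, 1).
```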

  2. Linear summation of outputs in a balanced network model of motor cortex.

    PubMed

    Capaday, Charles; van Vreeswijk, Carl

    2015-01-01

    Given the non-linearities of the neural circuitry's elements, we would expect cortical circuits to respond non-linearly when activated. Surprisingly, when two points in the motor cortex are activated simultaneously, the EMG responses are the linear sum of the responses evoked by each of the points activated separately. Additionally, the corticospinal transfer function is close to linear, implying that the synaptic interactions in motor cortex must be effectively linear. To account for this, here we develop a model of motor cortex composed of multiple interconnected points, each comprised of reciprocally connected excitatory and inhibitory neurons. We show how non-linearities in neuronal transfer functions are eschewed by strong synaptic interactions within each point. Consequently, the simultaneous activation of multiple points results in a linear summation of their respective outputs. We also consider the effects of reduction of inhibition at a cortical point when one or more surrounding points are active. The network response in this condition is linear over an approximately two- to three-fold decrease of inhibitory feedback strength. This result supports the idea that focal disinhibition allows linear coupling of motor cortical points to generate movement related muscle activation patterns; albeit with a limitation on gain control. The model also explains why neural activity does not spread as far out as the axonal connectivity allows, whilst also explaining why distant cortical points can be, nonetheless, functionally coupled by focal disinhibition. Finally, we discuss the advantages that linear interactions at the cortical level afford to motor command synthesis.

  3. Bistable energy harvesting enhancement with an auxiliary linear oscillator

    NASA Astrophysics Data System (ADS)

    Harne, R. L.; Thota, M.; Wang, K. W.

    2013-12-01

    Recent work has indicated that linear vibrational energy harvesters with an appended degree-of-freedom (DOF) may be advantageous for introducing new dynamic forms to extend the operational bandwidth. Given the additional interest in bistable harvester designs, which exhibit a propitious snap through effect from one stable state to the other, it is a logical extension to explore the influence of an added DOF to a bistable system. However, bistable snap through is not a resonant phenomenon, which tempers the presumption that the dynamics induced by an additional DOF on bistable designs would inherently be beneficial as for linear systems. This paper presents two analytical formulations to assess the fundamental and superharmonic steady-state dynamics of an excited bistable energy harvester to which is attached an auxiliary linear oscillator. From an energy harvesting perspective, the model predicts that the additional linear DOF uniformly amplifies the bistable harvester response magnitude and generated power for excitation frequencies less than the attachment’s resonance while improved power density spans a bandwidth below this frequency. Analyses predict bandwidths having co-existent responses composed of a unique proportion of fundamental and superharmonic dynamics. Experiments validate key analytical predictions and observe the ability for the coupled system to develop an advantageous multi-harmonic interwell response when the initial conditions are insufficient for continuous high-energy orbit at the excitation frequency. Overall, the addition of an auxiliary linear oscillator to a bistable harvester is found to be an effective means of enhancing the energy harvesting performance and robustness.

  4. Modeling the cardiovascular system using a nonlinear additive autoregressive model with exogenous input

    NASA Astrophysics Data System (ADS)

    Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2008-07-01

    The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametric fitted nonlinear additive autoregressive models with external inputs. Therefore, we consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinear controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points at a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts that could lead to a stratification of hypertension risk in OSAS patients.

  5. Regression Analysis of Top of Descent Location for Idle-thrust Descents

    NASA Technical Reports Server (NTRS)

    Stell, Laurel; Bronsvoort, Jesper; McDonald, Greg

    2013-01-01

    In this paper, multiple regression analysis is used to model the top of descent (TOD) location of user-preferred descent trajectories computed by the flight management system (FMS) on over 1000 commercial flights into Melbourne, Australia. The independent variables cruise altitude, final altitude, cruise Mach, descent speed, wind, and engine type were also recorded or computed post-operations. Both first-order and second-order models are considered, where cross-validation, hypothesis testing, and additional analysis are used to compare models. This identifies the models that should give the smallest errors if used to predict TOD location for new data in the future. A model that is linear in TOD altitude, final altitude, descent speed, and wind gives an estimated standard deviation of 3.9 nmi for TOD location given the trajectory parameters, which means about 80% of predictions would have error less than 5 nmi in absolute value. This accuracy is better than demonstrated by other ground automation predictions using kinetic models. Furthermore, this approach would enable online learning of the model. Additional data or further knowledge of algorithms is necessary to conclude definitively that no second-order terms are appropriate. Possible applications of the linear model are described, including enabling arriving aircraft to fly optimized descents computed by the FMS even in congested airspace. In particular, a model for TOD location that is linear in the independent variables would enable decision support tool human-machine interfaces for which a kinetic approach would be computationally too slow.
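
    The two quoted accuracy figures are mutually consistent: for roughly Gaussian prediction errors with the reported standard deviation of 3.9 nmi, the share of predictions within 5 nmi in absolute value is about 80%.

```python
import math

sigma = 3.9  # reported std. dev. of the TOD-location prediction error, nmi
# P(|error| < 5 nmi) for a zero-mean Gaussian error with this sigma
frac_within_5 = math.erf(5.0 / (sigma * math.sqrt(2.0)))
```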

  6. Double elementary Goldstone Higgs boson production in future linear colliders

    NASA Astrophysics Data System (ADS)

    Guo, Yu-Chen; Yue, Chong-Xing; Liu, Zhi-Cheng

    2018-03-01

    The Elementary Goldstone Higgs (EGH) model is a perturbative extension of the Standard Model (SM), which identifies the EGH boson as the observed Higgs boson. In this paper, we study pair production of the EGH boson in future linear electron positron colliders. The cross-sections in the TeV region can be changed to about ‑27%, 163% and ‑34% for the e+e‑→ Zhh, e+e‑→ νν¯hh and e+e‑→ tt¯hh processes with respect to the SM predictions, respectively. According to the expected measurement precisions, such correction effects might be observed in future linear colliders. In addition, we compare the cross-sections of double SM-like Higgs boson production with the predictions in other new physics models.

  7. Quantifying uncertainty and sensitivity in sea ice models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
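
    First-order variance-based sensitivity indices of the kind used here can be sketched with plain Monte Carlo and binning. The toy parameter-to-output map below (with a non-linear term and a non-additive interaction) is an invented stand-in for the sea ice model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model with a non-linear term (x1^2) and a non-additive interaction
# (x0 * x2); the three inputs are uniform on [-1, 1].
def model(p):
    return p[:, 0] + 0.5 * p[:, 1] ** 2 + p[:, 0] * p[:, 2]

n, k = 100_000, 3
P = rng.uniform(-1, 1, (n, k))
Y = model(P)

def first_order_index(x, y, bins=50):
    """S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning on X_i."""
    edges = np.linspace(-1, 1, bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    return np.average((cond_means - y.mean()) ** 2, weights=counts) / y.var()

S = [first_order_index(P[:, i], Y) for i in range(k)]
# The indices do not sum to 1: the interaction's variance share is carried
# by no single input, which is what "non-additive effects" means here.
```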

  8. Nonclassical point of view of the Brownian motion generation via fractional deterministic model

    NASA Astrophysics Data System (ADS)

    Gilardi-Velázquez, H. E.; Campos-Cantón, E.

    In this paper, we present a dynamical system based on the Langevin equation without a stochastic term and using fractional derivatives that exhibits properties of Brownian motion, i.e. a deterministic model to generate Brownian motion is proposed. The stochastic process is replaced by considering an additional degree of freedom in the second-order Langevin equation. Thus, it is transformed into a system of three first-order linear differential equations; additionally, α-fractional derivatives are considered, which allow us to obtain better statistical properties. Switching surfaces are established as part of the fluctuating acceleration. The final system of three α-order linear differential equations does not contain a stochastic term, so the system generates motion in a deterministic way. Nevertheless, from the time series analysis, we found that the behavior of the system exhibits statistical properties of Brownian motion, such as linear growth in time of the mean square displacement and a Gaussian distribution. Furthermore, we use detrended fluctuation analysis to prove the Brownian character of this motion.

  9. Mixed models, linear dependency, and identification in age-period-cohort models.

    PubMed

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
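
    The linear dependency at the heart of the identification problem is easy to exhibit: cohort = period - age, so a fixed-effects design matrix containing all three linear terms loses one rank. This is a schematic with invented ages and periods, not the paper's categorical parameterization.

```python
import numpy as np

# Every cell of an age-by-period table determines its cohort as
# period - age, so the three linear terms are exactly collinear.
ages = np.arange(20, 60, 5)
periods = np.arange(1990, 2020, 5)
rows = [(a, p, p - a) for a in ages for p in periods]  # (age, period, cohort)
X = np.column_stack([np.ones(len(rows)), np.array(rows, dtype=float)])

rank = np.linalg.matrix_rank(X)  # only 3 columns' worth of information in 4
```

    Because the rank deficit is exactly one, a single just-identifying constraint restores estimability, which is the situation the paper analyzes.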

  10. Three-dimensional earthquake analysis of roller-compacted concrete dams

    NASA Astrophysics Data System (ADS)

    Kartal, M. E.

    2012-07-01

    The effect of ground motion on roller-compacted concrete (RCC) dams in earthquake zones should be taken into account for the most critical conditions. This study presents the three-dimensional earthquake response of an RCC dam considering geometrical non-linearity. Besides, material and connection non-linearity are also taken into consideration in the time-history analyses. Bilinear and multilinear kinematic hardening material models are utilized in the materially non-linear analyses for concrete and foundation rock, respectively. The contraction joints inside the dam blocks and the dam-foundation-reservoir interaction are modeled by contact elements. The hydrostatic and hydrodynamic pressures of the reservoir water are modeled with fluid finite elements based on the Lagrangian approach. The gravity and hydrostatic pressure effects are employed as initial conditions before the strong ground motion. In the earthquake analyses, viscous dampers are defined in the finite element model to represent infinite boundary conditions. According to the numerical solutions, horizontal displacements increase under hydrodynamic pressure, and they also increase in the materially non-linear analyses of the dam. In addition, while the principal stress components increase under the hydrodynamic pressure effect of the reservoir water, they decrease in the materially non-linear time-history analyses.

  11. Understanding climate impacts on vegetation using a spatiotemporal non-linear Granger causality framework

    NASA Astrophysics Data System (ADS)

    Papagiannopoulou, Christina; Decubber, Stijn; Miralles, Diego; Demuzere, Matthias; Dorigo, Wouter; Verhoest, Niko; Waegeman, Willem

    2017-04-01

    Satellite data provide an abundance of information about crucial climatic and environmental variables. These data - consisting of global records, spanning up to 35 years and having the form of multivariate time series with different spatial and temporal resolutions - enable the study of key climate-vegetation interactions. Although methods based on correlations and linear models are typically used for this purpose, their assumption of linearity in the climate-vegetation relationships is too simplistic. Therefore, we adopt a recently proposed non-linear Granger causality analysis [1], in which we incorporate spatial information by concatenating data from neighboring pixels and training a joint model on the combined data. Experimental results based on global data sets show that considering non-linear relationships leads to a higher explained variance of past vegetation dynamics than simple linear models. Our approach consists of several steps. First, we compile an extensive database [1], which includes multiple data sets for land surface temperature, near-surface air temperature, surface radiation, precipitation, snow water equivalents and surface soil moisture. Based on this database, high-level features are constructed and considered as predictors in our machine-learning framework. These high-level features include (de-trended) seasonal anomalies, lagged variables, past cumulative variables, and extreme indices, all calculated from the raw climatic data. Second, we apply a spatiotemporal non-linear Granger causality framework - in which the linear predictive model is replaced by a non-linear machine-learning algorithm - in order to assess which of these predictor variables Granger-cause vegetation dynamics at each 1° pixel. We use the de-trended anomalies of the Normalized Difference Vegetation Index (NDVI), the target variable of our framework, to characterize vegetation.
Experimental results indicate that climate strongly (Granger-)causes vegetation dynamics in most regions globally. More specifically, water availability is the most dominant driver, controlling vegetation dynamics in 54% of the vegetated surface. Furthermore, our results show that precipitation and soil moisture have prolonged impacts on vegetation in semiarid regions, explaining up to 10% of additional variance in vegetation dynamics three months later. Finally, hydro-climatic extremes seem to have a remarkable impact on vegetation, since they also explain up to 10% of additional variance of vegetation in certain regions despite their infrequent occurrence. References [1] Papagiannopoulou, C., Miralles, D. G., Verhoest, N. E. C., Dorigo, W. A., and Waegeman, W.: A non-linear Granger causality framework to investigate climate-vegetation dynamics, Geosci. Model Dev. Discuss., doi:10.5194/gmd-2016-266, in review, 2016.
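    The Granger-causality logic described above can be sketched with synthetic data: a climate driver "Granger-causes" the target if adding (non-linearly transformed) lags of the driver to a baseline autoregressive model raises the out-of-sample explained variance. The tanh feature below stands in for the paper's non-linear machine-learning regressor; all data and coefficients here are made up for illustration.

```python
import numpy as np

# Synthetic vegetation series that responds non-linearly to last step's climate.
rng = np.random.default_rng(1)
n = 500
climate = rng.normal(size=n)
veg = np.zeros(n)
for t in range(1, n):
    veg[t] = 0.5 * veg[t - 1] + 0.8 * np.tanh(climate[t - 1]) + 0.1 * rng.normal()

def r2_out_of_sample(X, y, split=350):
    """Fit on the first part of the record, score on the held-out tail."""
    Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]
    beta, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    resid = yte - Xte @ beta
    return 1 - resid.var() / yte.var()

y = veg[2:]
lag_veg = veg[1:-1]
lag_clim = climate[1:-1]
ones = np.ones_like(y)

baseline = np.column_stack([ones, lag_veg])                     # vegetation lags only
full = np.column_stack([ones, lag_veg, lag_clim,                # add climate lags,
                        np.tanh(lag_clim)])                     # incl. non-linear feature

# Positive gain in explained variance -> climate Granger-causes vegetation.
gain = r2_out_of_sample(full, y) - r2_out_of_sample(baseline, y)
print(gain > 0)
```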

  12. Mathematical Modeling of Intestinal Iron Absorption Using Genetic Programming

    PubMed Central

    Colins, Andrea; Gerdtzen, Ziomara P.; Nuñez, Marco T.; Salgado, J. Cristian

    2017-01-01

    Iron is a trace metal that is key to the development of living organisms. Its absorption process is complex and highly regulated at the transcriptional, translational and systemic levels. Recently, the internalization of the DMT1 transporter has been proposed as an additional regulatory mechanism at the intestinal level, associated with the mucosal block phenomenon. The short-term effect of iron exposure on apical uptake and initial absorption rates was studied in Caco-2 cells at different apical iron concentrations, using both an experimental approach and a mathematical modeling framework. This is the first report of short-term studies for this system. A non-linear behavior in the apical uptake dynamics was observed, which does not follow the classic saturation dynamics of traditional biochemical models. We propose a method for developing mathematical models for complex systems, based on a genetic programming algorithm. The algorithm is aimed at obtaining models with a high predictive capacity, and considers an additional parameter-fitting stage and an additional jackknife stage for estimating the generalization error. We developed a model for the iron uptake system with a higher predictive capacity than classic biochemical models. This was observed both with the apical uptake dataset used for generating the model and with an independent initial-rates dataset used to test the predictive capacity of the model. The model obtained is a function of time and the initial apical iron concentration, with a linear component that captures the global tendency of the system, and a non-linear component that can be associated with the movement of DMT1 transporters. The model presented in this paper allows detailed analysis and interpretation of experimental data, and identification of the key components of this complex biological process. 
This general method holds great potential for application to the elucidation of biological mechanisms and their key components in other complex systems. PMID:28072870
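    The jackknife stage mentioned above can be sketched in a few lines: refit the model with each observation left out and average the squared error made on the held-out point, giving a generalization-error estimate. The data and the straight-line stand-in for the genetic-programming model below are hypothetical.

```python
import numpy as np

# Synthetic "uptake" curve; a straight line stands in for the evolved model.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 30)
uptake = 1.2 * t + 0.5 * rng.normal(size=t.size)

def fit(tt, yy):
    # least-squares fit of intercept and slope
    A = np.column_stack([np.ones_like(tt), tt])
    return np.linalg.lstsq(A, yy, rcond=None)[0]

# Leave-one-out jackknife: error on each point when it is excluded from fitting.
errors = []
for i in range(t.size):
    mask = np.arange(t.size) != i
    b0, b1 = fit(t[mask], uptake[mask])
    errors.append((uptake[i] - (b0 + b1 * t[i])) ** 2)

jackknife_mse = float(np.mean(errors))  # estimate of generalization error
print(round(jackknife_mse, 3))
```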

  13. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  14. Frequency-domain full-waveform inversion with non-linear descent directions

    NASA Astrophysics Data System (ADS)

    Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.

    2018-05-01

    Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s0 is, in our scheme, proportional to at most (Δs/s0)3 in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s0)2. For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model. 
The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a benchmark FWI approach involving the standard gradient.

  15. Two-warehouse partial backlogging inventory model for deteriorating items with linear trend in demand under inflationary conditions

    NASA Astrophysics Data System (ADS)

    Jaggi, Chandra K.; Khanna, Aditi; Verma, Priyanka

    2011-07-01

    In today's business transactions, there are various reasons, namely, bulk purchase discounts, re-ordering costs, seasonality of products, inflation-induced demand, etc., which force the buyer to order more than the warehouse capacity. Such situations call for additional storage space to store the excess units purchased; this additional storage space is typically a rented warehouse. Inflation plays a significant role here: it increases the cost of goods. To safeguard against rising prices during an inflationary regime, the organisation prefers to keep a higher inventory, thereby increasing the aggregate demand. This additional inventory needs additional storage space, which is provided by a rented warehouse. Ignoring the effects of the time value of money and inflation might yield misleading results. In this study, a two-warehouse inventory model with a linear trend in demand under inflationary conditions, having different rates of deterioration, has been developed. Shortages at the owned warehouse are also allowed, subject to partial backlogging. The solution methodology provided in the model helps to decide on the feasibility of renting a warehouse. Finally, the findings have been illustrated with the help of numerical examples, and a comprehensive sensitivity analysis has also been provided.

  16. The novel application of artificial neural network on bioelectrical impedance analysis to assess the body composition in elderly

    PubMed Central

    2013-01-01

    Background This study aims to improve the accuracy of bioelectrical impedance analysis (BIA) prediction equations for estimating fat-free mass (FFM) of the elderly by using a non-linear back-propagation artificial neural network (BP-ANN) model, and to compare its predictive accuracy with that of the linear regression model, using dual-energy X-ray absorptiometry (DXA) as the reference method. Methods A total of 88 Taiwanese elderly adults were recruited in this study as subjects. Linear regression equations and a BP-ANN prediction equation were developed using impedances and other anthropometrics for predicting the reference FFM measured by DXA (FFMDXA) in 36 male and 26 female Taiwanese elderly adults. The FFM estimated by BIA prediction equations using the traditional linear regression model (FFMLR) and the BP-ANN model (FFMANN) were compared to the FFMDXA. The measurement results of an additional 26 elderly adults were used to validate the accuracy of the predictive models. Results The results showed the significant predictors were impedance, gender, age, height and weight in the developed FFMLR linear model (LR) for predicting FFM (coefficient of determination, r2 = 0.940; standard error of estimate (SEE) = 2.729 kg; root mean square error (RMSE) = 2.571 kg, P < 0.001). The above predictors were set as the variables of the input layer, using five neurons, in the BP-ANN model (r2 = 0.987 with an SD = 1.192 kg and a relatively lower RMSE = 1.183 kg), which had greater accuracy for estimating FFM when compared with the linear model. The results showed a better agreement between FFMANN and FFMDXA than between FFMLR and FFMDXA. Conclusion When comparing the performance of the developed prediction equations for estimating the reference FFMDXA, the linear model has a lower r2 with a larger SD in predictive results than the BP-ANN model, which indicates the ANN model is more suitable for estimating FFM. PMID:23388042

  17. Nonautonomous linear system of the terrestrial carbon cycle

    NASA Astrophysics Data System (ADS)

    Luo, Y.

    2012-12-01

    The carbon cycle has been studied through observations from various networks, field and laboratory experiments, and simulation models. Much less has been done on theoretical analysis to understand the fundamental properties of the carbon cycle and then guide observatory, experimental, and modeling research. This presentation explores the theoretical properties of the terrestrial carbon cycle and how those properties can be used to make observatory, experimental, and modeling research more effective. Thousands of published data sets from litter decomposition and soil incubation studies almost all indicate that decay processes of litter and soil organic carbon can be well described by first-order differential equations with one or more pools. Carbon pool dynamics in plants and soil after disturbances (e.g., wildfire, clear-cutting of forests, and plowing of soil for cropping) and during natural recovery or ecosystem restoration also exhibit characteristics of first-order linear systems. Thus, numerous lines of empirical evidence indicate that the terrestrial carbon cycle can be adequately described as a nonautonomous linear system. The linearity reflects the nature of the carbon cycle: carbon, once fixed by photosynthesis, is linearly transferred among pools within an ecosystem. The linear carbon transfer, however, is modified by nonlinear functions of external forcing variables. In addition, photosynthetic carbon influx is also nonlinearly influenced by external variables. This nonautonomous linear system can be mathematically expressed by a first-order linear ordinary matrix equation. We have recently used this theoretical property of the terrestrial carbon cycle to develop a semi-analytic solution for spinup. The new method has been applied to five global land models, including NCAR's CLM and the CABLE model, and can computationally accelerate spinup by two orders of magnitude. 
We also use this theoretical property to develop an analytic framework to decompose modeled carbon cycling into a few traceable components so as to facilitate model intercomparison, benchmark analysis, and data assimilation of global land models.
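    The semi-analytic spinup idea follows directly from the nonautonomous linear form: writing the system as dC/dt = xi(t) A C + b u(t), the steady state is a single linear solve rather than thousands of simulated years of integration. The pool structure and all numbers below are illustrative, not from any of the cited land models.

```python
import numpy as np

# Illustrative three-pool transfer matrix A (per year), influx partition b,
# and mean environmental modifier xi and photosynthetic influx u.
A = np.array([[-1.0,  0.0,   0.0 ],     # leaf/litter pool
              [ 0.4, -0.2,   0.0 ],     # fast soil pool
              [ 0.0,  0.05, -0.01]])    # slow soil pool
b = np.array([1.0, 0.0, 0.0])           # influx enters the first pool
u_mean, xi_mean = 2.0, 0.8

# Semi-analytic spinup: at steady state dC/dt = 0, so
#   C_ss = -(xi * A)^{-1} b u  -- one linear solve.
C_ss = np.linalg.solve(-(xi_mean * A), b * u_mean)
print(np.round(C_ss, 2))  # slow pool holds the most carbon
```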

  18. Modified hyperbolic sine model for titanium dioxide-based memristive thin films

    NASA Astrophysics Data System (ADS)

    Abu Bakar, Raudah; Syahirah Kamarozaman, Nur; Fazlida Hanim Abdullah, Wan; Herman, Sukreen Hana

    2018-03-01

    Since the emergence of the memristor as the newest fundamental circuit element, studies on memristor modeling have evolved. To date, the developed models have been based on the linear model, the linear ionic drift model using different window functions, the tunnelling barrier model and the hyperbolic-sine function based model. Although the hyperbolic-sine function model could predict the memristor's electrical properties, the model was not well fitted to the experimental data. In order to improve the performance of the hyperbolic-sine function model, the state variable equation was modified. Adding a window function did not provide an improved fit. Multiplying Yakopcic's state variable model with Chang's model, on the other hand, resulted in closer agreement with the TiO2 thin-film experimental data. The percentage error was approximately 2.15%.
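    As a rough illustration of the hyperbolic-sine family of models (not the authors' fitted model), the sketch below integrates a generic memristor with current i = x·a·sinh(b·v) and a state variable x drifting with the applied voltage through an x(1-x) window; parameters a, b, and k are made up. It checks the signature pinched hysteresis: zero volts gives zero amps.

```python
import math

# Illustrative parameters only; Euler integration of the state equation.
a, b, k, dt = 1e-4, 3.0, 50.0, 1e-4
x = 0.1                                       # normalized internal state
trace = []
for step in range(10000):                     # one second of a 1 Hz drive
    t = step * dt
    v = math.sin(2 * math.pi * t)             # sinusoidal excitation
    i = x * a * math.sinh(b * v)              # hyperbolic-sine conduction
    x += k * v * x * (1.0 - x) * dt           # state drift with x(1-x) window
    trace.append((v, i))

# Pinched hysteresis loop passes through the origin: v = 0 -> i = 0.
v0, i0 = min(trace, key=lambda p: abs(p[0]))
print(abs(i0) < 1e-6)
```

The x(1-x) window keeps the state bounded in (0, 1), which is the role window functions play in the linear ionic drift models mentioned above.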

  19. Robust global identifiability theory using potentials--Application to compartmental models.

    PubMed

    Wongvanich, N; Hann, C E; Sirisena, H R

    2015-04-01

    This paper presents a global practical identifiability theory for analyzing and identifying linear and nonlinear compartmental models. The compartmental system is prolonged onto the potential jet space to formulate a set of input-output equations that are integrals in terms of the measured data, which allows for robust identification of parameters without requiring any simulation of the model differential equations. Two classes of linear and non-linear compartmental models are considered. The theory is first applied to analyze the linear nitrous oxide (N2O) uptake model. The fitting accuracy of the identified models from differential jet space and potential jet space identifiability theories is compared with a realistic noise level of 3% which is derived from sensor noise data in the literature. The potential jet space approach gave a match that was well within the coefficient of variation. The differential jet space formulation was unstable and not suitable for parameter identification. The proposed theory is then applied to a nonlinear immunological model for mastitis in cows. In addition, the model formulation is extended to include an iterative method which allows initial conditions to be accurately identified. With up to 10% noise, the potential jet space theory predicts the normalized population concentration infected with pathogens, to within 9% of the true curve. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Vestibular coriolis effect differences modeled with three-dimensional linear-angular interactions.

    PubMed

    Holly, Jan E

    2004-01-01

    The vestibular coriolis (or "cross-coupling") effect is traditionally explained by cross-coupled angular vectors, which, however, do not explain the differences in perceptual disturbance under different acceleration conditions. For example, during head roll tilt in a rotating chair, the magnitude of perceptual disturbance is affected by a number of factors, including acceleration or deceleration of the chair rotation or a zero-g environment. Therefore, it has been suggested that linear-angular interactions play a role. The present research investigated whether these perceptual differences and others involving linear coriolis accelerations could be explained under one common framework: the laws of motion in three dimensions, which include all linear-angular interactions among all six components of motion (three angular and three linear). The results show that the three-dimensional laws of motion predict the differences in perceptual disturbance. No special properties of the vestibular system or nervous system are required. In addition, simulations were performed with angular, linear, and tilt time constants inserted into the model, giving the same predictions. Three-dimensional graphics were used to highlight the manner in which linear-angular interaction causes perceptual disturbance, and a crucial component is the Stretch Factor, which measures the "unexpected" linear component.

  1. Linear summation of outputs in a balanced network model of motor cortex

    PubMed Central

    Capaday, Charles; van Vreeswijk, Carl

    2015-01-01

    Given the non-linearities of the neural circuitry's elements, we would expect cortical circuits to respond non-linearly when activated. Surprisingly, when two points in the motor cortex are activated simultaneously, the EMG responses are the linear sum of the responses evoked by each of the points activated separately. Additionally, the corticospinal transfer function is close to linear, implying that the synaptic interactions in motor cortex must be effectively linear. To account for this, here we develop a model of motor cortex composed of multiple interconnected points, each comprised of reciprocally connected excitatory and inhibitory neurons. We show how non-linearities in neuronal transfer functions are eschewed by strong synaptic interactions within each point. Consequently, the simultaneous activation of multiple points results in a linear summation of their respective outputs. We also consider the effects of reduction of inhibition at a cortical point when one or more surrounding points are active. The network response in this condition is linear over an approximately two- to three-fold decrease of inhibitory feedback strength. This result supports the idea that focal disinhibition allows linear coupling of motor cortical points to generate movement related muscle activation patterns; albeit with a limitation on gain control. The model also explains why neural activity does not spread as far out as the axonal connectivity allows, whilst also explaining why distant cortical points can be, nonetheless, functionally coupled by focal disinhibition. Finally, we discuss the advantages that linear interactions at the cortical level afford to motor command synthesis. PMID:26097452

  2. Reference evapotranspiration forecasting based on local meteorological and global climate information screened by partial mutual information

    NASA Astrophysics Data System (ADS)

    Fang, Wei; Huang, Shengzhi; Huang, Qiang; Huang, Guohe; Meng, Erhao; Luan, Jinkai

    2018-06-01

    In this study, reference evapotranspiration (ET0) forecasting models are developed for the least economically developed regions, which are subject to meteorological data scarcity. Firstly, the partial mutual information (PMI), capable of capturing both linear and nonlinear dependence, is investigated regarding its utility to identify relevant predictors and exclude those that are redundant, through comparison with partial linear correlation. An efficient input selection technique is crucial for decreasing model data requirements. Then, the interconnection between global climate indices and regional ET0 is identified. Relevant climatic indices are introduced as additional predictors to supply information regarding ET0 that would otherwise have to come from unavailable meteorological data. The case study in the Jing River and Beiluo River basins, China, reveals that PMI outperforms partial linear correlation in excluding redundant information, favouring smaller predictor sets. The teleconnection analysis identifies the correlation between Nino 1 + 2 and regional ET0, indicating influences of ENSO events on the evapotranspiration process in the study area. Furthermore, introducing Nino 1 + 2 as a predictor helps to yield more accurate ET0 forecasts. A model performance comparison also shows that non-linear stochastic models (SVR or RF with input selection through PMI) do not always outperform linear models (MLR with inputs screened by linear correlation). However, the former can offer quite comparable performance while depending on smaller predictor sets. Therefore, efforts such as screening model inputs through PMI and incorporating global climatic indices interconnected with ET0 can benefit the development of ET0 forecasting models suitable for data-scarce regions.
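    The advantage of information-theoretic screening over linear correlation can be sketched with synthetic data. The example below uses plain binned mutual information rather than the paper's partial mutual information, and a quadratic dependence that linear correlation is blind to; all data are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x_lin = rng.normal(size=n)                      # linearly relevant input
x_quad = rng.normal(size=n)                     # non-linearly relevant input
et0 = 0.5 * x_lin + x_quad ** 2 + 0.1 * rng.normal(size=n)

def mutual_info(x, y, bins=12):
    """Histogram-based mutual information estimate (nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

# Linear correlation sees almost nothing for the quadratic driver...
corr_quad = abs(np.corrcoef(x_quad, et0)[0, 1])
# ...while mutual information ranks it clearly above an irrelevant input.
noise = rng.normal(size=n)
mi_quad, mi_noise = mutual_info(x_quad, et0), mutual_info(noise, et0)
print(corr_quad < 0.1, mi_quad > mi_noise)
```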

  3. smoothG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barker, Andrew T.; Gelever, Stephan A.; Lee, Chak S.

    2017-12-12

    smoothG is a collection of parallel C++ classes/functions that algebraically constructs reduced models of different resolutions from a given high-fidelity graph model. In addition, smoothG provides efficient linear solvers for the reduced models. Beyond pure graph problems, the software finds application in subsurface flow and power grid simulations, in which graph Laplacians arise.

  4. Further advances in predicting species distributions

    Treesearch

    Gretchen G. Moisen; Thomas C. Edwards; Patrick E. Osborne

    2006-01-01

    In 2001, a workshop focused on the use of generalized linear models (GLM: McCullagh and Nelder, 1989) and generalized additive models (GAM: Hastie and Tibshirani, 1986, 1990) for predicting species distributions was held in Riederalp, Switzerland. This topic led to the publication of special issues in Ecological Modelling (Guisan et al., 2002) and Biodiversity and...

  5. Interpreting linear support vector machine models with heat map molecule coloring

    PubMed Central

    2011-01-01

    Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have been shown to perform convincingly on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach helps to determine the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. In particular, substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered complementary to structure-based modeling approaches. As such, it helps to provide a better understanding of the binding mode of an inhibitor. PMID:21439031
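    The core of the coloring idea is that a linear model's decision value w·x decomposes into per-feature contributions w_i·x_i, which can be mapped back onto the atoms generating each fingerprint bit. The tiny fingerprint, weights, and atom memberships below are entirely hypothetical, just to show the bookkeeping.

```python
import numpy as np

w = np.array([0.8, -0.5, 0.3])          # learned linear SVM weights
x = np.array([1.0, 1.0, 0.0])           # fingerprint of one compound
# atoms_in_bit[i] lists the atom indices covered by fingerprint bit i
atoms_in_bit = [[0, 1], [1, 2], [3]]

atom_color = np.zeros(4)
for bit, atoms in enumerate(atoms_in_bit):
    # spread each active bit's contribution evenly over its atoms
    contrib = w[bit] * x[bit] / len(atoms)
    for a in atoms:
        atom_color[a] += contrib

# The coloring mass sums to the model's decision value w . x,
# so hot and cold atoms partition the predicted activity.
print(np.round(atom_color, 2), float(w @ x))
```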

  6. Trend in Obesity Prevalence in European Adult Cohort Populations during Follow-up since 1996 and Their Predictions to 2015

    PubMed Central

    von Ruesten, Anne; Steffen, Annika; Floegel, Anna; van der A, Daphne L.; Masala, Giovanna; Tjønneland, Anne; Halkjaer, Jytte; Palli, Domenico; Wareham, Nicholas J.; Loos, Ruth J. F.; Sørensen, Thorkild I. A.; Boeing, Heiner

    2011-01-01

    Objective To investigate trends in obesity prevalence in recent years and to predict the obesity prevalence in 2015 in European populations. Methods Data of 97 942 participants from seven cohorts involved in the European Prospective Investigation into Cancer and Nutrition (EPIC) study participating in the Diogenes project (referred to as the “Diogenes cohort” in the following), with weight measurements at baseline and follow-up, were used to predict future obesity prevalence with logistic linear and non-linear (leveling-off) regression models. In addition, linear and leveling-off models were fitted to the EPIC-Potsdam dataset, with five weight measures during the observation period, to find out which of these two models might provide the more realistic prediction. Results During a mean follow-up period of 6 years, the obesity prevalence in the Diogenes cohort increased from 13% to 17%. The linear prediction model predicted an overall obesity prevalence of about 30% in 2015, whereas the leveling-off model predicted a prevalence of about 20%. In the EPIC-Potsdam cohort, the shape of the obesity trend favors a leveling-off model among men (R2 = 0.98), and a linear model among women (R2 = 0.99). Conclusion Our data show an increase in obesity prevalence since the 1990s, and predictions to 2015 suggest a sizeable further increase in European populations. However, the estimates from the leveling-off model were considerably lower. PMID:22102897

  7. Longitudinal mathematics development of students with learning disabilities and students without disabilities: a comparison of linear, quadratic, and piecewise linear mixed effects models.

    PubMed

    Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz

    2015-04-01

    Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model captured best the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
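    The model-comparison step described above can be sketched with least squares: a piecewise linear growth model is a straight line plus a hinge term max(grade - knot, 0) that lets the slope change at the knot. The synthetic "scores" below are not the ECLS-K data; they simply bend at grade 3 so the piecewise model should fit better.

```python
import numpy as np

rng = np.random.default_rng(4)
grade = np.repeat(np.arange(0.0, 9.0), 20)            # grades K..8, 20 pupils each
knot = 3.0
# fast growth up to the knot, slower afterwards, plus noise
truth = 8.0 * np.minimum(grade, knot) + 2.0 * np.maximum(grade - knot, 0.0)
score = truth + rng.normal(scale=3.0, size=grade.size)

def sse(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(((y - X @ beta) ** 2).sum())

ones = np.ones_like(grade)
linear = np.column_stack([ones, grade])
piecewise = np.column_stack([ones, grade,
                             np.maximum(grade - knot, 0.0)])   # hinge term

# The hinge lets the slope change at the knot, so the piecewise model
# fits markedly better when growth really does bend there.
print(sse(piecewise, score) < sse(linear, score))
```

A full mixed-effects fit would add per-student random intercepts and slopes; the fixed-effects comparison above is just the functional-form part of the argument.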

  8. Response statistics of rotating shaft with non-linear elastic restoring forces by path integration

    NASA Astrophysics Data System (ADS)

    Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael

    2017-07-01

    Extreme statistics of random vibrations is studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as elastic and non-linear; a comparison is made with a linearized restoring force to see the effect of force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is not generally the case for the non-linear system, except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied; namely, the fast Fourier transform (FFT) is used to simulate the dynamic system's additive noise. The latter significantly reduces computational time compared to classical PI. Excitation is modelled as Gaussian white noise; however, white noise with any distribution can be implemented with the same PI technique. Multidirectional Markov noise can also be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimated joint probability density function (PDF) as initial input. Symmetry of the dynamic system was utilized to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of using PI rather than MC is that PI offers high accuracy in the tail of the probability distribution, which is of critical importance for, e.g., extreme value statistics, system reliability, and first-passage probability.

  9. Sparse 4D TomoSAR imaging in the presence of non-linear deformation

    NASA Astrophysics Data System (ADS)

    Khwaja, Ahmed Shaharyar; Çetin, Müjdat

    2018-04-01

    In this paper, we present a sparse four-dimensional tomographic synthetic aperture radar (4D TomoSAR) imaging scheme that can estimate elevation and linear as well as non-linear seasonal deformation rates of scatterers using the interferometric phase. Unlike existing sparse processing techniques that use fixed dictionaries based on a linear deformation model, we use a variable dictionary for the non-linear deformation in the form of seasonal sinusoidal deformation, in addition to the fixed dictionary for the linear deformation. We estimate the amplitude of the sinusoidal deformation using an optimization method and create the variable dictionary using the estimated amplitude. We show preliminary results using simulated data that demonstrate the soundness of our proposed technique for sparse 4D TomoSAR imaging in the presence of non-linear deformation.
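
    The idea of a variable dictionary can be sketched as below. The phase model (elevation plus linear and seasonal sinusoidal deformation) and all parameter names are simplified assumptions for illustration, not the authors' exact formulation.

```python
import cmath, math

# Sketch of one TomoSAR dictionary column: a unit-magnitude phasor per
# acquisition, whose interferometric phase combines an elevation term
# (scaled by the spatial baseline) with linear plus seasonal deformation.
# Wavelength, baselines, and the seasonal amplitude are hypothetical.

def atom(times, baselines, wavelength, r0, s, v, amp, period=1.0):
    """One dictionary column for elevation s, velocity v, seasonal amplitude amp."""
    col = []
    for t, b in zip(times, baselines):
        defo = v * t + amp * math.sin(2.0 * math.pi * t / period)  # linear + seasonal
        phase = -4.0 * math.pi / wavelength * (b * s / r0 + defo)
        col.append(cmath.exp(1j * phase))
    return col

times = [0.0, 0.1, 0.2, 0.3]          # acquisition times in years (hypothetical)
baselines = [0.0, 50.0, -30.0, 80.0]  # perpendicular baselines in metres (hypothetical)
col = atom(times, baselines, wavelength=0.031, r0=6e5, s=10.0, v=0.005, amp=0.002)
print([abs(c) for c in col])  # each entry has unit magnitude
```

    The fixed dictionary stacks such columns over a grid of (s, v); the variable part re-generates them after the seasonal amplitude has been estimated.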

  10. Quantum processing by remote quantum control

    NASA Astrophysics Data System (ADS)

    Qiang, Xiaogang; Zhou, Xiaoqi; Aungskunsiri, Kanin; Cable, Hugo; O'Brien, Jeremy L.

    2017-12-01

    Client-server models enable computations to be hosted remotely on quantum servers. We present a novel protocol for realizing this task, with practical advantages when using technology feasible in the near term. Client tasks are realized as linear combinations of operations implemented by the server, where the linear coefficients are hidden from the server. We report an experimental demonstration of our protocol using linear optics, which realizes a linear combination of two single-qubit operations via a remote single-qubit control. In addition, we explain when our protocol can remain efficient for larger computations, as well as some ways in which privacy can be maintained using our protocol.

  11. A Linear Electromagnetic Piston Pump

    NASA Astrophysics Data System (ADS)

    Hogan, Paul H.

    Advancements in mobile hydraulics for human-scale applications have increased demand for a compact hydraulic power supply. Conventional designs couple a rotating electric motor to a hydraulic pump, which increases the package volume and requires several energy conversions. This thesis investigates the use of a free piston as the moving element in a linear motor to eliminate multiple energy conversions and decrease the overall package volume. A coupled model used a quasi-static magnetic equivalent circuit to calculate the motor inductance and the electromagnetic force acting on the piston. The force was an input to a time-domain model to evaluate the mechanical and pressure dynamics. The magnetic circuit model was validated with finite element analysis and an experimental prototype linear motor. The coupled model was optimized using a multi-objective genetic algorithm to explore the parameter space and maximize power density and efficiency. An experimental prototype linear pump coupled pistons to an off-the-shelf linear motor to validate the mechanical and pressure dynamics models. The magnetic circuit force calculation agreed within 3% of finite element analysis, and within 8% of experimental data from the unoptimized prototype linear motor. The optimized motor geometry also agreed well with FEA; at zero piston displacement, the magnetic circuit model calculates the optimized motor's force within 10% of FEA in less than 1/1000 the computational time, which makes it well suited to genetic optimization algorithms. The mechanical model agrees well with the experimental piston pump position data when tuned for additional unmodeled mechanical friction. Optimized results suggest that power density can reach 400% of the state of the art with net efficiency as high as 85%, demonstrating that a linear electromagnetic piston pump has the potential to serve as a more compact and efficient supply of fluid power at the human scale.

  12. Resonant sterile neutrino dark matter in the local and high-z Universe

    NASA Astrophysics Data System (ADS)

    Bozek, Brandon; Boylan-Kolchin, Michael; Horiuchi, Shunsaku; Garrison-Kimmel, Shea; Abazajian, Kevork; Bullock, James S.

    2016-06-01

    Sterile neutrinos comprise an entire class of dark matter models that, depending on their production mechanism, can be hot, warm, or cold dark matter (CDM). We simulate the Local Group and representative volumes of the Universe in a variety of sterile neutrino models, all of which are consistent with the possible existence of a radiative decay line at ˜3.5 keV. We compare models of production via resonances in the presence of a lepton asymmetry (suggested by Shi & Fuller 1999) to `thermal' models. We find that properties in the highly non-linear regime - e.g. counts of satellites and internal properties of haloes and subhaloes - are insensitive to the precise fall-off in power with wavenumber, indicating that non-linear evolution essentially washes away differences in the initial (linear) matter power spectrum. In the quasi-linear regime at higher redshifts, however, quantitative differences in the 3D matter power spectra remain, raising the possibility that such models can be tested with future observations of the Lyman-α forest. While many of the sterile neutrino models largely eliminate multiple small-scale issues within the CDM paradigm, we show that these models may be ruled out in the near future via discoveries of additional dwarf satellites in the Local Group.

  13. Estimation of the limit of detection in semiconductor gas sensors through linearized calibration models.

    PubMed

    Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago

    2018-07-12

    The limit of detection (LOD) is a key figure of merit in chemical sensing. However, the estimation of this figure of merit is hindered by the non-linear calibration curves characteristic of semiconductor gas sensor technologies such as metal oxide (MOX) sensors, gasFETs, or thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. The application of the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation to non-linear semiconductor gas sensors is not straightforward due to the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity. Copyright © 2018 Elsevier B.V. All rights reserved.
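
    The core linearisation idea can be sketched as follows: MOX sensor responses often approximate a power law R = a·C^b, which is linear in log-log space, so ordinary least squares (and hence the standard univariate LOD machinery) can be applied to the transformed data. The power-law assumption, calibration data, and parameter values below are illustrative, not from the paper.

```python
import math

# Fit a power-law calibration R = a * C**b by ordinary least squares on
# (log C, log R) pairs. Once linearized, IUPAC-style LOD estimation can
# proceed on the transformed scale. Data here are synthetic and noiseless.

def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope   # intercept, slope

# Synthetic power-law calibration: R = 2.0 * C ** 0.5
concs = [1.0, 2.0, 5.0, 10.0, 20.0]
resps = [2.0 * c ** 0.5 for c in concs]
intercept, slope = ols([math.log(c) for c in concs], [math.log(r) for r in resps])
print(slope, math.exp(intercept))  # recovers b ≈ 0.5 and a ≈ 2.0
```

    With real sensor data, the residual scatter of the linearized fit would feed the usual LOD formula (e.g., a 3.3·σ/sensitivity criterion) on the log scale.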

  14. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    NASA Astrophysics Data System (ADS)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high-quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit depth, and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model based solely on physically measured display characteristics, and a perceptual model that transforms physical parameters using human visual system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICtCp), which consists of the PQ luminance non-linearity (ST 2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF, and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model predicts subjective quality better than the physical model and that SVM predicts better than linear regression. The significance and contribution of each display parameter were investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated, and we found that models based on the PQ non-linearity performed better.

  15. A hierarchy for modeling high speed propulsion systems

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Deabreu, Alex

    1991-01-01

    General research efforts on reduced order propulsion models for control systems design are overviewed. Methods for modeling high speed propulsion systems are discussed including internal flow propulsion systems that do not contain rotating machinery such as inlets, ramjets, and scramjets. The discussion is separated into four sections: (1) computational fluid dynamics model for the entire nonlinear system or high order nonlinear models; (2) high order linearized model derived from fundamental physics; (3) low order linear models obtained from other high order models; and (4) low order nonlinear models. Included are special considerations on any relevant control system designs. The methods discussed are for the quasi-one dimensional Euler equations of gasdynamic flow. The essential nonlinear features represented are large amplitude nonlinear waves, moving normal shocks, hammershocks, subsonic combustion via heat addition, temperature dependent gases, detonation, and thermal choking.

  16. A Novel Nonlinear Piezoelectric Energy Harvesting System Based on Linear-Element Coupling: Design, Modeling and Dynamic Analysis.

    PubMed

    Zhou, Shengxi; Yan, Bo; Inman, Daniel J

    2018-05-09

    This paper presents a novel nonlinear piezoelectric energy harvesting system which consists of linear piezoelectric energy harvesters connected by linear springs. In principle, the presented nonlinear system can improve broadband energy harvesting efficiency where magnets are forbidden. The linear springs inevitably produce a nonlinear spring force on the connected harvesters, because of the geometrical relationship and the time-varying relative displacement between two adjacent harvesters. Therefore, the presented system has strong nonlinear characteristics. A theoretical model of the presented nonlinear system is deduced, based on Euler-Bernoulli beam theory, Kirchhoff's law, piezoelectric theory and the relevant geometrical relationship. The energy harvesting enhancement of the presented nonlinear system (for n = 2, 3) is numerically verified by comparison with its linear counterparts. In the case study, the output power area of the presented nonlinear system with two and three energy harvesters is 268.8% and 339.8% of that of their linear counterparts, respectively. In addition, the nonlinear dynamic response characteristics are analyzed via bifurcation diagrams, Poincaré maps of the phase trajectory, and the spectrum of the output voltage.

  17. Generalized linear and generalized additive models in studies of species distributions: Setting the scene

    USGS Publications Warehouse

    Guisan, Antoine; Edwards, T.C.; Hastie, T.

    2002-01-01

    An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling. © 2002 Elsevier Science B.V. All rights reserved.
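
    The machinery behind a GLM can be sketched in a few lines: iteratively reweighted least squares (IRLS) for a Poisson regression with log link and a single predictor. A GAM would replace the linear predictor b0 + b1·x with a sum of smooth functions; the data below are synthetic.

```python
import math

# IRLS for a Poisson GLM with log link, one predictor. Each iteration
# forms the working response z and weights w for the current fit and
# solves a 2-parameter weighted least-squares problem in closed form.

def poisson_irls(xs, ys, iters=25):
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        etas = [b0 + b1 * x for x in xs]
        mus = [math.exp(e) for e in etas]
        zs = [e + (y - m) / m for e, m, y in zip(etas, mus, ys)]  # working response
        w = mus                                                   # weights for log link
        Sw = sum(w)
        Swx = sum(wi * x for wi, x in zip(w, xs))
        Swz = sum(wi * z for wi, z in zip(w, zs))
        Swxx = sum(wi * x * x for wi, x in zip(w, xs))
        Swxz = sum(wi * x * z for wi, x, z in zip(w, xs, zs))
        det = Sw * Swxx - Swx ** 2
        b1 = (Sw * Swxz - Swx * Swz) / det
        b0 = (Swz - b1 * Swx) / Sw
    return b0, b1

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.exp(0.5 + 0.3 * x) for x in xs]   # exact Poisson means, no noise
b0, b1 = poisson_irls(xs, ys)
print(b0, b1)  # ≈ (0.5, 0.3)
```

    GLM software generalizes exactly this loop to arbitrary link functions and design matrices; a GAM additionally penalizes the smooth terms inside the weighted least-squares step.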

  18. Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Volden, Thomas R.

    2010-01-01

    The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
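
    One of the standard statistics for the dependency check described above can be sketched directly: a variance inflation factor (VIF) flags near-linear dependence between two candidate regression terms. With more terms, the auxiliary regression would be multiple rather than simple; the data are illustrative.

```python
# VIF between two candidate terms: 1 / (1 - R^2), where R^2 comes from
# regressing one term on the other. A large VIF means the terms are
# nearly linearly dependent and one of them should be dropped.

def vif_two_terms(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    saa = sum((x - ma) ** 2 for x in a)
    sbb = sum((x - mb) ** 2 for x in b)
    sab = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    r2 = sab * sab / (saa * sbb)         # R^2 of a regressed on b
    return 1.0 / (1.0 - r2)

loads = [1.0, 2.0, 3.0, 4.0, 5.0]
near_copy = [1.01, 1.98, 3.02, 4.01, 4.99]   # nearly dependent candidate term
indep = [2.0, -1.0, 0.5, -0.3, 1.7]          # unrelated candidate term
print(vif_two_terms(loads, near_copy))  # very large -> near-linear dependency
print(vif_two_terms(loads, indep))      # close to 1 -> terms can coexist
```

    In balance calibration practice this screen would run over all candidate terms before the statistical-significance tests the paper recommends.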

  19. The linear relationship between the Vulnerable Elders Survey-13 score and mortality in an Asian population of community-dwelling older persons.

    PubMed

    Wang, Jye; Lin, Wender; Chang, Ling-Hui

    2018-01-01

    The Vulnerable Elders Survey-13 (VES-13) has been used as a screening tool to identify vulnerable community-dwelling older persons for more in-depth assessment and targeted interventions. Although many studies support its use in different populations, few have addressed Asian populations. The optimal scaling system for the VES-13 in predicting health outcomes also has not been adequately tested. This study (1) assesses the applicability of the VES-13 to predict the mortality of community-dwelling older persons in Taiwan, (2) identifies the best scaling system for the VES-13 in predicting mortality using generalized additive models (GAMs), and (3) determines whether including covariates, such as socio-demographic factors and common geriatric syndromes, improves model fitting. This retrospective longitudinal cohort study analyzed the data of 2184 community-dwelling persons 65 years old or older from the 2003 wave of the nation-wide Taiwan Longitudinal Study on Aging. Cox proportional hazards models and GAMs were used. The VES-13 significantly predicted the mortality of Taiwan's community-dwelling elders. A one-point increase in the VES-13 score raised the risk of death by 26% (hazard ratio, 1.26; 95% confidence interval, 1.21-1.32). The hazard ratio of death increased linearly with each additional VES-13 score point, suggesting that using a continuous scale is appropriate. Inclusion of socio-demographic factors and geriatric syndromes improved the model fitting. The VES-13 is appropriate for an Asian population. VES-13 scores linearly predict the mortality of this population. Adjusting the weighting of the physical activity items may improve the performance of the VES-13. Copyright © 2017 Elsevier B.V. All rights reserved.
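
    The reported per-point hazard ratio composes multiplicatively on the continuous scale, which can be made concrete with a short worked example (the function name is our own framing):

```python
# Each one-point increase in the VES-13 score multiplies the hazard of
# death by 1.26 (the reported hazard ratio), so a k-point difference
# multiplies it by 1.26**k under the proportional hazards model.

hr_per_point = 1.26

def hazard_ratio(points):
    return hr_per_point ** points

print(round(hazard_ratio(1), 2))  # 1.26
print(round(hazard_ratio(3), 2))  # 2.0 -> a 3-point gap roughly doubles the hazard
```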

  20. Application of linear logic to simulation

    NASA Astrophysics Data System (ADS)

    Clarke, Thomas L.

    1998-08-01

    Linear logic, since its introduction by Girard in 1987, has proven expressive and powerful. Linear logic has provided natural encodings of Turing machines, Petri nets and other computational models, and is also capable of naturally modeling resource-dependent aspects of reasoning. The distinguishing characteristic of linear logic is that it accounts for resources; two instances of the same variable are considered differently from a single instance. Linear logic thus must obey a form of the linear superposition principle. A proposition can be reasoned with only once, unless a special operator is applied. Informally, linear logic distinguishes two kinds of conjunction, two kinds of disjunction, and also introduces a modal storage operator that explicitly indicates propositions that can be reused. This paper discusses the application of linear logic to simulation. A wide variety of logics have been developed; in addition to classical logic, there are fuzzy logics, affine logics, quantum logics, etc. All of these have found application in simulations of one sort or another. The special characteristics of linear logic and its benefits for simulation are discussed. Of particular interest is a connection that can be made between linear logic and simulated dynamics by using the concepts of Lie algebras and Lie groups. Lie groups provide the connection between the exponential modal storage operators of linear logic and the eigenfunctions of dynamic differential operators. Particularly suggestive are possible relations between complexity results for linear logic and non-computability results for dynamical systems.

  1. Application of Multiregressive Linear Models, Dynamic Kriging Models and Neural Network Models to Predictive Maintenance of Hydroelectric Power Systems

    NASA Astrophysics Data System (ADS)

    Lucifredi, A.; Mazzieri, C.; Rossi, M.

    2000-05-01

    Since the operational conditions of a hydroelectric unit can vary within a wide range, the monitoring system must be able to distinguish between variations of the monitored variable caused by variations of the operating conditions and those due to the onset and progression of failures and misoperations. The paper aims to identify the best technique to be adopted for the monitoring system. Three different methods have been implemented and compared. Two of them use statistical techniques: the first, linear multiple regression, expresses the monitored variable as a linear function of the process parameters (independent variables), while the second, the dynamic kriging technique, is a modified multiple linear regression technique representing the monitored variable as a linear combination of the process variables in such a way as to minimize the variance of the estimation error. The third is based on neural networks. Tests have shown that the monitoring system based on the kriging technique is not affected by some problems common to the other two models: the requirement of a large amount of data for tuning (both for training the neural network and for defining the optimum plane for the multiple regression), not only in the system start-up phase but also after trivial maintenance involving the substitution of machinery components that directly affect the observed variable; and the need for different models to describe satisfactorily the different operating ranges of the plant. The monitoring system based on the kriging statistical technique overcomes these difficulties: it does not require a large amount of data to be tuned and is immediately operational (given two points, the third can be estimated at once); in addition, the model follows the system without adapting itself to it.
The results of the experimentation performed seem to indicate that a model based on a neural network or on linear multiple regression is not optimal, and that a different approach is necessary to reduce the amount of work during the learning phase, using, when available, all the information stored during the initial phase of the plant to build the reference baseline and elaborating, where appropriate, the raw information available. A mixed approach using the kriging statistical technique and neural network techniques could optimise the result.
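
    The monitoring idea shared by all three approaches can be sketched generically: a baseline model predicts the monitored variable from the process parameters, and a fault is flagged when the residual drifts beyond a control limit. The baseline values and the 3-sigma limit below are illustrative placeholders for the regression, kriging, or network model.

```python
# Generic residual-based condition monitoring: compare the measured value
# of the monitored variable against a model prediction, and alarm when the
# residual exceeds k standard deviations of healthy-condition residuals.

def residual_alarm(measured, predicted, sigma, k=3.0):
    return abs(measured - predicted) > k * sigma

baseline_sigma = 0.2   # estimated from residuals under healthy conditions (hypothetical)
print(residual_alarm(5.1, 5.0, baseline_sigma))   # False: within control limits
print(residual_alarm(6.0, 5.0, baseline_sigma))   # True: possible incipient fault
```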

  2. Investigation of the exposure to radon and progeny in the thermal spas of Loutraki (Attica-Greece): results from measurements and modelling.

    PubMed

    Nikolopoulos, Dimitrios; Vogiannis, Efstratios; Petraki, Ermioni; Zisos, Athanasios; Louizi, Anna

    2010-01-01

    Radon and progeny ((218)Po, (214)Pb, (214)Bi and (214)Po) in thermal spas are well-known radioactive pollutants that impose an additional radiation burden on patients, due to the activity concentration peaks that appear during bath treatment or to the drinking of waters with high radon content. This burden also affects the working personnel of the spas. The present paper focuses on the thermal spas of Loutraki (Attica, Greece). The aim was to investigate the health impact of radon and progeny on patients and working personnel. Attention has been paid to transient radon and progeny concentration peaks (for bath treatment) and to the radon content of thermal waters (both for bath treatment and drinking therapy). Designed experiments were carried out, which included radon and progeny activity concentration measurements in thermal waters and ambient air. Additionally, published models describing transient radon and progeny concentration peaks were employed. The models were based on the physicochemical processes involved and employed non-linear first-order mass-balance differential equations, which were solved numerically with the aid of specially developed computer codes. The collected measurements were analysed incorporating these models. Results were checked via non-linear statistical tests. Predictions and measurements were found to be in close agreement. Non-linear parameters were estimated. The models were employed for dosimetric estimations for patients and working personnel. The effective doses of patients receiving bath treatment were found to be low but not negligible. The corresponding doses to patients receiving potable treatment were found to be high but below the proposed international limits. The working personnel were found to be exposed to considerable effective doses, albeit well below the acceptable limits for workers. It was concluded that treatment and working in the Loutraki spas lead to intense variations in radon and progeny and, consequently, an additional health impact on both patients and working personnel.

  3. Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media: II. The nonlinear theory

    NASA Astrophysics Data System (ADS)

    Bona, J. L.; Chen, M.; Saut, J.-C.

    2004-05-01

    In part I of this work (Bona J L, Chen M and Saut J-C 2002 Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media I: Derivation and the linear theory J. Nonlinear Sci. 12 283-318), a four-parameter family of Boussinesq systems was derived to describe the propagation of surface water waves. Similar systems are expected to arise in other physical settings where the dominant aspects of propagation are a balance between the nonlinear effects of convection and the linear effects of frequency dispersion. In addition to deriving these systems, we determined in part I exactly which of them are linearly well posed in various natural function classes. It was argued that linear well-posedness is a natural necessary requirement for the possible physical relevance of the model in question. In this paper, it is shown that the first-order correct models that are linearly well posed are in fact locally nonlinearly well posed. Moreover, in certain specific cases, global well-posedness is established for physically relevant initial data. In part I, higher-order correct models were also derived. A preliminary analysis of a promising subclass of these models shows them to be well posed.

  4. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    PubMed

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.

  5. A Multiple Risk Factors Model of the Development of Aggression among Early Adolescents from Urban Disadvantaged Neighborhoods

    ERIC Educational Resources Information Center

    Kim, Sangwon; Orpinas, Pamela; Kamphaus, Randy; Kelder, Steven H.

    2011-01-01

    This study empirically derived a multiple risk factors model of the development of aggression among middle school students in urban, low-income neighborhoods, using Hierarchical Linear Modeling (HLM). Results indicated that aggression increased from sixth to eighth grade. Additionally, the influences of four risk domains (individual, family,…

  6. Using Generalized Additive Models to Analyze Single-Case Designs

    ERIC Educational Resources Information Center

    Shadish, William; Sullivan, Kristynn

    2013-01-01

    Many analyses for single-case designs (SCDs)--including nearly all the effect size indicators-- currently assume no trend in the data. Regression and multilevel models allow for trend, but usually test only linear trend and have no principled way of knowing if higher order trends should be represented in the model. This paper shows how Generalized…

  7. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
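
    The first step the guidelines describe can be sketched numerically: fit a simple linear regression of suspended-sediment concentration (SSC) on turbidity and compute a model standard percentage error (MSPE). The MSPE definition used here (residual standard error as a percentage of mean SSC), the acceptance threshold, and the data are illustrative assumptions, not USGS values.

```python
import math

# Step 1 of the workflow: simple OLS of SSC on turbidity plus an MSPE-style
# error statistic. If the error exceeds the criterion, the guidelines call
# for trying a turbidity-streamflow multiple regression instead.

def fit_and_mspe(turb, ssc):
    n = len(turb)
    mx, my = sum(turb) / n, sum(ssc) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(turb, ssc))
             / sum((x - mx) ** 2 for x in turb))
    icept = my - slope * mx
    sse = sum((y - (icept + slope * x)) ** 2 for x, y in zip(turb, ssc))
    rse = math.sqrt(sse / (n - 2))           # residual standard error
    return icept, slope, 100.0 * rse / my    # error as percent of mean SSC

turb = [10.0, 25.0, 40.0, 60.0, 90.0]        # turbidity, FNU (hypothetical)
ssc = [18.0, 52.0, 79.0, 121.0, 183.0]       # SSC, mg/L (hypothetical)
icept, slope, mspe = fit_and_mspe(turb, ssc)
print(round(slope, 3), round(mspe, 1))  # sensitivity and percent error
```

    The fitted model would then be applied to the continuous turbidity record, with ongoing calibration samples used to verify that the site-specific relation has not shifted.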

  8. Use of non-linear mixed-effects modelling and regression analysis to predict the number of somatic coliphages by plaque enumeration after 3 hours of incubation.

    PubMed

    Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco

    2017-10-01

    The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique, to evaluate the feasibility of using traditional coliphage plaque-forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably from the PFU counts determined according to the ISO after only 3 hours of incubation. One linear model applies when the number of plaques detected after 3 hours is between 4 and 26 PFU, with a fit of (1.48 × Counts_3h + 1.97); the other applies to counts >26 PFU, with a fit of (1.18 × Counts_3h + 2.95). If the number of plaques detected after 3 hours is <4 PFU, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
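
    The two fitted linear rules from the abstract can be encoded directly; the function name and the fall-back return value are our own framing.

```python
# Predict the final coliphage count from the PFU count observed after
# 3 h of incubation, using the two linear fits reported in the abstract.

def predict_final_pfu(counts_3h):
    if counts_3h < 4:
        return None                        # too few plaques: incubate (18 ± 3) h instead
    if counts_3h <= 26:
        return 1.48 * counts_3h + 1.97     # fit for 4-26 PFU at 3 h
    return 1.18 * counts_3h + 2.95         # fit for >26 PFU at 3 h

print(predict_final_pfu(10))   # ≈ 16.77
print(predict_final_pfu(40))   # ≈ 50.15
print(predict_final_pfu(2))    # None
```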

  9. Estimation of a partially linear additive model for data from an outcome-dependent sampling design with a continuous outcome

    PubMed Central

    Tan, Ziwen; Qin, Guoyou; Zhou, Haibo

    2016-01-01

    Outcome-dependent sampling (ODS) designs have been well recognized as a cost-effective way to enhance study efficiency in both statistical literature and biomedical and epidemiologic studies. A partially linear additive model (PLAM) is widely applied in real problems because it allows for a flexible specification of the dependence of the response on some covariates in a linear fashion and other covariates in a nonlinear non-parametric fashion. Motivated by an epidemiological study investigating the effect of prenatal polychlorinated biphenyls exposure on children's intelligence quotient (IQ) at age 7 years, we propose a PLAM in this article to investigate a more flexible non-parametric inference on the relationships among the response and covariates under the ODS scheme. We propose the estimation method and establish the asymptotic properties of the proposed estimator. Simulation studies are conducted to show the improved efficiency of the proposed ODS estimator for PLAM compared with that from a traditional simple random sampling design with the same sample size. The data of the above-mentioned study is analyzed to illustrate the proposed method. PMID:27006375

  10. Key Results of Interaction Models with Centering

    ERIC Educational Resources Information Center

    Afshartous, David; Preston, Richard A.

    2011-01-01

    We consider the effect on estimation of simultaneous variable centering and interaction effects in linear regression. We technically define, review, and amplify many of the statistical issues for interaction models with centering in order to create a useful and compact reference for teachers, students, and applied researchers. In addition, we…

  11. INTERANNUAL VARIATION IN METEOROLOGICALLY ADJUSTED OZONE LEVELS IN THE EASTERN UNITED STATES: A COMPARISON OF TWO APPROACHES

    EPA Science Inventory

    Assessing the influence of abatement efforts and other human activities on ozone levels is complicated by the atmosphere's changeable nature. Two statistical methods, the dynamic linear model (DLM) and the generalized additive model (GAM), are used to estimate ozone trends in the...

  12. The Dangers of Estimating V˙O2max Using Linear, Nonexercise Prediction Models.

    PubMed

    Nevill, Alan M; Cooke, Carlton B

    2017-05-01

    This study aimed to compare the accuracy and goodness of fit of two competing models (linear vs allometric) when estimating V˙O2max (mL·kg⁻¹·min⁻¹) using nonexercise prediction models. The two competing models were fitted to the V˙O2max (mL·kg⁻¹·min⁻¹) data taken from two previously published studies. Study 1 (the Allied Dunbar National Fitness Survey) recruited 1732 randomly selected healthy participants, 16 yr and older, from 30 English parliamentary constituencies. Estimates of V˙O2max were obtained using a progressive incremental test on a motorized treadmill. In study 2, maximal oxygen uptake was measured directly during a fatigue-limited treadmill test in older men (n = 152) and women (n = 146) 55 to 86 yr old. In both studies, the quality of fit associated with estimating V˙O2max (mL·kg⁻¹·min⁻¹) was superior using allometric rather than linear (additive) models based on all criteria (R², maximum log-likelihood, and Akaike information criterion). Results suggest that linear models will systematically overestimate V˙O2max for participants in their 20s and underestimate V˙O2max for participants in their 60s and older. The residuals saved from the linear models were neither normally distributed nor independent of the predicted values nor age. This probably explains the absence of a key quadratic age term in the linear models, crucially identified using allometric models. Not only does the curvilinear age decline within an exponential function follow a more realistic age decline (the right-hand side of a bell-shaped curve), but the allometric models identified either a stature-to-body mass ratio (study 1) or a fat-free mass-to-body mass ratio (study 2), both associated with leanness when estimating V˙O2max. Adopting allometric models will provide more accurate predictions of V˙O2max (mL·kg⁻¹·min⁻¹) using plausible, biologically sound, and interpretable models.
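The linear-versus-allometric comparison can be illustrated on synthetic data: an additive linear model is fitted directly, while an allometric (multiplicative) model is fitted as a log-linear regression. The functional form and all coefficients below are invented for illustration and are not the paper's fitted models:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(20, 70, n)
mass = rng.uniform(55, 95, n)
# Synthetic V.O2max-like data with a multiplicative (allometric)
# structure; the form and coefficients are illustrative only.
vo2 = 300.0 * mass**-0.5 * np.exp(-0.02 * age) * np.exp(rng.normal(0, 0.03, n))

# Additive linear model: vo2 ~ b0 + b1*age + b2*mass
X_lin = np.column_stack([np.ones(n), age, mass])
beta_lin, *_ = np.linalg.lstsq(X_lin, vo2, rcond=None)
rss_lin = np.sum((vo2 - X_lin @ beta_lin) ** 2)

# Allometric model fitted on the log scale:
# log(vo2) ~ b0 + b1*log(mass) + b2*age, i.e. vo2 = e^b0 * mass^b1 * e^(b2*age)
X_allo = np.column_stack([np.ones(n), np.log(mass), age])
beta_allo, *_ = np.linalg.lstsq(X_allo, np.log(vo2), rcond=None)
rss_allo = np.sum((vo2 - np.exp(X_allo @ beta_allo)) ** 2)

print(beta_allo[1])        # recovered mass exponent, close to -0.5
print(rss_allo < rss_lin)  # allometric fit is closer on these data
```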

  13. Robust Models for Optic Flow Coding in Natural Scenes Inspired by Insect Biology

    PubMed Central

    Brinkworth, Russell S. A.; O'Carroll, David C.

    2009-01-01

    The extraction of accurate self-motion information from the visual world is a difficult problem that has been solved very efficiently by biological organisms utilizing non-linear processing. Previous bio-inspired models for motion detection based on a correlation mechanism have been dogged by issues that arise from their sensitivity to undesired properties of the image, such as contrast, which vary widely between images. Here we present a model with multiple levels of non-linear dynamic adaptive components based directly on the known or suspected responses of neurons within the visual motion pathway of the fly brain. By testing the model under realistic high-dynamic range conditions we show that the addition of these elements makes the motion detection model robust across a large variety of images, velocities and accelerations. Furthermore the performance of the entire system is more than the incremental improvements offered by the individual components, indicating beneficial non-linear interactions between processing stages. The algorithms underlying the model can be implemented in either digital or analog hardware, including neuromorphic analog VLSI, but defy an analytical solution due to their dynamic non-linear operation. The successful application of this algorithm has applications in the development of miniature autonomous systems in defense and civilian roles, including robotics, miniature unmanned aerial vehicles and collision avoidance sensors. PMID:19893631

  14. Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models

    NASA Astrophysics Data System (ADS)

    Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael

    2016-06-01

    We address the sparse approximation problem in the case where the data are approximated by the linear combination of a small number of elementary signals, each of these signals depending non-linearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov Chain Monte Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase of the computational cost per iteration, consequently reducing the global cost of the estimation procedure.

  15. Emotional expression in music: contribution, linearity, and additivity of primary musical cues

    PubMed Central

    Eerola, Tuomas; Friberg, Anders; Bresin, Roberto

    2013-01-01

    The aim of this study is to manipulate musical cues systematically to determine the aspects of music that contribute to emotional expression, and whether these cues operate in additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues and the ranked importance of these was established by multiple regression. The most important cue was mode followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77–89% of variance in ratings. Quadratic encoding of cues did lead to minor but significant increases of the models (0–8%). Finally, the interactions between the cues were non-existent suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin and Lindström, 2010). PMID:23908642

  16. Emotional expression in music: contribution, linearity, and additivity of primary musical cues.

    PubMed

    Eerola, Tuomas; Friberg, Anders; Bresin, Roberto

    2013-01-01

    The aim of this study is to manipulate musical cues systematically to determine the aspects of music that contribute to emotional expression, and whether these cues operate in additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues and the ranked importance of these was established by multiple regression. The most important cue was mode followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77-89% of variance in ratings. Quadratic encoding of cues did lead to minor but significant increases of the models (0-8%). Finally, the interactions between the cues were non-existent suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin and Lindström, 2010).

  17. Genetic parameters for racing records in trotters using linear and generalized linear models.

    PubMed

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.

  18. Linear shoaling of free-surface waves in multi-layer non-hydrostatic models

    NASA Astrophysics Data System (ADS)

    Bai, Yefei; Cheung, Kwok Fai

    2018-01-01

    The capability to describe shoaling over sloping bottom is fundamental to modeling of coastal wave transformation. The linear shoaling gradient provides a metric to measure this property in non-hydrostatic models with layer-integrated formulations. The governing equations in Boussinesq form facilitate derivation of the linear shoaling gradient, which is in the form of a [2P + 2, 2P] expansion of the water depth parameter kd, with P equal to 1 for a one-layer model and (4N - 4) for an N-layer model. The expansion reproduces the analytical solution from Airy wave theory at the shallow water limit and maintains a reasonable approximation up to kd = 1.2 and 2 for the one- and two-layer models. Additional layers provide rapid and monotonic convergence of the shoaling gradient into deep water. Numerical experiments of wave propagation over a plane slope illustrate manifestation of the shoaling errors through the transformation processes from deep to shallow water. Although they arise outside the zone of active wave transformation, shoaling errors from deep to intermediate water accumulate to produce an appreciable impact on the wave amplitude in shallow water.

  19. Financial Distress Prediction using Linear Discriminant Analysis and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Santoso, Noviyanti; Wibowo, Wahyu

    2018-03-01

    Financial distress is the early stage before bankruptcy. Bankruptcies caused by financial distress can be seen in the financial statements of a company. The ability to predict financial distress has become an important research topic because it can provide early warning for the company. In addition, predicting financial distress is also beneficial for investors and creditors. This research builds prediction models of financial distress for industrial companies in Indonesia by comparing the performance of Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) combined with a variable selection technique. The results show that the prediction model based on a hybrid Stepwise-SVM achieves a better balance among fitting ability, generalization ability and model stability than the other models.

  20. Understanding Child Stunting in India: A Comprehensive Analysis of Socio-Economic, Nutritional and Environmental Determinants Using Additive Quantile Regression

    PubMed Central

    Fenske, Nora; Burns, Jacob; Hothorn, Torsten; Rehfuess, Eva A.

    2013-01-01

    Background: Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. Objective: We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. Design: Using cross-sectional data for children aged 0–24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. Results: At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. Conclusions: Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role. PMID:24223839

  1. Understanding child stunting in India: a comprehensive analysis of socio-economic, nutritional and environmental determinants using additive quantile regression.

    PubMed

    Fenske, Nora; Burns, Jacob; Hothorn, Torsten; Rehfuess, Eva A

    2013-01-01

    Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. Using cross-sectional data for children aged 0-24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role.
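Quantile regression rests on minimizing the asymmetric "pinball" (check) loss rather than squared error; the additive quantile regression used in this study builds that loss into a structured additive model. A minimal numpy sketch of the loss itself, showing that minimizing it over a constant recovers the empirical 35% quantile (the data are a synthetic stand-in for height-for-age Z-scores, not the survey data):

```python
import numpy as np

def pinball_loss(y, pred, q):
    """Check (pinball) loss minimized in quantile regression at quantile q."""
    r = y - pred
    return np.mean(np.maximum(q * r, (q - 1) * r))

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, 100_000)  # synthetic stand-in for Z-scores

q = 0.35
# Minimizing the pinball loss over a constant recovers the empirical quantile.
grid = np.linspace(-3, 3, 601)
losses = [pinball_loss(y, c, q) for c in grid]
best = grid[int(np.argmin(losses))]

print(best)               # close to the empirical 35% quantile
print(np.quantile(y, q))  # approx. -0.385 for a standard normal
```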

  2. Prediction uncertainty and data worth assessment for groundwater transport times in an agricultural catchment

    NASA Astrophysics Data System (ADS)

    Zell, Wesley O.; Culver, Teresa B.; Sanford, Ward E.

    2018-06-01

    Uncertainties about the age of base-flow discharge can have serious implications for the management of degraded environmental systems where subsurface pathways, and the ongoing release of pollutants that accumulated in the subsurface during past decades, dominate the water quality signal. Numerical groundwater models may be used to estimate groundwater return times and base-flow ages and thus predict the time required for stakeholders to see the results of improved agricultural management practices. However, the uncertainty inherent in the relationship between (i) the observations of atmospherically-derived tracers that are required to calibrate such models and (ii) the predictions of system age that the observations inform have not been investigated. For example, few if any studies have assessed the uncertainty of numerically-simulated system ages or evaluated the uncertainty reductions that may result from the expense of collecting additional subsurface tracer data. In this study we combine numerical flow and transport modeling of atmospherically-derived tracers with prediction uncertainty methods to accomplish four objectives. First, we show the relative importance of head, discharge, and tracer information for characterizing response times in a uniquely data rich catchment that includes 266 age-tracer measurements (SF6, CFCs, and 3H) in addition to long term monitoring of water levels and stream discharge. Second, we calculate uncertainty intervals for model-simulated base-flow ages using both linear and non-linear methods, and find that the prediction sensitivity vector used by linear first-order second-moment methods results in much larger uncertainties than non-linear Monte Carlo methods operating on the same parameter uncertainty. Third, by combining prediction uncertainty analysis with multiple models of the system, we show that data-worth calculations and monitoring network design are sensitive to variations in the amount of water leaving the system via stream discharge and irrigation withdrawals. Finally, we demonstrate a novel model-averaged computation of potential data worth that can account for these uncertainties in model structure.

  3. Quantifying the sensitivity of feedstock properties and process conditions on hydrochar yield, carbon content, and energy content.

    PubMed

    Li, Liang; Wang, Yiying; Xu, Jiting; Flora, Joseph R V; Hoque, Shamia; Berge, Nicole D

    2018-08-01

    Hydrothermal carbonization (HTC) is a wet, low temperature thermal conversion process that continues to gain attention for the generation of hydrochar. The importance of specific process conditions and feedstock properties on hydrochar characteristics is not well understood. To evaluate this, linear and non-linear models were developed to describe hydrochar characteristics based on data collected from HTC-related literature. A Sobol analysis was subsequently conducted to identify parameters that most influence hydrochar characteristics. Results from this analysis indicate that for each investigated hydrochar property, the model fit and predictive capability associated with the random forest models is superior to both the linear and regression tree models. Based on results from the Sobol analysis, the feedstock properties and process conditions most influential on hydrochar yield, carbon content, and energy content were identified. In addition, a variational process parameter sensitivity analysis was conducted to determine how feedstock property importance changes with process conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Cyclic Plasticity Constitutive Model for Uniaxial Ratcheting Behavior of AZ31B Magnesium Alloy

    NASA Astrophysics Data System (ADS)

    Lin, Y. C.; Liu, Zheng-Hua; Chen, Xiao-Min; Long, Zhi-Li

    2015-05-01

    Investigating the ratcheting behavior of magnesium alloys is important for reliable structural design. The uniaxial ratcheting behavior of AZ31B magnesium alloy is studied by asymmetric cyclic stress-controlled experiments at room temperature. A modified kinematic hardening model is established to describe the uniaxial ratcheting behavior of the studied alloy. In the modified model, the material parameter m_i is expressed as an exponential function of the maximum equivalent stress. The modified model can be used to predict the ratcheting strain evolution of the studied alloy under single-step and multi-step asymmetric stress-controlled cyclic loadings. Additionally, due to the significant effect of twinning on the plastic deformation of magnesium alloy, the relationship between the material parameter m_i and the linear density of twins is discussed. It is found that there is a linear relationship between the material parameter m_i and the linear density of twins induced by the cyclic loadings.

  5. The linear transformation model with frailties for the analysis of item response times.

    PubMed

    Wang, Chun; Chang, Hua-Hua; Douglas, Jeffrey A

    2013-02-01

    The item response times (RTs) collected from computerized testing represent an underutilized source of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. In this paper, we propose a semi-parametric model for RTs, the linear transformation model with a latent speed covariate, which combines the flexibility of non-parametric modelling and the brevity as well as interpretability of parametric modelling. In this new model, the RTs, after some non-parametric monotone transformation, become a linear model with latent speed as covariate plus an error term. The distribution of the error term implicitly defines the relationship between the RT and examinees' latent speeds; whereas the non-parametric transformation is able to describe various shapes of RT distributions. The linear transformation model represents a rich family of models that includes the Cox proportional hazards model, the Box-Cox normal model, and many other models as special cases. This new model is embedded in a hierarchical framework so that both RTs and responses are modelled simultaneously. A two-stage estimation method is proposed. In the first stage, the Markov chain Monte Carlo method is employed to estimate the parametric part of the model. In the second stage, an estimating equation method with a recursive algorithm is adopted to estimate the non-parametric transformation. Applicability of the new model is demonstrated with a simulation study and a real data application. Finally, methods to evaluate the model fit are suggested. © 2012 The British Psychological Society.

  6. Neurosurgery simulation using non-linear finite element modeling and haptic interaction

    NASA Astrophysics Data System (ADS)

    Lee, Huai-Ping; Audette, Michel; Joldes, Grand R.; Enquobahrie, Andinet

    2012-02-01

    Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems, and coarse volumetric meshes. However, these systems are not clinically realistic. We present here an ongoing work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element operations. We employ a virtual coupling method for separating deformable body simulation and collision detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation. The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the material properties of the tissue and the speed of colliding objects. Hence, additional efforts including dynamic relaxation are required to improve the stability of the system.

  7. Transport properties of active Brownian particles in a modified energy-depot model driven by correlated noises

    NASA Astrophysics Data System (ADS)

    Guan, Lin; Fang, Yuwen; Li, Kongzhai; Zeng, Chunhua; Yang, Fengzao

    2018-09-01

    In this paper, we investigate the role of correlated multiplicative (κ1) and additive (κ2) noises in a modified energy-depot model, in which a linear term is added to the conversion of internal energy of active Brownian particles (ABPs). The linear term (a1 ≠ 0.0) in the energy conversion model breaks the symmetry of the potential to generate motion of the ABPs with a net transport velocity. Adopting a nonlinear Langevin approach, we discuss the transport properties of the ABPs, and our results show that: (i) the transport velocity ⟨υ1⟩ of the ABPs is always positive whether the correlation intensity λ = 0.0 or not; (ii) for a small value of the multiplicative noise intensity κ1, the variation of ⟨υ1⟩ with λ shows a minimum, i.e., there exists an optimal value of the correlation intensity λ at which ⟨υ1⟩ of the ABPs is minimized, but for a large value of κ1, ⟨υ1⟩ monotonically decreases; (iii) the transport velocity ⟨υ1⟩ increases with the increase of κ1 or κ2, i.e., the multiplicative or additive noise can facilitate the transport of the ABPs; and (iv) the effective diffusion increases with the increase of a1, namely, the linear term in the modified energy conversion model can enhance the diffusion of the ABPs.

  8. Does transport time help explain the high trauma mortality rates in rural areas? New and traditional predictors assessed by new and traditional statistical methods

    PubMed Central

    Røislien, Jo; Lossius, Hans Morten; Kristiansen, Thomas

    2015-01-01

    Background: Trauma is a leading global cause of death. Trauma mortality rates are higher in rural areas, constituting a challenge for quality and equality in trauma care. The aim of the study was to explore population density and transport time to hospital care as possible predictors of geographical differences in mortality rates, and to what extent choice of statistical method might affect the analytical results and accompanying clinical conclusions. Methods: Using data from the Norwegian Cause of Death registry, deaths from external causes 1998–2007 were analysed. Norway consists of 434 municipalities, and municipality population density and travel time to hospital care were entered as predictors of municipality mortality rates in univariate and multiple regression models of increasing model complexity. We fitted linear regression models with continuous and categorised predictors, as well as piecewise linear and generalised additive models (GAMs). Models were compared using Akaike's information criterion (AIC). Results: Population density was an independent predictor of trauma mortality rates, while the contribution of transport time to hospital care was highly dependent on choice of statistical model. A multiple GAM or piecewise linear model was superior, and similar, in terms of AIC. However, while transport time was statistically significant in multiple models with piecewise linear or categorised predictors, it was not in GAM or standard linear regression. Conclusions: Population density is an independent predictor of trauma mortality rates. The added explanatory value of transport time to hospital care is marginal and model-dependent, highlighting the importance of exploring several statistical models when studying complex associations in observational data. PMID:25972600
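The model comparison described above can be illustrated with AIC computed from least-squares fits. Below is a hedged numpy sketch using a synthetic predictor whose effect changes slope (a stand-in for transport time; the data, break point, and coefficients are invented):

```python
import numpy as np

def aic_gaussian(rss, n, k):
    """AIC for a Gaussian least-squares fit with k parameters (up to an
    additive constant shared by all models fitted to the same data)."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0, 10, n)
# Outcome with a kink at x = 5, so a single slope is misspecified.
y = 1.0 * x + 2.0 * np.maximum(x - 5, 0) + rng.normal(0, 1, n)

designs = {
    "linear":    np.column_stack([np.ones(n), x]),
    "piecewise": np.column_stack([np.ones(n), x, np.maximum(x - 5, 0)]),
}
aics = {}
for name, X in designs.items():
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    aics[name] = aic_gaussian(rss, n, X.shape[1] + 1)  # +1 for sigma^2
    print(name, round(aics[name], 1))
# The piecewise model attains the lower AIC on these data.
```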

  9. Linear Mathematical Model for Seam Tracking with an Arc Sensor in P-GMAW Processes

    PubMed Central

    Liu, Wenji; Li, Liangyu; Hong, Ying; Yue, Jianfeng

    2017-01-01

    Arc sensors have been used in seam tracking and widely studied since the 1980s, and commercial arc sensing products for T- and V-shaped grooves have been developed. However, it is difficult to use these arc sensors in narrow gap welding because the arc stability and sensing accuracy are not satisfactory. Pulsed gas metal arc welding (P-GMAW) has been successfully applied in narrow gap welding and all-position welding processes, so it is worthwhile to research P-GMAW arc sensing technology. In this paper, we derived a linear mathematical P-GMAW model for arc sensing, and the assumptions for the model are verified through experiments and finite element methods. Finally, the linear characteristics of the mathematical model were investigated. In torch height changing experiments, uphill experiments, and groove angle changing experiments the P-GMAW arc signals all satisfied the linear rules. In addition, the faster the welding speed, the higher the arc signal sensitivities; the smaller the groove angle, the greater the arc sensitivities. The arc signal variation rate needs to be modified according to the welding power, groove angles, and weaving or rotation speed. PMID:28335425

  10. Linear Mathematical Model for Seam Tracking with an Arc Sensor in P-GMAW Processes.

    PubMed

    Liu, Wenji; Li, Liangyu; Hong, Ying; Yue, Jianfeng

    2017-03-14

    Arc sensors have been used in seam tracking and widely studied since the 1980s, and commercial arc sensing products for T- and V-shaped grooves have been developed. However, it is difficult to use these arc sensors in narrow gap welding because the arc stability and sensing accuracy are not satisfactory. Pulsed gas metal arc welding (P-GMAW) has been successfully applied in narrow gap welding and all-position welding processes, so it is worthwhile to research P-GMAW arc sensing technology. In this paper, we derived a linear mathematical P-GMAW model for arc sensing, and the assumptions for the model are verified through experiments and finite element methods. Finally, the linear characteristics of the mathematical model were investigated. In torch height changing experiments, uphill experiments, and groove angle changing experiments the P-GMAW arc signals all satisfied the linear rules. In addition, the faster the welding speed, the higher the arc signal sensitivities; the smaller the groove angle, the greater the arc sensitivities. The arc signal variation rate needs to be modified according to the welding power, groove angles, and weaving or rotation speed.

  11. Variable Selection with Prior Information for Generalized Linear Models via the Prior LASSO Method.

    PubMed

    Jiang, Yuan; He, Yunxiao; Zhang, Heping

    LASSO is a popular statistical tool often used in conjunction with generalized linear models that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, a wealth of biological and biomedical data has already been collected, and these data may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely, prior LASSO (pLASSO), to incorporate that prior information into penalized generalized linear models. The goal is achieved by adding to the LASSO criterion function an additional measure of the discrepancy between the prior information and the model. For linear regression, the whole solution path of the pLASSO estimator can be found with a procedure similar to Least Angle Regression (LARS). Asymptotic theory and simulation results show that pLASSO provides significant improvement over LASSO when the prior information is relatively accurate. When the prior information is less reliable, pLASSO shows great robustness to the misspecification. We illustrate the application of pLASSO using a real data set from a genome-wide association study.
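    The pLASSO idea can be illustrated with a minimal numpy sketch: ordinary LASSO solved by proximal gradient descent, plus a quadratic penalty pulling the coefficients toward a prior guess. The quadratic discrepancy term and the simulated data are simplified stand-ins for illustration, not the paper's actual criterion or its LARS-type path algorithm.

```python
import numpy as np

def plasso_sketch(X, y, beta_prior, lam=0.1, eta=0.5, step=0.01, n_iter=2000):
    """Proximal-gradient sketch of a prior-informed LASSO.

    Minimizes  0.5/n * ||y - X b||^2 + eta/2 * ||b - beta_prior||^2 + lam * ||b||_1.
    The quadratic prior term is a simplified stand-in for the paper's
    discrepancy measure.
    """
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n + eta * (b - beta_prior)
        b = b - step * grad
        # soft-thresholding: the proximal operator of the l1 penalty
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_b = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ true_b + 0.1 * rng.normal(size=200)
prior = true_b.copy()  # "accurate" prior information for this toy run
est = plasso_sketch(X, y, prior)
print(np.round(est, 2))
```

    With an accurate prior, the recovered coefficients sit close to the truth; setting eta to zero recovers plain LASSO.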

  12. Linear and Poisson models for genetic evaluation of tick resistance in cross-bred Hereford x Nellore cattle.

    PubMed

    Ayres, D R; Pereira, R J; Boligon, A A; Silva, F F; Schenkel, F S; Roso, V M; Albuquerque, L G

    2013-12-01

    Cattle resistance to ticks is measured by the number of ticks infesting the animal. The model used for the genetic analysis of cattle resistance to ticks frequently requires logarithmic transformation of the observations. The objective of this study was to evaluate the predictive ability and goodness of fit of different models for the analysis of this trait in cross-bred Hereford x Nellore cattle. Three models were tested: a linear model using logarithmic transformation of the observations (MLOG); a linear model without transformation of the observations (MLIN); and a generalized linear Poisson model with a residual term (MPOI). All models included the classificatory effects of contemporary group and genetic group, the covariates age of animal at recording and individual heterozygosity, and additive genetic effects as random effects. Heritability estimates were 0.08 ± 0.02, 0.10 ± 0.02 and 0.14 ± 0.04 for the MLIN, MLOG and MPOI models, respectively. Model fit quality, assessed by the deviance information criterion (DIC) and residual mean square, indicated the superiority of the MPOI model. The predictive ability of the models was compared by a validation test in an independent sample. The MPOI model was slightly superior in terms of goodness of fit and predictive ability, whereas the correlations between observed and predicted tick counts were practically the same for all models. A higher rank correlation between breeding values was observed between the MLOG and MPOI models. The Poisson model can thus be used for the selection of tick-resistant animals. © 2013 Blackwell Verlag GmbH.
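    For readers unfamiliar with the MPOI-style analysis, a fixed-effects-only Poisson log-linear model can be fitted by iteratively reweighted least squares in a few lines of numpy. This sketch uses simulated count data and omits the random additive genetic effects that the animal model above includes.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson GLM with log link fitted by iteratively reweighted least
    squares (Fisher scoring). Fixed effects only: the animal model in the
    paper additionally carries random additive genetic effects."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)          # mean under the log link
        W = mu                         # Poisson variance equals the mean
        z = X @ beta + (y - mu) / mu   # working response
        beta = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (W * z))
    return beta

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))                 # simulated tick-like counts
beta_hat = poisson_irls(X, y)
print(np.round(beta_hat, 2))
```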

  13. A quasi-likelihood approach to non-negative matrix factorization

    PubMed Central

    Devarajan, Karthik; Cheung, Vincent C.K.

    2017-01-01

    A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
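    A concrete member of this family is KL-divergence NMF, whose multiplicative updates correspond to the Poisson (quasi-)likelihood case. The numpy sketch below applies the classical Lee-Seung updates to synthetic low-rank data; the paper's framework generalizes this to other variance functions and link functions.

```python
import numpy as np

def nmf_kl(V, r, n_iter=200, seed=0):
    """Multiplicative-update NMF minimizing the generalized KL divergence,
    i.e. the Poisson member of the quasi-likelihood family (Lee-Seung updates)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.uniform(0.5, 1.5, (m, r))
    H = rng.uniform(0.5, 1.5, (r, n))
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        H *= W.T @ (V / (W @ H + eps)) / (W.sum(axis=0)[:, None] + eps)
        W *= (V / (W @ H + eps)) @ H.T / (H.sum(axis=1)[None, :] + eps)
    return W, H

rng = np.random.default_rng(2)
V = rng.uniform(0, 1, (20, 3)) @ rng.uniform(0, 1, (3, 30))  # exact rank-3 data
W, H = nmf_kl(V, 3)
rel_err = np.abs(V - W @ H).mean() / V.mean()
print(round(rel_err, 4))
```

    On exact low-rank non-negative data, the relative reconstruction error drops to a small value while both factors remain non-negative.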

  14. The Development of Web-based Graphical User Interface for Unified Modeling Data with Multi (Correlated) Responses

    NASA Astrophysics Data System (ADS)

    Made Tirta, I.; Anggraeni, Dian

    2018-04-01

    Statistical models have developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures, or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated, so statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed-effect models such as GLMMs (Generalized Linear Mixed Models) and HGLMs (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open-source software R, but only through a command-line interface (using scripts), whereas most applied researchers rely heavily on menu-based Graphical User Interfaces (GUIs). Using the Shiny framework, we developed a standard pull-down-menu Web GUI that unifies most models for correlated responses. The Web GUI accommodates almost all needed features and enables users to fit and compare various models for repeated-measures data (GEE, GLMM, HGLM, and GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web GUI and illustrates their use. In general, we find that GEE, GLMM, and HGLM give very close results.

  15. Linear dependence between the wavefront gradient and the masked intensity for the point source with a CCD sensor

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Ma, Liang; Wang, Bin

    2018-01-01

    In contrast to the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need a wavefront sensor (WFS) to measure wavefront aberrations. It is simpler than conventional AO in system architecture and can be applied under complex conditions. The model-based WFSless system has great potential for real-time correction applications because of its fast convergence. The control algorithm of the model-based WFSless system rests on an important theoretical result: the linear relation between the Mean-Square Gradient (MSG) magnitude of the wavefront aberration and the second moment of the masked intensity distribution in the focal plane (also called the Masked Detector Signal, MDS). The linear dependence between MSG and MDS for point-source imaging with a CCD sensor is discussed here through theory and simulation. The theoretical relationship between MSG and MDS is given based on our previous work. To verify the linear relation for the point source, we set up an imaging model under atmospheric turbulence. Additionally, detector noise causes the measured MDS to deviate from its theoretical value, and this deviation in turn affects the correction performance. The theoretical results under noise are obtained through derivation, and the linear relation between the noise-free MDS and the MDS under noise is then examined with the imaging model. Results show that this linear relation is maintained well under noise, which provides theoretical support for applications of the model-based WFSless system.

  16. Feature Extraction of Event-Related Potentials Using Wavelets: An Application to Human Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Shensa, Mark J.; Remington, Roger W. (Technical Monitor)

    1998-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance.

  17. Feature extraction of event-related potentials using wavelets: an application to human performance monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, L. J.; Shensa, M. J.

    1999-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.
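    The energy-compaction property that makes the DWT an efficient ERP representation is easy to demonstrate with a single-channel Haar transform in numpy. The studies above used decimated DWTs of real ERPs; the waveform here is a synthetic stand-in.

```python
import numpy as np

def haar_dwt(x):
    """Full Haar wavelet decomposition of a length-2^k signal (orthonormal,
    so total energy is preserved). Returns detail coefficients at every
    scale followed by the final approximation coefficient."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    while len(a) > 1:
        pairs = a.reshape(-1, 2)
        coeffs.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))  # details
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)             # approximation
    coeffs.append(a)
    return np.concatenate(coeffs)

t = np.linspace(0, 1, 256)
erp = np.exp(-((t - 0.3) / 0.05) ** 2)  # synthetic smooth "ERP-like" component
c = haar_dwt(erp)
energy = np.sort(c ** 2)[::-1]
top8 = energy[:8].sum() / energy.sum()  # fraction of energy in 8 coefficients
print(round(top8, 3))
```

    A handful of coefficients captures most of the waveform's energy, which is why truncated DWT expansions make compact regression features.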

  18. Magnetotransport in a Model of a Disordered Strange Metal

    NASA Astrophysics Data System (ADS)

    Patel, Aavishkar A.; McGreevy, John; Arovas, Daniel P.; Sachdev, Subir

    2018-04-01

    Despite much theoretical effort, there is no complete theory of the "strange" metal state of the high temperature superconductors, and its linear-in-temperature T resistivity. Recent experiments showing an unexpected linear-in-field B magnetoresistivity have deepened the puzzle. We propose a simple model of itinerant electrons, interacting via random couplings, with electrons localized on a lattice of "quantum dots" or "islands." This model is solvable in a particular large-N limit and can reproduce observed behavior. The key feature of our model is that the electrons in each quantum dot are described by a Sachdev-Ye-Kitaev model describing electrons without quasiparticle excitations. For a particular choice of the interaction between the itinerant and localized electrons, this model realizes a controlled description of a diffusive marginal-Fermi liquid (MFL) without momentum conservation, which has a linear-in-T resistivity and a T ln T specific heat as T →0 . By tuning the strength of this interaction relative to the bandwidth of the itinerant electrons, we can additionally obtain a finite-T crossover to a fully incoherent regime that also has a linear-in-T resistivity. We describe the magnetotransport properties of this model and show that the MFL regime has conductivities that scale as a function of B /T ; however, the magnetoresistance saturates at large B . We then consider a macroscopically disordered sample with domains of such MFLs with varying densities of electrons and islands. Using an effective-medium approximation, we obtain a macroscopic electrical resistance that scales linearly in the magnetic field B applied perpendicular to the plane of the sample, at large B . The resistance also scales linearly in T at small B , and as T f (B /T ) at intermediate B . We consider implications for recent experiments reporting linear transverse magnetoresistance in the strange metal phases of the pnictides and cuprates.

  19. Investigation of Periodic-Disturbance Identification and Rejection in Spacecraft

    DTIC Science & Technology

    2006-08-01

    linear theory. Therefore, it is of interest to examine its efficacy on the current nonlinear spacecraft model. In addition, the robustness of the...School, Monterey, California 93943 Spacecraft periodic-disturbance rejection using a realistic spacecraft hardware simulator and its associated models...is investigated. The effectiveness of the dipole-type disturbance rejection filter on the current realistic nonlinear rigid-body spacecraft model is

  20. Building out a Measurement Model to Incorporate Complexities of Testing in the Language Domain

    ERIC Educational Resources Information Center

    Wilson, Mark; Moore, Stephen

    2011-01-01

    This paper provides a summary of a novel and integrated way to think about the item response models (most often used in measurement applications in social science areas such as psychology, education, and especially testing of various kinds) from the viewpoint of the statistical theory of generalized linear and nonlinear mixed models. In addition,…

  1. Non-linear controls influence functions in an aircraft dynamics simulator

    NASA Technical Reports Server (NTRS)

    Guerreiro, Nelson M.; Hubbard, James E., Jr.; Motter, Mark A.

    2006-01-01

    In the development and testing of novel structural and controls concepts, such as morphing aircraft wings, appropriate models are needed for proper system characterization. In most instances, available system models do not provide the required additional degrees of freedom for morphing structures but may be modified to some extent to achieve a compatible system. The objective of this study is to apply wind tunnel data collected for an Unmanned Air Vehicle (UAV) that implements trailing edge morphing to create a non-linear dynamics simulator, using well-defined rigid-body equations of motion, in which the aircraft stability derivatives change with control deflection. An analysis of this wind tunnel data, using data extraction algorithms, was performed to determine the reference aerodynamic force and moment coefficients for the aircraft. Further, non-linear influence functions were obtained for each of the aircraft's control surfaces, including the sixteen trailing edge flap segments. These non-linear controls influence functions are applied to the aircraft dynamics to produce deflection-dependent aircraft stability derivatives in a non-linear dynamics simulator. Time domain analysis of the aircraft motion, trajectory, and state histories can be performed using these nonlinear dynamics and may be visualized using a 3-dimensional aircraft model. Linear system models can be extracted to facilitate frequency domain analysis of the system and for control law development. The results of this study are useful in similar projects where trailing edge morphing is employed and will be instrumental in the University of Maryland's continuing study of active wing load control.

  2. Dynamical modeling approach to risk assessment for radiogenic leukemia among astronauts engaged in interplanetary space missions.

    PubMed

    Smirnova, Olga A; Cucinotta, Francis A

    2018-02-01

    A recently developed biologically motivated dynamical model for assessing the excess relative risk (ERR) of radiogenic leukemia among acutely/continuously irradiated humans (Smirnova, 2015, 2017) is applied to estimate the ERR of radiogenic leukemia among astronauts engaged in long-term interplanetary space missions. Numerous scenarios of space radiation exposure during space missions are used in the modeling studies. The dependence of the ERR for leukemia among astronauts on several mission parameters is examined, including the dose equivalent rates of galactic cosmic rays (GCR) and large solar particle events (SPEs), the number of large SPEs, the time interval between SPEs, mission duration, the degree of astronauts' additional shielding during SPEs, the degree of their additional 12-hour daily shielding, and the total mission dose equivalent. The estimates of ERR for radiogenic leukemia among astronauts obtained in the framework of the developed dynamical model for various scenarios of space radiation exposure are compared with the corresponding results computed with the commonly used linear model. It is shown that the developed dynamical model, like the linear model, can be applied to estimate the ERR for radiogenic leukemia among astronauts engaged in long-term interplanetary space missions within the range of applicability of the latter. In turn, the developed dynamical model is capable of predicting the ERR for leukemia among astronauts for irradiation regimes beyond the applicability range of the linear model in emergency cases. As a supplement to the estimations of cancer incidence and death (REIC and REID) (Cucinotta et al., 2013, 2017), the developed dynamical model for the assessment of the ERR for leukemia can be employed in the pre-mission design phase for, e.g., the optimization of the regimes of astronauts' additional shielding in the course of interplanetary space missions.
The developed model can also be used during the real-time response phase of a space mission to support decisions on the operational application of appropriate countermeasures to minimize the risk of leukemia, especially in emergency cases. Copyright © 2017 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.

  3. Hierarchical additive modeling of nonlinear association with spatial correlations--an application to relate alcohol outlet density and neighborhood assault rates.

    PubMed

    Yu, Qingzhao; Li, Bin; Scribner, Richard Allen

    2009-06-30

    Previous studies have suggested a link between alcohol outlets and assaults. In this paper, we explore the effects of alcohol availability on assaults at the census tract level over time. In addition, we use a natural experiment to check whether a sudden loss of alcohol outlets is associated with a steeper decrease in assault violence. Several features of the data raise statistical challenges: (1) the association between covariates (for example, the alcohol outlet density of each census tract) and the assault rates may be complex and therefore cannot be described using a linear model without covariate transformation, (2) the covariates may be highly correlated with each other, (3) a number of observations have missing inputs, and (4) there is spatial association in assault rates at the census tract level. We propose a hierarchical additive model, in which the nonlinear correlations and complex interaction effects are modeled using multiple additive regression trees, and the residual spatial association in the assault rates that cannot be explained by the model is smoothed using a conditional autoregressive (CAR) method. We develop a two-stage algorithm that connects the nonparametric trees with CAR to look for important covariates associated with the assault rates, while taking into account the spatial association of assault rates in adjacent census tracts. The proposed method is applied to the Los Angeles assault data (1990-1999). To assess the efficiency of the method, the results are compared with those obtained from a hierarchical linear model. Copyright (c) 2009 John Wiley & Sons, Ltd.

  4. Modeling exposure–lag–response associations with distributed lag non-linear models

    PubMed Central

    Gasparrini, Antonio

    2014-01-01

    In biomedical research, a health effect is frequently associated with protracted exposures of varying intensity sustained in the past. The main complexity of modeling and interpreting such phenomena lies in the additional temporal dimension needed to express the association, as the risk depends on both intensity and timing of past exposures. This type of dependency is defined here as exposure–lag–response association. In this contribution, I illustrate a general statistical framework for such associations, established through the extension of distributed lag non-linear models, originally developed in time series analysis. This modeling class is based on the definition of a cross-basis, obtained by the combination of two functions to flexibly model linear or nonlinear exposure-responses and the lag structure of the relationship, respectively. The methodology is illustrated with an example application to cohort data and validated through a simulation study. This modeling framework generalizes to various study designs and regression models, and can be applied to study the health effects of protracted exposures to environmental factors, drugs or carcinogenic agents, among others. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24027094
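    The lag dimension of such models can be sketched in numpy with an unconstrained distributed-lag design matrix, i.e. the simplest possible basis in the lag direction combined with a linear exposure-response. The full cross-basis of the paper combines flexible (e.g. spline) bases in both dimensions, as implemented in Gasparrini's dlnm R package; the data below are simulated.

```python
import numpy as np

def lag_matrix(x, max_lag):
    """Design matrix whose column l holds the exposure lagged by l time
    steps (the first max_lag rows are dropped, so no padding is needed)."""
    n = len(x)
    return np.column_stack([x[max_lag - l : n - l] for l in range(max_lag + 1)])

rng = np.random.default_rng(3)
x = rng.normal(size=1000)                             # exposure time series
true_lag_curve = np.array([1.0, 0.6, 0.3, 0.1, 0.0])  # effect at lags 0..4
L = lag_matrix(x, 4)
y = L @ true_lag_curve + 0.1 * rng.normal(size=L.shape[0])
est, *_ = np.linalg.lstsq(L, y, rcond=None)           # recover the lag-response
print(np.round(est, 2))
```

    Ordinary least squares on the lagged design matrix recovers the decaying lag-response curve; a spline basis in the lag direction would smooth and regularize the same estimate.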

  5. Prediction models for CO2 emission in Malaysia using best subsets regression and multi-linear regression

    NASA Astrophysics Data System (ADS)

    Tan, C. H.; Matjafri, M. Z.; Lim, H. S.

    2015-10-01

    This paper presents prediction models that analyze and compute CO2 emissions in Malaysia. Each prediction model for CO2 emissions is analyzed for three main groups: transportation; electricity and heat production; and residential buildings together with commercial and public services. The prediction models were generated using data obtained from World Bank Open Data. The best-subsets method is used to remove irrelevant predictors, followed by multiple linear regression to produce the prediction models. From the results, high R-squared (prediction) values were obtained, which implies that the models can reliably predict CO2 emissions from the specified data. In addition, the CO2 emissions from these three groups are forecast using trend analysis plots for observation purposes.
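    A minimal version of the best-subsets-then-regression pipeline can be sketched with itertools and numpy: exhaustively fit every predictor subset and keep the one with the best adjusted R-squared. The predictor names and data here are hypothetical stand-ins, not the World Bank series used in the study.

```python
import itertools
import numpy as np

def best_subset(X, y, names):
    """Exhaustive best-subsets linear regression scored by adjusted R^2.
    Feasible only for small predictor counts (2^p - 1 candidate fits)."""
    n = len(y)
    tss = ((y - y.mean()) ** 2).sum()
    best_score, best_names = -np.inf, ()
    for k in range(1, X.shape[1] + 1):
        for idx in itertools.combinations(range(X.shape[1]), k):
            Z = np.column_stack([np.ones(n), X[:, idx]])  # intercept + subset
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = ((y - Z @ beta) ** 2).sum()
            adj_r2 = 1.0 - (rss / (n - k - 1)) / (tss / (n - 1))
            if adj_r2 > best_score:
                best_score, best_names = adj_r2, tuple(names[i] for i in idx)
    return best_score, best_names

# Hypothetical stand-in predictor groups (illustration only)
rng = np.random.default_rng(4)
n = 300
X = rng.normal(size=(n, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.5 * rng.normal(size=n)
score, chosen = best_subset(X, y, ["transport", "electricity", "residential", "other"])
print(chosen)
```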

  6. Effect of Mantle Rheology on Viscous Heating induced during Ice Sheet Cycles

    NASA Astrophysics Data System (ADS)

    Huang, Pingping; Wu, Patrick; van der Wal, Wouter

    2017-04-01

    Hanyk et al. (2005) studied the viscous shear heating in the mantle induced by the surface loading and unloading of a parabolic, Laurentide-size ice sheet. They found that for linear rheology, viscous heating is mainly concentrated below the ice sheet, with the depth extent of the heating in the mantle determined by the viscosity distribution. Also, the magnitude of viscous heating is significantly affected by the rate of ice-thickness change. However, only one ice sheet was considered in their work, the interactions between ice sheets and ocean loading were neglected, and only linear rheology was treated, although they suggested that non-Newtonian rheology may have a stronger effect. Here we follow Hanyk et al. (2005) and compute the viscous dissipation for viscoelastic models using the finite element methodology of Wu (2004) and van der Wal et al. (2010). However, the global ICE6G model (Peltier et al. 2015) with realistic oceans is used here to provide the surface loading. In addition, viscous heating is computed and compared for non-linear and composite rheologies as well as for linear rheology with a uniform or VM5a viscosity profile. Our results for linear rheology mainly confirm the findings of Hanyk et al. (2005). For both non-linear and composite rheologies, viscous heating is also mainly distributed near and under the ice sheets, but it is more concentrated; depending on the horizontal dimension of the ice sheet, it can extend into the lower mantle, though for some of the time not as deep as for linear rheology. For composite rheology, the viscous heating is dominated by the effect of the non-linear relation between stress and strain. The ice history controls when the local maximum in viscous heating appears, but the magnitude of the viscous heating is affected by mantle rheology as well as the ice loading.
Due to viscosity stratification, the shape of the region with high viscous heating in model VM5a is somewhat more irregular than that for the uniform-viscosity model. However, peak heating in the VM5a model is as large as 22.5 times the chondritic radiogenic heating, much larger than that for linear rheology with uniform viscosity (3.95 times the chondritic radiogenic heating), the non-linear rheology model (10.14 times) and the composite rheology model (10.04 times). Applications of viscous heating will also be discussed. References: Hanyk, L., Matyska, C., & Yuen, D. A. (2005). Short time-scale heating of the Earth's mantle by ice-sheet dynamics. Earth, Planets and Space, 57(9), 895-902. Wu, P. (2004). Using commercial finite element packages for the study of earth deformations, sea levels and the state of stress. Geophysical Journal International, 158(2), 401-408. Van der Wal, W., Wu, P., Wang, H., & Sideris, M. G. (2010). Sea levels and uplift rate from composite rheology in glacial isostatic adjustment modeling. J. Geod., 50, 38-48. Peltier, W., Argus, D., & Drummond, R. (2015). Space geodesy constrains ice age terminal deglaciation: The global ICE-6G_C (VM5a) model. Journal of Geophysical Research: Solid Earth, 120(1), 450-487.

  7. Protein linear indices of the 'macromolecular pseudograph alpha-carbon atom adjacency matrix' in bioinformatics. Part 1: prediction of protein stability effects of a complete set of alanine substitutions in Arc repressor.

    PubMed

    Marrero-Ponce, Yovani; Medina-Marrero, Ricardo; Castillo-Garit, Juan A; Romero-Zaldivar, Vicente; Torrens, Francisco; Castro, Eduardo A

    2005-04-15

    A novel approach to bio-macromolecular design from a linear algebra point of view is introduced. A protein's total (whole-protein) and local (one or more amino acids) linear indices are a new set of bio-macromolecular descriptors of relevance to protein QSAR/QSPR studies. These amino-acid-level biochemical descriptors are based on the calculation of linear maps on R^n [f_k(x_mi): R^n --> R^n] in the canonical basis. These bio-macromolecular indices are calculated from the kth power of the macromolecular pseudograph alpha-carbon atom adjacency matrix. Total linear indices are linear functionals on R^n; that is, the kth total linear index is a linear map from R^n to the scalars [f_k(x_m): R^n --> R]. Thus, the kth total linear index is calculated by summing the amino-acid linear indices of all amino acids in the protein molecule. A study of the protein stability effects for a complete set of alanine substitutions in the Arc repressor illustrates this approach. A quantitative model that discriminates near wild-type-stability alanine mutants from reduced-stability ones in a training series was obtained. This model permitted the correct classification of 97.56% (40/41) and 91.67% (11/12) of proteins in the training and test sets, respectively. It shows a high Matthews correlation coefficient (MCC = 0.952) for the training set and MCC = 0.837 for the external prediction set. Additionally, canonical regression analysis corroborated the statistical quality of the classification model (Rcanc = 0.824). This analysis was also used to compute biological stability canonical scores for each Arc alanine mutant. On the other hand, the piecewise linear regression model compared favorably with the linear regression model in predicting the melting temperature (tm) of the Arc alanine mutants. The linear model explains almost 81% of the variance of the experimental tm (R = 0.90 and s = 4.29), and the LOO press statistics evidenced its predictive ability (q2 = 0.72 and scv = 4.79).
Moreover, the TOMOCOMD-CAMPS method produced a piecewise linear regression (R = 0.97) between protein backbone descriptors and tm values for alanine mutants of the Arc repressor. A break-point value of 51.87 degrees C characterized two mutant clusters and coincided perfectly with the experimental scale. For this reason, linear discriminant analysis and piecewise regression models can be used in combination to classify and predict the stability of the mutant Arc homodimers. These models also permitted interpretation of the driving forces of the folding process, indicating that topologic/topographic protein backbone interactions control the stability profile of wild-type Arc and its alanine mutants.
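    The core computation behind such indices, local descriptors as the entries of the kth power of an adjacency matrix applied to a property vector, with the total index as their sum, can be sketched in a few lines of numpy. The graph and property values below are toy stand-ins, not the macromolecular pseudograph or the amino-acid weights used in the paper.

```python
import numpy as np

def linear_indices(A, x, k):
    """Local kth linear indices as the entries of A^k @ x, where A is the
    (pseudo)graph adjacency matrix and x a vector of node properties;
    the kth total linear index is their sum."""
    local = np.linalg.matrix_power(A, k) @ x
    return local, local.sum()

# Toy 4-node path graph standing in for an alpha-carbon pseudograph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 2.0, 1.5, 0.5])  # hypothetical amino-acid property values
local, total = linear_indices(A, x, 2)
print(local, total)
```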

  8. Minimizing bias in biomass allometry: Model selection and log transformation of data

    Treesearch

    Joseph Mascaro; undefined undefined; Flint Hughes; Amanda Uowolo; Stefan A. Schnitzer

    2011-01-01

    Nonlinear regression is increasingly used to develop allometric equations for forest biomass estimation (i.e., as opposed to the traditional approach of log-transformation followed by linear regression). Most statistical software packages, however, assume additive errors by default, violating a key assumption of allometric theory and possibly producing spurious models....

  9. Progress Toward Improving Jet Noise Predictions in Hot Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Kenzakowski, Donald C.

    2007-01-01

    An acoustic analogy methodology for improving noise predictions in hot round jets is presented. Past approaches have often neglected the impact of temperature fluctuations on the predicted sound spectral density, which could be significant for heated jets, and this has yielded noticeable acoustic under-predictions in such cases. The governing acoustic equations adopted here are a set of linearized, inhomogeneous Euler equations. These equations are combined into a single third order linear wave operator when the base flow is considered as a locally parallel mean flow. The remaining second-order fluctuations are regarded as the equivalent sources of sound and are modeled. It is shown that the hot jet effect may be introduced primarily through a fluctuating velocity/enthalpy term. Modeling this additional source requires specialized inputs from a RANS-based flowfield simulation. The information is supplied using an extension to a baseline two equation turbulence model that predicts total enthalpy variance in addition to the standard parameters. Preliminary application of this model to a series of unheated and heated subsonic jets shows significant improvement in the acoustic predictions at the 90 degree observer angle.

  10. Analysis of 2D THz-Raman spectroscopy using a non-Markovian Brownian oscillator model with nonlinear system-bath interactions.

    PubMed

    Ikeda, Tatsushi; Ito, Hironobu; Tanimura, Yoshitaka

    2015-06-07

    We explore and describe the roles of inter-molecular vibrations employing a Brownian oscillator (BO) model with linear-linear (LL) and square-linear (SL) system-bath interactions, which we use to analyze two-dimensional (2D) THz-Raman spectra obtained by means of molecular dynamics (MD) simulations. In addition to linear infrared absorption (1D IR), we calculated 2D Raman-THz-THz, THz-Raman-THz, and THz-THz-Raman signals for liquid formamide, water, and methanol using an equilibrium-non-equilibrium hybrid MD simulation. The calculated 1D IR and 2D THz-Raman signals are compared with results obtained from the LL+SL BO model applied through use of hierarchical Fokker-Planck equations with non-perturbative and non-Markovian noise. We find that all of the qualitative features of the 2D profiles of the signals obtained from the MD simulations are reproduced with the LL+SL BO model, indicating that this model captures the essential features of the inter-molecular motion. We analyze the fitted 2D profiles in terms of anharmonicity, nonlinear polarizability, and dephasing time. The origins of the echo peaks of the librational motion and the elongated peaks parallel to the probe direction are elucidated using optical Liouville paths.

  11. Passive dendrites enable single neurons to compute linearly non-separable functions.

    PubMed

    Cazé, Romain Daniel; Humphries, Mark; Gutkin, Boris

    2013-01-01

    Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic sub-units, or only passive dendrites where input summation is purely sub-linear, and where dendritic sub-units are only saturating. To determine if such neurons can also compute linearly non-separable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear sub-unit and either a single spiking or a saturating dendritic sub-unit. We then analytically generalize these numerical results to an arbitrary number of non-linear sub-units. First, we show that a single non-linear dendritic sub-unit, in addition to the somatic non-linearity, is sufficient to compute linearly non-separable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic sub-units, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly non-separable functions can be implemented with at least two strategies: one where a dendritic sub-unit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic sub-units. We formally prove that implementing the latter architecture is possible with both types of dendritic sub-units whereas the former is only possible with spiking dendrites. Finally, we show how linearly non-separable functions can be computed by a generic two-compartment biophysical model and a realistic neuron model of the cerebellar stellate cell interneuron. 
Taken together our results demonstrate that passive dendrites are sufficient to enable neurons to compute linearly non-separable functions.
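
    The saturating-sub-unit argument can be illustrated with a toy binary neuron (a sketch under assumed parameters, not the authors' enumeration): two purely saturating dendritic sub-units suffice to compute a linearly non-separable Boolean function.

    ```python
    from itertools import product

    # Two saturating dendritic sub-units feeding a somatic threshold implement
    # f = (x1 OR x2) AND (x3 OR x4), which no single linear threshold unit can
    # compute. Saturation (min) stands in for sub-linear summation.
    def neuron(x1, x2, x3, x4):
        s1 = min(x1 + x2, 1)          # saturating dendritic sub-unit
        s2 = min(x3 + x4, 1)
        return int(s1 + s2 >= 2)      # somatic threshold

    table = {x: neuron(*x) for x in product([0, 1], repeat=4)}

    # Non-separability: (1,1,0,0) and (0,0,1,1) must fall below any linear
    # threshold, yet (1,0,1,0) and (0,1,0,1), whose summed input vector is the
    # same, must exceed it -- impossible for a single weighted sum.
    assert table[(1, 1, 0, 0)] == 0 and table[(0, 0, 1, 1)] == 0
    assert table[(1, 0, 1, 0)] == 1 and table[(0, 1, 0, 1)] == 1
    print(sum(table.values()))        # prints 9: number of patterns that fire
    ```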

  13. Does the high-tech industry consistently reduce CO₂ emissions? Results from nonparametric additive regression model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Bin; Research Center of Applied Statistics, Jiangxi University of Finance and Economics, Nanchang, Jiangxi 330013; Lin, Boqiang, E-mail: bqlin@xmu.edu.cn

    China is currently the world's largest carbon dioxide (CO₂) emitter. Moreover, total energy consumption and CO₂ emissions in China will continue to increase due to the rapid growth of industrialization and urbanization. Therefore, vigorously developing the high-tech industry becomes an inevitable choice to reduce CO₂ emissions at the moment or in the future. However, ignoring the existing nonlinear links between economic variables, most scholars use traditional linear models to explore the impact of the high-tech industry on CO₂ emissions from an aggregate perspective. Few studies have focused on nonlinear relationships and regional differences in China. Based on panel data of 1998–2014, this study uses the nonparametric additive regression model to explore the nonlinear effect of the high-tech industry from a regional perspective. The estimated results show that the residual sum of squares (SSR) of the nonparametric additive regression model in the eastern, central and western regions are 0.693, 0.054 and 0.085 respectively, which are much smaller than those of the traditional linear regression model (3.158, 4.227 and 7.196). This verifies that the nonparametric additive regression model has a better fitting effect. Specifically, the high-tech industry produces an inverted “U-shaped” nonlinear impact on CO₂ emissions in the eastern region, but a positive “U-shaped” nonlinear effect in the central and western regions. Therefore, the nonlinear impact of the high-tech industry on CO₂ emissions in the three regions should be given adequate attention in developing effective abatement policies. - Highlights: • The nonlinear effect of the high-tech industry on CO₂ emissions was investigated. • The high-tech industry yields an inverted “U-shaped” effect in the eastern region. • The high-tech industry has a positive “U-shaped” nonlinear effect in other regions. • The linear impact of the high-tech industry in the eastern region is the strongest.
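
    As a rough illustration of why an additive model can fit an inverted U-shape that a linear model misses, here is a minimal backfitting estimator with a kernel smoother on synthetic data (the variable names, smoother, and bandwidth are assumptions, not the study's estimator):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic data with an inverted-U effect in x1 and a linear effect in x2.
    n = 300
    x1 = rng.uniform(-2, 2, n)          # e.g. high-tech industry share
    x2 = rng.uniform(-2, 2, n)          # e.g. urbanization level
    y = -(x1**2) + 0.5 * x2 + rng.normal(0, 0.2, n)

    def smooth(x, r, bw=0.4):
        """Nadaraya-Watson kernel smoother of residuals r against x."""
        w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bw) ** 2)
        return (w * r[None, :]).sum(axis=1) / w.sum(axis=1)

    # Backfitting: alternately re-estimate each additive component function.
    f1 = np.zeros(n); f2 = np.zeros(n); mu = y.mean()
    for _ in range(20):
        f1 = smooth(x1, y - mu - f2); f1 -= f1.mean()
        f2 = smooth(x2, y - mu - f1); f2 -= f2.mean()
    ssr_additive = ((y - mu - f1 - f2) ** 2).sum()

    # Ordinary linear regression for comparison.
    X = np.column_stack([np.ones(n), x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ssr_linear = ((y - X @ beta) ** 2).sum()
    print(ssr_additive, ssr_linear)
    ```

    The additive fit captures the curvature in x1 and so its SSR is far smaller, mirroring the SSR comparison reported in the abstract.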

  14. Accelerated Testing and Modeling of Potential-Induced Degradation as a Function of Temperature and Relative Humidity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hacke, Peter; Spataru, Sergiu; Terwilliger, Kent

    2015-06-14

    An acceleration model based on the Peck equation was applied to power performance of crystalline silicon cell modules as a function of time and of temperature and humidity, the two main environmental stress factors that promote potential-induced degradation. This model was derived from module power degradation data obtained semi-continuously and statistically by in-situ dark current-voltage measurements in an environmental chamber. The modeling enables prediction of degradation rates and times as functions of temperature and humidity. Power degradation could be modeled linearly as a function of time to the second power; additionally, we found that coulombs transferred from the active cell circuit to ground during the stress test is approximately linear with time. Therefore, the power loss could be linearized as a function of coulombs squared. With this result, we observed that when the module face was completely grounded with a condensed phase conductor, leakage current exceeded the anticipated corresponding degradation rate relative to the other tests performed in damp heat.
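
    The Peck equation referred to above combines a humidity power law with an Arrhenius temperature term. A small sketch of the resulting acceleration factor (the exponent n and activation energy Ea below are illustrative placeholders, not the values fitted in this study):

    ```python
    import numpy as np

    # Peck-type acceleration factor for combined humidity/temperature stress.
    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def peck_af(rh_test, t_test_c, rh_use, t_use_c, n=2.0, ea=0.7):
        """AF = (RH_test/RH_use)^n * exp(Ea/k * (1/T_use - 1/T_test))."""
        t_test = t_test_c + 273.15
        t_use = t_use_c + 273.15
        return (rh_test / rh_use) ** n * np.exp(ea / K_B * (1 / t_use - 1 / t_test))

    # Scale 1000 chamber-hours at 85 C / 85% RH to field-equivalent years
    # at assumed use conditions of 25 C / 50% RH.
    af = peck_af(85, 85, 50, 25)
    t_field_years = 1000 / 8760 * af
    print(af, t_field_years)
    ```

    With these placeholder parameters the damp-heat test is accelerated by a factor of a few hundred relative to the assumed field conditions.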

  15. Genetic evaluation of calf and heifer survival in Iranian Holstein cattle using linear and threshold models.

    PubMed

    Forutan, M; Ansari Mahyari, S; Sargolzaei, M

    2015-02-01

    Calf and heifer survival are important traits in dairy cattle affecting profitability. This study was carried out to estimate genetic parameters of survival traits in female calves at different age periods, until nearly the first calving. Records of 49,583 female calves born between 1998 and 2009 were considered in five age periods as days 1-30, 31-180, 181-365, 366-760 and full period (day 1-760). Genetic components were estimated based on linear and threshold sire models and linear animal models. The models included both fixed effects (month of birth, dam's parity number, calving ease and twin/single) and random effects (herd-year, genetic effect of sire or animal and residual). Rates of death were 2.21, 3.37, 1.97, 4.14 and 12.4% for the above periods, respectively. Heritability estimates were very low, ranging from 0.48 to 3.04, 0.62 to 3.51 and 0.50 to 4.24% for linear sire model, animal model and threshold sire model, respectively. Rank correlations between random effects of sires obtained with linear and threshold sire models and with linear animal and sire models were 0.82-0.95 and 0.61-0.83, respectively. The estimated genetic correlations between the five different periods were moderate and only significant for 31-180 and 181-365 (r(g) = 0.59), 31-180 and 366-760 (r(g) = 0.52), and 181-365 and 366-760 (r(g) = 0.42). The low genetic correlations in the current study would suggest that survival at different periods may be affected by the same genes with different expression or by different genes. Even though the additive genetic variations of survival traits were small, it might be possible to improve these traits by traditional or genomic selection. © 2014 Blackwell Verlag GmbH.
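
    One way to relate observed-scale (linear-model) and liability-scale heritabilities for such a binary survival trait is the standard Dempster-Lerner transformation; the sketch below applies it to the full-period figures from the record. This is a generic illustration of the transformation, not necessarily the authors' procedure.

    ```python
    from math import sqrt, pi, exp, erf

    def liability_h2(h2_obs, p):
        """Dempster-Lerner: observed-scale h2 to liability scale, p = incidence."""
        # Find the normal threshold t with P(X > t) = p by bisection on the CDF.
        lo, hi = -6.0, 6.0
        for _ in range(80):
            mid = (lo + hi) / 2
            tail = 0.5 * (1 - erf(mid / sqrt(2)))
            if tail > p:
                lo = mid
            else:
                hi = mid
        t = (lo + hi) / 2
        z = exp(-t * t / 2) / sqrt(2 * pi)   # normal density at the threshold
        return h2_obs * p * (1 - p) / z**2

    # Full-period death rate 12.4% and observed h2 of about 3% (from the record):
    h2_liab = liability_h2(0.03, 0.124)
    print(round(h2_liab, 4))
    ```

    For these figures the liability-scale heritability comes out around 8%, illustrating why threshold-model estimates exceed linear-model ones for rare binary outcomes.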

  16. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. 
© 2013 WILEY PERIODICALS, INC.

  17. Features in visual search combine linearly

    PubMed Central

    Pramod, R. T.; Arun, S. P.

    2014-01-01

    Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocal of reaction times) for their ability to predict multiple feature searches. Multiple feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features—in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
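
    The co-activation test described above amounts to regressing reciprocal reaction times of the multi-feature search on those of the single-feature searches. A minimal sketch with made-up RT data (the weights and noise level are assumptions for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Made-up reciprocal reaction times (1/RT) for two single-feature searches.
    inv_rt_a = rng.uniform(0.5, 2.0, 40)      # e.g. intensity-only search
    inv_rt_b = rng.uniform(0.5, 2.0, 40)      # e.g. length-only search
    w1, w2 = 0.6, 0.4                          # assumed true combination weights
    inv_rt_both = (w1 * inv_rt_a + w2 * inv_rt_b
                   + rng.normal(0, 0.02, 40))  # multi-feature search

    # Linear co-activation model: recover the weights by least squares.
    X = np.column_stack([inv_rt_a, inv_rt_b])
    w_hat, *_ = np.linalg.lstsq(X, inv_rt_both, rcond=None)
    r = np.corrcoef(X @ w_hat, inv_rt_both)[0, 1]
    print(w_hat, r)
    ```

    A high correlation between predicted and observed 1/RT, as reported in the abstract, is the signature of linear co-activation.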

  18. A performance model for GPUs with caches

    DOE PAGES

    Dao, Thanh Tuan; Kim, Jungwon; Seo, Sangmin; ...

    2014-06-24

    To exploit the abundant computational power of the world's fastest supercomputers, an even workload distribution to the typically heterogeneous compute devices is necessary. While relatively accurate performance models exist for conventional CPUs, accurate performance estimation models for modern GPUs do not exist. This paper presents two accurate models for modern GPUs: a sampling-based linear model, and a model based on machine-learning (ML) techniques which improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies we determine the earliest sampling points that allow an accurate estimation. The linear model cannot capture well the significant effects that memory coalescing or caching as implemented in modern GPUs have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime performance estimation for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three different NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, on average, the proposed model estimates the runtime performance with less than 7 percent error for a second-generation GTX 280 with no on-chip caches and less than 5 percent for the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent. On an AMD GPU architecture, Radeon HD 6970, the model estimates runtime with an error of about 8 percent. As a result, the proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.

  19. Stochastic models for atomic clocks

    NASA Technical Reports Server (NTRS)

    Barnes, J. A.; Jones, R. H.; Tryon, P. V.; Allan, D. W.

    1983-01-01

    For the atomic clocks used in the National Bureau of Standards Time Scales, an adequate model is the superposition of white FM, random walk FM, and linear frequency drift for times longer than about one minute. The model was tested on several clocks using maximum likelihood techniques for parameter estimation and the residuals were acceptably random. Conventional diagnostics indicate that additional model elements contribute no significant improvement to the model even at the expense of the added model complexity.
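
    The three-component model can be simulated directly, and the Allan variance conventionally used in clock characterization recovers the expected 1/tau behaviour of white FM at short averaging times (the noise amplitudes below are arbitrary illustrative values, not NBS clock parameters):

    ```python
    import numpy as np

    # Clock model: white FM + random-walk FM + linear frequency drift.
    rng = np.random.default_rng(3)
    n, tau0 = 10000, 1.0                 # samples, base interval (s)

    white_fm = 1e-12 * rng.normal(size=n)
    rw_fm = 1e-15 * np.cumsum(rng.normal(size=n))
    drift = 1e-16 * np.arange(n)
    y = white_fm + rw_fm + drift         # fractional frequency series

    # Phase (time) error is the running integral of fractional frequency.
    x = np.cumsum(y) * tau0

    def avar(y, m):
        """Non-overlapping Allan variance at averaging time m * tau0."""
        yb = y[: len(y) // m * m].reshape(-1, m).mean(axis=1)
        return 0.5 * np.mean(np.diff(yb) ** 2)

    # White FM dominates short averaging times: Allan variance falls as 1/tau.
    a1, a10 = avar(y, 1), avar(y, 10)
    print(a1, a10, a1 / a10)
    ```

    At these amplitudes the random walk and drift terms only matter at long averaging times, so the variance ratio between tau = 1 s and tau = 10 s is close to 10.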

  20. Linear Modeling and Evaluation of Controls on Flow Response in Western Post-Fire Watersheds

    NASA Astrophysics Data System (ADS)

    Saxe, S.; Hogue, T. S.; Hay, L.

    2015-12-01

    This research investigates the impact of wildfires on watershed flow regimes throughout the western United States, specifically focusing on evaluation of fire events within specified subregions and determination of the impact of climate and geophysical variables in post-fire flow response. Fire events were collected through federal and state-level databases and streamflow data were collected from U.S. Geological Survey stream gages. 263 watersheds were identified with at least 10 years of continuous pre-fire daily streamflow records and 5 years of continuous post-fire daily flow records. For each watershed, percent changes in runoff ratio (RO), annual seven day low-flows (7Q2) and annual seven day high-flows (7Q10) were calculated from pre- to post-fire. Numerous independent variables were identified for each watershed and fire event, including topographic, land cover, climate, burn severity, and soils data. The national watersheds were divided into five regions through K-clustering and a lasso linear regression model, applying the Leave-One-Out calibration method, was calculated for each region. Nash-Sutcliffe Efficiency (NSE) was used to determine the accuracy of the resulting models. The regions encompassing the United States along and west of the Rocky Mountains, excluding the coastal watersheds, produced the most accurate linear models. The Pacific coast region models produced poor and inconsistent results, indicating that the regions need to be further subdivided. Presently, the runoff ratio (RO) and high-flow (7Q10) response variables appear to be more easily modeled than the low flows (7Q2). Results of linear regression modeling showed varying importance of watershed and fire event variables, with conflicting correlation between land cover types and soil types by region. The addition of further independent variables and constriction of current variables based on correlation indicators is ongoing and should allow for more accurate linear regression modeling.
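
    The evaluation loop described above, leave-one-out calibration scored by Nash-Sutcliffe Efficiency, can be sketched as follows (synthetic data; ordinary least squares stands in for the lasso fit, and the predictor names are placeholders for the watershed variables):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 60
    X = np.column_stack([np.ones(n),
                         rng.normal(size=n),     # e.g. burn severity
                         rng.normal(size=n)])    # e.g. mean annual precipitation
    y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(0, 0.5, n)

    # Leave-one-out calibration: refit with each watershed held out in turn.
    pred = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        pred[i] = X[i] @ beta

    # Nash-Sutcliffe Efficiency: 1 is perfect, 0 is no better than the mean.
    nse = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    print(nse)
    ```

    NSE compares held-out prediction error against simply predicting the mean response, which is why it is a natural score for regional model comparison.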

  1. A three operator split-step method covering a larger set of non-linear partial differential equations

    NASA Astrophysics Data System (ADS)

    Zia, Haider

    2017-06-01

    This paper describes an updated exponential Fourier based split-step method that can be applied to a greater class of partial differential equations than previous methods would allow. These equations arise in physics and engineering, a notable example being the generalized derivative non-linear Schrödinger equation that arises in non-linear optics with self-steepening terms. These differential equations feature terms that were previously inaccessible to model accurately with low computational resources. The new method maintains a 3rd order error even with these additional terms and models the equation in all three spatial dimensions and time. The class of non-linear differential equations that this method applies to is shown. The method is fully derived and implementation of the method in the split-step architecture is shown. This paper lays the mathematical ground work for an upcoming paper employing this method in white-light generation simulations in bulk material.
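
    For context, the classic two-operator symmetric split-step Fourier scheme that the paper generalizes can be written very compactly. The sketch below integrates a 1D cubic Schrödinger equation, i u_t = i u_xx - i|u|^2 u in this form, which is far simpler than the generalized derivative equation treated in the paper, and checks norm conservation:

    ```python
    import numpy as np

    # Symmetric (Strang) split-step Fourier: half linear step in k-space,
    # full nonlinear step in x-space, half linear step again.
    n, L, dt, steps = 256, 40.0, 1e-3, 2000
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

    u = 1.0 / np.cosh(x)                         # smooth initial pulse
    half_linear = np.exp(-1j * k**2 * dt / 2)    # dispersion half-step

    for _ in range(steps):
        u = np.fft.ifft(half_linear * np.fft.fft(u))
        u = u * np.exp(-1j * np.abs(u) ** 2 * dt)   # nonlinear phase rotation
        u = np.fft.ifft(half_linear * np.fft.fft(u))

    # Each sub-step is unitary, so the L2 norm (here 2 for sech(x)) is conserved.
    norm = (np.abs(u) ** 2).sum() * (L / n)
    print(norm)
    ```

    The three-operator method in the paper follows the same architecture but inserts an additional exponential operator per step to handle terms such as self-steepening.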

  2. Structure-Property Relationships of Silicone Biofouling-Release Coatings: Effect of Silicone Network Architecture on Pseudobarnacle Attachment Strengths

    DTIC Science & Technology

    2003-01-01

    ambient conditions prior to testing. A masterbatch for hydrosilylation-curable model systems was prepared by combining 200 g of hexamethydisilazane treated...fumed silica and 800 g of vinylterminated polydimethylsiloxane (equivalent weight ¼ 4111). The masterbatch was combined with additional vinyl polymer...followed by 10ml of Karstedt’s catalyst (10.9% Pt, 4.8mmol Pt). The amounts of masterbatch , linear vinyl, linear hydride, and crosslinkable hydride

  3. Nonlinear Dynamic Models in Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2002-01-01

    To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
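
    The claim that a two-state linear model already exhibits stability, oscillation, and exponential growth or decay follows from the eigenvalues of its state matrix, as this small sketch shows (the matrices are chosen for illustration only):

    ```python
    import numpy as np

    # For dx/dt = A x, the eigenvalues of A determine the qualitative behaviour.
    def behaviour(A):
        eig = np.linalg.eigvals(A)
        re, im = eig.real.max(), np.abs(eig.imag).max()
        if im > 0 and np.allclose(eig.real, 0):
            return "sustained oscillation"
        if im > 0 and re < 0:
            return "damped oscillation"
        if re < 0:
            return "exponential decay (stable)"
        return "exponential growth (unstable)"

    print(behaviour(np.array([[0., 1.], [-1., 0.]])))     # harmonic oscillator
    print(behaviour(np.array([[-0.1, 1.], [-1., -0.1]]))) # damped spiral
    print(behaviour(np.array([[-1., 0.], [0., -2.]])))    # stable node
    print(behaviour(np.array([[0.5, 0.], [0., -1.]])))    # unstable saddle
    ```

    Purely imaginary eigenvalues give sustained oscillation, negative real parts give decay, and any positive real part gives growth; nonlinear behaviour such as limit cycles or chaos requires going beyond this linear form.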

  4. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1982-01-01

    The feasibility of modeling magnetic fields due to certain electrical currents flowing in the Earth's ionosphere and magnetosphere was investigated. A method was devised to carry out forward modeling of the magnetic perturbations that arise from space currents. The procedure utilizes a linear current element representation of the distributed electrical currents. The finite thickness elements are combined into loops which are in turn combined into cells having their base in the ionosphere. In addition to the extensive field modeling, additional software was developed for the reduction and analysis of the MAGSAT data in terms of the external current effects. Direct comparisons between the models and the MAGSAT data are possible.
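
    Forward modeling with linear current elements is essentially a Biot-Savart sum over finite segments. A minimal sketch (a simple circular loop rather than ionospheric current cells, checked against the analytic field at the loop center):

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

    def b_field(seg_starts, seg_ends, current, point):
        """Biot-Savart sum over finite linear current elements."""
        mid = 0.5 * (seg_starts + seg_ends)
        dl = seg_ends - seg_starts
        r = point - mid
        rn = np.linalg.norm(r, axis=1, keepdims=True)
        db = MU0 * current / (4 * np.pi) * np.cross(dl, r) / rn**3
        return db.sum(axis=0)

    # Circular loop of radius 1 m carrying 1 A, approximated by 360 elements.
    n, R, I = 360, 1.0, 1.0
    ang = np.linspace(0, 2 * np.pi, n + 1)
    pts = np.column_stack([R * np.cos(ang), R * np.sin(ang), np.zeros(n + 1)])
    B = b_field(pts[:-1], pts[1:], I, np.zeros(3))

    print(B[2], MU0 * I / (2 * R))   # numeric vs analytic center field
    ```

    Combining such loops into grid cells, as the abstract describes, is then a matter of summing the fields of many element loops at each observation point.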

  5. Using an external surrogate for predictor model training in real-time motion management of lung tumors.

    PubMed

    Rottmann, Joerg; Berbeco, Ross

    2014-12-01

    Precise prediction of respiratory motion is a prerequisite for real-time motion compensation techniques such as beam, dynamic couch, or dynamic multileaf collimator tracking. Collection of tumor motion data to train the prediction model is required for most algorithms. To avoid exposure of patients to additional dose from imaging during this procedure, the feasibility of training a linear respiratory motion prediction model with an external surrogate signal is investigated and its performance benchmarked against training the model with tumor positions directly. The authors implement a lung tumor motion prediction algorithm based on linear ridge regression that is suitable to overcome system latencies up to about 300 ms. Its performance is investigated on a data set of 91 patient breathing trajectories recorded from fiducial marker tracking during radiotherapy delivery to the lung of ten patients. The expected 3D geometric error is quantified as a function of predictor lookahead time, signal sampling frequency and history vector length. Additionally, adaptive model retraining is evaluated, i.e., repeatedly updating the prediction model after initial training. Training length for this is gradually increased with incoming (internal) data availability. To assess practical feasibility model calculation times as well as various minimum data lengths for retraining are evaluated. Relative performance of model training with external surrogate motion data versus tumor motion data is evaluated. However, an internal-external motion correlation model is not utilized, i.e., prediction is solely driven by internal motion in both cases. Similar prediction performance was achieved for training the model with external surrogate data versus internal (tumor motion) data. Adaptive model retraining can substantially boost performance in the case of external surrogate training while it has little impact for training with internal motion data. 
A minimum adaptive retraining data length of 8 s and history vector length of 3 s achieve maximal performance. Sampling frequency appears to have little impact on performance confirming previously published work. By using the linear predictor, a relative geometric 3D error reduction of about 50% was achieved (using adaptive retraining, a history vector length of 3 s and with results averaged over all investigated lookahead times and signal sampling frequencies). The absolute mean error could be reduced from (2.0 ± 1.6) mm when using no prediction at all to (0.9 ± 0.8) mm and (1.0 ± 0.9) mm when using the predictor trained with internal tumor motion training data and external surrogate motion training data, respectively (for a typical lookahead time of 250 ms and sampling frequency of 15 Hz). A linear prediction model can reduce latency induced tracking errors by an average of about 50% in real-time image guided radiotherapy systems with system latencies of up to 300 ms. Training a linear model for lung tumor motion prediction with an external surrogate signal alone is feasible and results in similar performance as training with (internal) tumor motion. Particularly for scenarios where motion data are extracted from fluoroscopic imaging with ionizing radiation, this may alleviate the need for additional imaging dose during the collection of model training data.
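
    The core of such a linear predictor, ridge regression from a sliding history vector to a position one lookahead interval ahead, can be sketched as follows (the breathing signal, noise level, and ridge penalty are illustrative, not the clinical data):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    fs, T = 15.0, 120.0                      # sampling rate (Hz), training span (s)
    t = np.arange(0, T, 1 / fs)
    pos = 10 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 0.3, t.size)  # mm

    hist = int(3 * fs)                       # 45-sample (3 s) history vector
    ahead = int(0.25 * fs)                   # 3-sample (0.2 s) lookahead
    X = np.array([pos[i - hist:i] for i in range(hist, len(pos) - ahead)])
    y = pos[hist + ahead:]                   # positions one lookahead later

    lam = 1.0                                # ridge penalty
    w = np.linalg.solve(X.T @ X + lam * np.eye(hist), X.T @ y)

    pred = X @ w
    err_pred = np.sqrt(np.mean((pred - y) ** 2))
    # "No prediction" baseline: use the latest available position directly.
    err_naive = np.sqrt(np.mean((pos[hist: len(pos) - ahead] - y) ** 2))
    print(err_pred, err_naive)
    ```

    The predictor compensates the lookahead latency and filters some measurement noise, which is why it beats the no-prediction baseline by a wide margin on this quasi-periodic signal.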

  6. LIGKA: A linear gyrokinetic code for the description of background kinetic and fast particle effects on the MHD stability in tokamaks

    NASA Astrophysics Data System (ADS)

    Lauber, Ph.; Günter, S.; Könies, A.; Pinches, S. D.

    2007-09-01

    In a plasma with a population of super-thermal particles generated by heating or fusion processes, kinetic effects can lead to the additional destabilisation of MHD modes or even to additional energetic particle modes. In order to describe these modes, a new linear gyrokinetic MHD code has been developed and tested, LIGKA (linear gyrokinetic shear Alfvén physics) [Ph. Lauber, Linear gyrokinetic description of fast particle effects on the MHD stability in tokamaks, Ph.D. Thesis, TU München, 2003; Ph. Lauber, S. Günter, S.D. Pinches, Phys. Plasmas 12 (2005) 122501], based on a gyrokinetic model [H. Qin, Gyrokinetic theory and computational methods for electromagnetic perturbations in tokamaks, Ph.D. Thesis, Princeton University, 1998]. A finite Larmor radius expansion together with the construction of some fluid moments and specification to the shear Alfvén regime results in a self-consistent, electromagnetic, non-perturbative model, that allows not only for growing or damped eigenvalues but also for a change in mode-structure of the magnetic perturbation due to the energetic particles and background kinetic effects. Compared to previous implementations [H. Qin, mentioned above], this model is coded in a more general and comprehensive way. LIGKA uses a Fourier decomposition in the poloidal coordinate and a finite element discretisation in the radial direction. Both analytical and numerical equilibria can be treated. Integration over the unperturbed particle orbits is performed with the drift-kinetic HAGIS code [S.D. Pinches, Ph.D. Thesis, The University of Nottingham, 1996; S.D. Pinches et al., CPC 111 (1998) 131] which accurately describes the particles' trajectories. This allows finite-banana-width effects to be implemented in a rigorous way since the linear formulation of the model allows the exchange of the unperturbed orbit integration and the discretisation of the perturbed potentials in the radial direction. 
Successful benchmarks for toroidal Alfvén eigenmodes (TAEs) and kinetic Alfvén waves (KAWs) with analytical results, ideal MHD codes, drift-kinetic codes and other codes based on kinetic models are reported.

  7. On the classification of the spectrally stable standing waves of the Hartree problem

    NASA Astrophysics Data System (ADS)

    Georgiev, Vladimir; Stefanov, Atanas

    2018-05-01

    We consider the fractional Hartree model, with general power non-linearity and arbitrary spatial dimension. We construct variationally the "normalized" solutions for the corresponding Choquard-Pekar model-in particular a number of key properties, like smoothness and bell-shapedness are established. As a consequence of the construction, we show that these solitons are spectrally stable as solutions to the time-dependent Hartree model. In addition, we analyze the spectral stability of the Moroz-Van Schaftingen solitons of the classical Hartree problem, in any dimensions and power non-linearity. A full classification is obtained, the main conclusion of which is that only and exactly the "normalized" solutions (which exist only in a portion of the range) are spectrally stable.

  8. Model updating strategy for structures with localised nonlinearities using frequency response measurements

    NASA Astrophysics Data System (ADS)

    Wang, Xing; Hill, Thomas L.; Neild, Simon A.; Shaw, Alexander D.; Haddad Khodaparast, Hamed; Friswell, Michael I.

    2018-02-01

    This paper proposes a model updating strategy for localised nonlinear structures. It utilises an initial finite-element (FE) model of the structure and primary harmonic response data taken from low and high amplitude excitations. The underlying linear part of the FE model is first updated using low-amplitude test data with established techniques. Then, using this linear FE model, the nonlinear elements are localised, characterised, and quantified with primary harmonic response data measured under stepped-sine or swept-sine excitations. Finally, the resulting model is validated by comparing the analytical predictions with both the measured responses used in the updating and with additional test data. The proposed strategy is applied to a clamped beam with a nonlinear mechanism and good agreements between the analytical predictions and measured responses are achieved. Discussions on issues of damping estimation and dealing with data from amplitude-varying force input in the updating process are also provided.

  9. Extraction of diffuse correlation spectroscopy flow index by integration of Nth-order linear model with Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu

    2014-05-12

The conventional semi-infinite solution for extracting blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo mouse stroke model. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noises to DCS data resulted in αD_B variations, the mean values of errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.

  10. The linearized multistage model and the future of quantitative risk assessment.

    PubMed

    Crump, K S

    1996-10-01

    The linearized multistage (LMS) model has for over 15 years been the default dose-response model used by the U.S. Environmental Protection Agency (USEPA) and other federal and state regulatory agencies in the United States for calculating quantitative estimates of low-dose carcinogenic risks from animal data. The LMS model is in essence a flexible statistical model that can describe both linear and non-linear dose-response patterns, and that produces an upper confidence bound on the linear low-dose slope of the dose-response curve. Unlike its namesake, the Armitage-Doll multistage model, the parameters of the LMS do not correspond to actual physiological phenomena. Thus the LMS is 'biological' only to the extent that the true biological dose response is linear at low dose and that low-dose slope is reflected in the experimental data. If the true dose response is non-linear the LMS upper bound may overestimate the true risk by many orders of magnitude. However, competing low-dose extrapolation models, including those derived from 'biologically-based models' that are capable of incorporating additional biological information, have not shown evidence to date of being able to produce quantitative estimates of low-dose risks that are any more accurate than those obtained from the LMS model. Further, even if these attempts were successful, the extent to which more accurate estimates of low-dose risks in a test animal species would translate into improved estimates of human risk is questionable. Thus, it does not appear possible at present to develop a quantitative approach that would be generally applicable and that would offer significant improvements upon the crude bounding estimates of the type provided by the LMS model. Draft USEPA guidelines for cancer risk assessment incorporate an approach similar to the LMS for carcinogens having a linear mode of action. 
However, under these guidelines quantitative estimates of low-dose risks would not be developed for carcinogens having a non-linear mode of action; instead dose-response modelling would be used in the experimental range to calculate an LED10* (a statistical lower bound on the dose corresponding to a 10% increase in risk), and safety factors would be applied to the LED10* to determine acceptable exposure levels for humans. This approach is very similar to the one presently used by USEPA for non-carcinogens. Rather than using one approach for carcinogens believed to have a linear mode of action and a different approach for all other health effects, it is suggested herein that it would be more appropriate to use an approach conceptually similar to the 'LED10*-safety factor' approach for all health effects, and not to routinely develop quantitative risk estimates from animal data.

  11. Results of including geometric nonlinearities in an aeroelastic model of an F/A-18

    NASA Technical Reports Server (NTRS)

    Buttrill, Carey S.

    1989-01-01

    An integrated, nonlinear simulation model suitable for aeroelastic modeling of fixed-wing aircraft has been developed. While the author realizes that the subject of modeling rotating, elastic structures is not closed, it is believed that the equations of motion developed and applied herein are correct to second order and are suitable for use with typical aircraft structures. The equations are not suitable for large elastic deformation. In addition, the modeling framework generalizes both the methods and terminology of non-linear rigid-body airplane simulation and traditional linear aeroelastic modeling. Concerning the importance of angular/elastic inertial coupling in the dynamic analysis of fixed-wing aircraft, the following may be said. The rigorous inclusion of said coupling is not without peril and must be approached with care. In keeping with the same engineering judgment that guided the development of the traditional aeroelastic equations, the effect of non-linear inertial effects for most airplane applications is expected to be small. A parameter does not tell the whole story, however, and modes flagged by the parameter as significant also need to be checked to see if the coupling is not a one-way path, i.e., the inertially affected modes can influence other modes.

  12. Three-dimensional forward modeling of DC resistivity using the aggregation-based algebraic multigrid method

    NASA Astrophysics Data System (ADS)

    Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu

    2017-03-01

    To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and the 3DDCXH algorithm for 3D DC modeling (3DDCXH). In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency in forward modeling of three-dimensional DC resistivity.
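The preconditioned conjugate-gradient iteration at the core of a solver like AGMG-CG can be sketched in a few lines. This is an illustrative sketch only: it uses a simple Jacobi (diagonal) preconditioner on a small 1-D Poisson-type matrix as a stand-in for the AGMG preconditioner and the 3-D seven-point finite-difference system, which are far more elaborate.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Conjugate gradient for SPD A, preconditioned by the map M_inv(r) ~ M^-1 r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# 1-D Poisson-type SPD system as a stand-in for the discretised secondary-potential equation
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg(A, b, M_inv=lambda r: r / np.diag(A))   # Jacobi preconditioner
```

Swapping the Jacobi map for a multigrid V-cycle (as AGMG does) changes only the `M_inv` argument; the CG iteration itself is unchanged.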

  13. Simulation of Blast Loading on an Ultrastructurally-based Computational Model of the Ocular Lens

    DTIC Science & Technology

    2016-12-01

organelles. Additionally, the cell membranes demonstrated the classic ball-and-socket loops. For the SEM images, they were placed in two fixatives and mounted...considered (fibrous network and matrix), both components are modelled using a hyperelastic framework, and the resulting constitutive model is embedded in a...within the framework of hyperelasticity). Full details on the linearization procedures that were adopted in these previous models or the convergence

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rottmann, Joerg; Berbeco, Ross

Purpose: Precise prediction of respiratory motion is a prerequisite for real-time motion compensation techniques such as beam, dynamic couch, or dynamic multileaf collimator tracking. Collection of tumor motion data to train the prediction model is required for most algorithms. To avoid exposure of patients to additional dose from imaging during this procedure, the feasibility of training a linear respiratory motion prediction model with an external surrogate signal is investigated and its performance benchmarked against training the model with tumor positions directly. Methods: The authors implement a lung tumor motion prediction algorithm based on linear ridge regression that is suitable to overcome system latencies up to about 300 ms. Its performance is investigated on a data set of 91 patient breathing trajectories recorded from fiducial marker tracking during radiotherapy delivery to the lung of ten patients. The expected 3D geometric error is quantified as a function of predictor lookahead time, signal sampling frequency and history vector length. Additionally, adaptive model retraining is evaluated, i.e., repeatedly updating the prediction model after initial training. Training length for this is gradually increased with incoming (internal) data availability. To assess practical feasibility, model calculation times as well as various minimum data lengths for retraining are evaluated. Relative performance of model training with external surrogate motion data versus tumor motion data is evaluated. However, an internal–external motion correlation model is not utilized, i.e., prediction is solely driven by internal motion in both cases. Results: Similar prediction performance was achieved for training the model with external surrogate data versus internal (tumor motion) data. Adaptive model retraining can substantially boost performance in the case of external surrogate training while it has little impact for training with internal motion data. 
A minimum adaptive retraining data length of 8 s and history vector length of 3 s achieve maximal performance. Sampling frequency appears to have little impact on performance, confirming previously published work. By using the linear predictor, a relative geometric 3D error reduction of about 50% was achieved (using adaptive retraining, a history vector length of 3 s and with results averaged over all investigated lookahead times and signal sampling frequencies). The absolute mean error could be reduced from (2.0 ± 1.6) mm when using no prediction at all to (0.9 ± 0.8) mm and (1.0 ± 0.9) mm when using the predictor trained with internal tumor motion training data and external surrogate motion training data, respectively (for a typical lookahead time of 250 ms and sampling frequency of 15 Hz). Conclusions: A linear prediction model can reduce latency-induced tracking errors by an average of about 50% in real-time image guided radiotherapy systems with system latencies of up to 300 ms. Training a linear model for lung tumor motion prediction with an external surrogate signal alone is feasible and results in similar performance as training with (internal) tumor motion. Particularly for scenarios where motion data are extracted from fluoroscopic imaging with ionizing radiation, this may alleviate the need for additional imaging dose during the collection of model training data.
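A linear ridge-regression predictor of the kind described above can be sketched as follows. The history-vector length (3 s), sampling frequency (15 Hz), and lookahead (250 ms) follow the abstract, but the synthetic sinusoidal breathing trace and the regularization weight are illustrative assumptions, not the authors' data or implementation.

```python
import numpy as np

def train_ridge_predictor(signal, hist_len, lookahead, lam=1e-3):
    """Fit weights w so that signal[t + lookahead] ~ w . signal[t-hist_len+1 : t+1]."""
    X, y = [], []
    for t in range(hist_len - 1, len(signal) - lookahead):
        X.append(signal[t - hist_len + 1 : t + 1])
        y.append(signal[t + lookahead])
    X, y = np.asarray(X), np.asarray(y)
    # Closed-form ridge solution: w = (X^T X + lam*I)^-1 X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(hist_len), X.T @ y)

def predict(signal, w, t):
    """Predict the position one lookahead ahead of sample t from the trailing history."""
    return w @ signal[t - len(w) + 1 : t + 1]

# Synthetic quasi-breathing trace: 0.25 Hz sinusoid, ~10 mm amplitude, sampled at 15 Hz
fs = 15                                  # Hz, per the abstract
lookahead = int(round(0.25 * fs))        # 250 ms latency -> 4 samples
t = np.arange(0, 60, 1 / fs)
trace = 10.0 * np.sin(2 * np.pi * 0.25 * t)
w = train_ridge_predictor(trace, hist_len=3 * fs, lookahead=lookahead)
```

Retraining the model as new samples arrive (the adaptive scheme evaluated in the abstract) amounts to re-calling `train_ridge_predictor` on a growing window of the signal.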

  15. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.

  16. The Essential Complexity of Auditory Receptive Fields

    PubMed Central

    Thorson, Ivar L.; Liénard, Jean; David, Stephen V.

    2015-01-01

    Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared performance of > 1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture lead to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters and (2) low-dimensional parameterization of the factorized filters. The best parameterized model was able to outperform the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize statistical power available for characterizing additional nonlinear properties that limit current auditory models. PMID:26683490
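The first architectural constraint above, factorizing the STRF matrix into a small number of spectral and temporal filters, can be illustrated with a truncated SVD. The Gaussian spectral filter, damped-sinusoid temporal filter, and matrix dimensions below are invented for illustration; the paper's parameterized models are estimated from neural data, not obtained by SVD of a known filter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical full FIR STRF: 18 spectral channels x 15 time lags = 270 free parameters
n_freq, n_lag = 18, 15
spectral = np.exp(-0.5 * ((np.arange(n_freq) - 9) / 2.0) ** 2)       # Gaussian spectral tuning
temporal = np.exp(-np.arange(n_lag) / 4.0) * np.sin(np.arange(n_lag) / 2.0)
strf = np.outer(spectral, temporal) + 0.01 * rng.standard_normal((n_freq, n_lag))

# Constraint (1): factorize into a small number of spectral/temporal filter pairs (rank-r SVD)
U, s, Vt = np.linalg.svd(strf, full_matrices=False)
r = 1
strf_factored = (U[:, :r] * s[:r]) @ Vt[:r]

n_params_full = n_freq * n_lag             # 270
n_params_factored = r * (n_freq + n_lag)   # 33: one spectral + one temporal filter
rel_err = np.linalg.norm(strf - strf_factored) / np.linalg.norm(strf)
```

Constraint (2) in the abstract goes further, parameterizing each factored filter with a handful of shape parameters (e.g., a Gaussian center and width), which is how the best models reach fewer than 30 parameters.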

  17. Gender-Dependent Association of FTO Polymorphisms with Body Mass Index in Mexicans

    PubMed Central

    Saldaña-Alvarez, Yolanda; Salas-Martínez, María Guadalupe; García-Ortiz, Humberto; Luckie-Duque, Angélica; García-Cárdenas, Gustavo; Vicenteño-Ayala, Hermenegildo; Cordova, Emilio J.; Esparza-Aguilar, Marcelino; Contreras-Cubas, Cecilia; Carnevale, Alessandra; Chávez-Saldaña, Margarita; Orozco, Lorena

    2016-01-01

To evaluate the associations between six single-nucleotide polymorphisms (SNPs) in intron 1 of FTO and body mass index (BMI), a case-control association study of 2314 unrelated Mexican-Mestizo adult subjects was performed. The association between each SNP and BMI was tested using logistic and linear regression adjusted for age, gender, and ancestry and assuming additive, recessive, and dominant effects of the minor allele. Association analysis after BMI stratification showed that all five FTO SNPs (rs1121980, rs17817449, rs3751812, rs9930506, and rs8044769) were significantly associated with obesity class II/III under an additive model (P<0.05). Interestingly, we also documented a genetic model-dependent influence of gender on the effect of FTO variants on increased BMI. Two SNPs were specifically associated in males under a dominant model, while the remainder were associated with females under additive and recessive models (P<0.05). The SNP rs9930506 showed the highest increase in obesity risk in females (odds ratio = 4.4). Linear regression using BMI as a continuous trait also revealed differential FTO SNP contributions. Homozygous individuals for the risk alleles of rs17817449, rs3751812, and rs9930506 were on average 2.18 kg/m2 heavier than those homozygous for the wild-type alleles; rs1121980 and rs8044769 showed significant but less-strong effects on BMI (1.54 kg/m2 and 0.9 kg/m2, respectively). Remarkably, rs9930506 also exhibited positive interactions with age and BMI in a gender-dependent manner. Women carrying the minor allele of this variant showed a significant increase in BMI per year (0.42 kg/m2, P = 1.17 x 10−10). Linear regression haplotype analysis under an additive model confirmed that the TGTGC haplotype, harboring all five minor alleles, increased the BMI of carriers by 2.36 kg/m2 (P = 1.15 x 10−5). 
Our data suggest that FTO SNPs make differential contributions to obesity risk and support the hypothesis that gender differences in the mechanisms involving these variants may contribute to disease development. PMID:26726774
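The additive, dominant, and recessive codings used in such association tests can be sketched as follows. The simulated genotypes, the effect size, and the bare-bones gradient-ascent logistic fit are illustrative stand-ins for the study's adjusted regression models (which also include age, gender, and ancestry covariates).

```python
import numpy as np

def encode(genotype_counts, model):
    """Code minor-allele counts (0, 1, 2) under a given genetic model."""
    g = np.asarray(genotype_counts)
    if model == "additive":
        return g.astype(float)           # dose of the minor allele
    if model == "dominant":
        return (g >= 1).astype(float)    # carries at least one minor allele
    if model == "recessive":
        return (g == 2).astype(float)    # homozygous for the minor allele
    raise ValueError(model)

def fit_logistic(x, y, iters=3000, lr=1.0):
    """Minimal one-covariate logistic regression fit by gradient ascent."""
    X = np.column_stack([np.ones_like(x), x])
    w = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w   # w[1] is the per-allele log odds ratio under additive coding

# Simulated case-control data with a true additive effect of 0.4 per minor allele
rng = np.random.default_rng(1)
g = rng.integers(0, 3, size=2000)
p_case = 1.0 / (1.0 + np.exp(-(-0.5 + 0.4 * g)))
y = (rng.random(2000) < p_case).astype(float)
w = fit_logistic(encode(g, "additive"), y)
```

The exponentiated slope `exp(w[1])` is the estimated odds ratio per copy of the minor allele, the quantity reported in the abstract.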

  18. New non-linear model of groundwater recharge: Inclusion of memory, heterogeneity and visco-elasticity

    NASA Astrophysics Data System (ADS)

    Spannenberg, Jescica; Atangana, Abdon; Vermeulen, P. D.

    2017-09-01

Fractional differentiation is well suited to investigating real-world scenarios in geological formations associated with elasticity, heterogeneity, viscoelasticity, and the memory effect. Since groundwater systems exist in these geological formations, modelling groundwater recharge as a real-world scenario is a challenging task, because existing recharge estimation methods are governed by linear equations that use constant field parameters. This is inadequate because in reality these parameters are functions of both space and time. This study therefore concentrates on modifying the recharge equation governing the EARTH model, by application of the Eton approach. Accordingly, this paper presents a modified equation which is non-linear and treats its parameters as functions of both space and time. Specifically, recharge and drainage resistance, which are parameters within the equation, become functions of both space and time. Additionally, the study entailed solving the non-linear equation using an iterative method as well as numerical solutions by means of the Crank-Nicolson scheme. The numerical solutions were used alongside the Riemann-Liouville, Caputo-Fabrizio, and Atangana-Baleanu derivatives, so that elasticity, heterogeneity, viscoelasticity, and the memory effect were accounted for. In essence, this paper presents a more adequate model for recharge estimation.

  19. Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions

    NASA Astrophysics Data System (ADS)

    Wrench, Alan A.

    Available from UMI in association with The British Library. Requires signed TDF. The Linear Prediction model was first applied to speech two and a half decades ago. Since then it has been the subject of intense research and continues to be one of the principal tools in the analysis of speech. Its mathematical tractability makes it a suitable subject for study and its proven success in practical applications makes the study worthwhile. The model is known to be unsuited to speech corrupted by background noise. This has led many researchers to investigate ways of enhancing the speech signal prior to Linear Predictive analysis. In this thesis this body of work is extended. The chosen application is low bit-rate (2.4 kbits/sec) speech coding. For this task the performance of the Linear Prediction algorithm is crucial because there is insufficient bandwidth to encode the error between the modelled speech and the original input. A review of the fundamentals of Linear Prediction and an independent assessment of the relative performance of methods of Linear Prediction modelling are presented. A new method is proposed which is fast and facilitates stability checking, however, its stability is shown to be unacceptably poorer than existing methods. A novel supposition governing the positioning of the analysis frame relative to a voiced speech signal is proposed and supported by observation. The problem of coding noisy speech is examined. Four frequency domain speech processing techniques are developed and tested. These are: (i) Combined Order Linear Prediction Spectral Estimation; (ii) Frequency Scaling According to an Aural Model; (iii) Amplitude Weighting Based on Perceived Loudness; (iv) Power Spectrum Squaring. These methods are compared with the Recursive Linearised Maximum a Posteriori method. Following on from work done in the frequency domain, a time domain implementation of spectrum squaring is developed. 
In addition, a new method of power spectrum estimation is developed based on the Minimum Variance approach. This new algorithm is shown to be closely related to Linear Prediction but produces slightly broader spectral peaks. Spectrum squaring is applied to both the new algorithm and standard Linear Prediction and their relative performance is assessed. (Abstract shortened by UMI.).
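The core of autocorrelation-method Linear Prediction, solving the normal equations by the Levinson-Durbin recursion, can be sketched as follows. The AR(2) test signal is an illustrative stand-in for speech, and this is textbook LP rather than the thesis's specific algorithms.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the LP normal equations for coefficients a such that
    s[n] ~ a[0]*s[n-1] + ... + a[order-1]*s[n-order]; returns (a, residual energy)."""
    a = np.zeros(order)
    e = r[0]
    for i in range(order):
        k = (r[i + 1] - a[:i] @ r[i:0:-1]) / e   # reflection coefficient
        a_prev = a[:i].copy()
        a[:i] = a_prev - k * a_prev[::-1]        # update lower-order coefficients
        a[i] = k
        e *= 1.0 - k * k                          # prediction-error energy shrinks
    return a, e

# AR(2) test signal standing in for speech: s[t] = 0.75 s[t-1] - 0.5 s[t-2] + noise
rng = np.random.default_rng(2)
n = 20000
s = np.zeros(n)
for t in range(2, n):
    s[t] = 0.75 * s[t - 1] - 0.5 * s[t - 2] + rng.standard_normal()

r = np.array([s[: n - k] @ s[k:] / n for k in range(3)])  # biased autocorrelation estimate
a, e = levinson_durbin(r, 2)
```

The recursion keeps every reflection coefficient |k| < 1 for a valid autocorrelation sequence, which is what makes the stability checking mentioned in the abstract straightforward for this method.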

  20. Assessing direct analysis in real-time-mass spectrometry (DART-MS) for the rapid identification of additives in food packaging.

    PubMed

    Ackerman, L K; Noonan, G O; Begley, T H

    2009-12-01

    The ambient ionization technique direct analysis in real time (DART) was characterized and evaluated for the screening of food packaging for the presence of packaging additives using a benchtop mass spectrometer (MS). Approximate optimum conditions were determined for 13 common food-packaging additives, including plasticizers, anti-oxidants, colorants, grease-proofers, and ultraviolet light stabilizers. Method sensitivity and linearity were evaluated using solutions and characterized polymer samples. Additionally, the response of a model additive (di-ethyl-hexyl-phthalate) was examined across a range of sample positions, DART, and MS conditions (temperature, voltage and helium flow). Under optimal conditions, molecular ion (M+H+) was the major ion for most additives. Additive responses were highly sensitive to sample and DART source orientation, as well as to DART flow rates, temperatures, and MS inlet voltages, respectively. DART-MS response was neither consistently linear nor quantitative in this setting, and sensitivity varied by additive. All additives studied were rapidly identified in multiple food-packaging materials by DART-MS/MS, suggesting this technique can be used to screen food packaging rapidly. However, method sensitivity and quantitation requires further study and improvement.

  1. Optimal design of focused experiments and surveys

    NASA Astrophysics Data System (ADS)

    Curtis, Andrew

    1999-10-01

Experiments and surveys are often performed to obtain data that constrain some previously underconstrained model. Often, constraints are most desired in a particular subspace of model space. Experiment design optimization requires that the quality of any particular design can be both quantified and then maximized. This study shows how the quality can be defined such that it depends on the amount of information that is focused in the particular subspace of interest. In addition, algorithms are presented which allow one particular focused quality measure (from the class of focused measures) to be evaluated efficiently. A subclass of focused quality measures is also related to the standard variance and resolution measures from linearized inverse theory. The theory presented here requires that the relationship between model parameters and data can be linearized around a reference model without significant loss of information. Physical and financial constraints define the space of possible experiment designs. Cross-well tomographic examples are presented, plus a strategy for survey design to maximize information about linear combinations of parameters such as bulk modulus, κ = λ + 2μ/3.
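One way to make a focused quality measure concrete: score a candidate design by the posterior variance of the linearized model projected onto the subspace of interest. The two-parameter toy designs and the regularized-inverse form below are illustrative assumptions, not the specific measures or algorithms of the paper.

```python
import numpy as np

def focused_quality(G, P, eps=1e-6):
    """Negative posterior variance of the linearized model, projected by P onto
    the subspace of interest; larger is better. eps regularizes null directions."""
    C = np.linalg.inv(G.T @ G + eps * np.eye(G.shape[1]))   # ~ posterior covariance
    return -np.trace(P @ C @ P.T)

# Two-parameter toy model; only parameter 1 (the row space of P) is of interest.
P = np.array([[1.0, 0.0]])
G_a = np.array([[1.0, 0.0], [1.0, 0.1]])   # design A: data mostly constrain parameter 1
G_b = np.array([[0.1, 1.0], [0.0, 1.0]])   # design B: data mostly constrain parameter 2
```

Design A scores higher than design B under this measure even though both constrain the full two-parameter model equally well in an unfocused sense; that is the distinction the focused measures are built to capture.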

  2. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854

  3. Linear diffusion model dating of cinder cones in Central Anatolia, Turkey

    NASA Astrophysics Data System (ADS)

    O'Sadnick, L. G.; Reid, M. R.; Cline, M. L.; Cosca, M. A.; Kuscu, G.

    2013-12-01

The progressive decrease in slope angle, cone height and cone height/width ratio over time provides the basis for geomorphic dating of cinder cones using linear diffusion models. Previous research using diffusion models to date cinder cones has focused on the cone height/width ratio as the basis for dating cones of unknown age [1,2]. Here we apply linear diffusion models to dating cinder cones. A suite of 16 cinder cones from the Hasandağ volcano area of the Neogene-Quaternary Central Anatolian Volcanic Zone, for which samples are available, were selected for morphologic dating analysis. New 40Ar/39Ar dates for five of these cones range from 62 ± 4 to 517 ± 9 ka. Linear diffusion models were used to model the erosional degradation of each cone. Diffusion coefficients (κ) for the 5 cinder cones with known ages were constrained by comparing various modeled slope profiles to the current slope profile. The resulting κ is 7.5 ± 0.5 m^2 kyr^-1. Using this κ value, eruption ages were modeled for the remaining 11 cinder cones and range from 53 ± 3 to 455 ± 30 ka. These ages are within the range of ages previously reported for cinder cones in the Hasandağ region. The linear diffusion model-derived ages are being compared to additional new 40Ar/39Ar dates in order to further assess the applicability of morphological dating to constrain the ages of cinder cones. The relatively well-constrained κ value we obtained by applying the linear diffusion model to cinder cones that range in age by nearly 500 ka suggests that this model can be used to date cinder cones. This κ value is higher than the well-established value of κ = 3.9 for a cinder cone in a similar climate [3]. Therefore our work confirms the importance of determining appropriate κ values from nearby cones with known ages. References 1. C.A. Wood, J. Volcanol. Geotherm. Res. 8, 137 (1980) 2. D.M. Wood, M.F. Sheridan, J. Volcanol. Geotherm. Res. 83, 241 (1998) 3. J.D. Pelletier, M.L. Cline, Geology 35, 1067 (2007)
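The forward problem behind this dating method, degradation of a cone profile under linear diffusion, can be sketched with an explicit finite-difference scheme. The cone geometry and time step below are illustrative; the study's models and the quoted κ values come from fitting measured slope profiles.

```python
import numpy as np

def degrade(h, kappa, dx, dt, t_total):
    """Explicit finite-difference integration of dh/dt = kappa * d2h/dx2."""
    assert kappa * dt / dx**2 <= 0.5, "explicit-scheme stability condition"
    h = h.copy()
    for _ in range(int(t_total / dt)):
        h[1:-1] += kappa * dt / dx**2 * (h[2:] - 2.0 * h[1:-1] + h[:-2])
    return h

# Idealized cone cross-section (heights in m); kappa = 7.5 m^2/kyr as constrained above
dx = 5.0
x = np.arange(-300.0, 305.0, dx)
h0 = np.maximum(0.0, 100.0 - 0.55 * np.abs(x))       # ~29-degree initial slopes
h_after = degrade(h0, kappa=7.5, dx=dx, dt=1.0, t_total=100.0)   # 100 kyr of degradation
```

Running the model forward for trial ages and comparing modeled to observed profiles is what constrains κ (for cones of known age) or the eruption age (for cones of unknown age, once κ is fixed).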

  4. A Reduced-Order Successive Linear Estimator for Geostatistical Inversion and its Application in Hydraulic Tomography

    NASA Astrophysics Data System (ADS)

    Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Zeng, Wenzhi; Zhang, Yonggen; Sun, Fangqiang; Shi, Liangsheng

    2018-03-01

    Hydraulic tomography (HT) is a recently developed technology for characterizing high-resolution, site-specific heterogeneity using hydraulic data (nd) from a series of cross-hole pumping tests. To properly account for the subsurface heterogeneity and to flexibly incorporate additional information, geostatistical inverse models, which permit a large number of spatially correlated unknowns (ny), are frequently used to interpret the collected data. However, the memory storage requirements for the covariance of the unknowns (ny × ny) in these models are prodigious for large-scale 3-D problems. Moreover, the sensitivity evaluation is often computationally intensive using traditional difference method (ny forward runs). Although employment of the adjoint method can reduce the cost to nd forward runs, the adjoint model requires intrusive coding effort. In order to resolve these issues, this paper presents a Reduced-Order Successive Linear Estimator (ROSLE) for analyzing HT data. This new estimator approximates the covariance of the unknowns using Karhunen-Loeve Expansion (KLE) truncated to nkl order, and it calculates the directional sensitivities (in the directions of nkl eigenvectors) to form the covariance and cross-covariance used in the Successive Linear Estimator (SLE). In addition, the covariance of unknowns is updated every iteration by updating the eigenvalues and eigenfunctions. The computational advantages of the proposed algorithm are demonstrated through numerical experiments and a 3-D transient HT analysis of data from a highly heterogeneous field site.
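The covariance approximation at the heart of the ROSLE can be sketched as a truncated Karhunen-Loeve expansion: keep only the nkl leading eigenpairs of the covariance of the unknowns. The 1-D exponential covariance below is an illustrative stand-in for the 3-D field covariance used in HT.

```python
import numpy as np

# Exponential covariance of a 1-D random field, standing in for the ny x ny
# covariance of the spatially correlated unknowns (correlation length 20, domain 100)
n = 200
x = np.linspace(0.0, 100.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 20.0)

# Truncated Karhunen-Loeve expansion: keep the nkl leading eigenpairs
evals, evecs = np.linalg.eigh(C)
evals, evecs = evals[::-1], evecs[:, ::-1]          # eigh is ascending; sort descending
nkl = 20
C_kle = (evecs[:, :nkl] * evals[:nkl]) @ evecs[:, :nkl].T

energy = evals[:nkl].sum() / evals.sum()            # fraction of variance captured
storage_full, storage_kle = n * n, n * nkl          # 40000 vs 4000 numbers
```

The factored form stores n × nkl numbers instead of n × n, which is the memory saving the abstract emphasizes; sensitivities then need to be evaluated only along the nkl eigenvector directions rather than for every unknown.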

  5. Combined effects of slip and convective boundary condition on MHD 3D stretched flow of nanofluid through porous media inspired by non-linear thermal radiation

    NASA Astrophysics Data System (ADS)

    Nayak, M. K.; Shaw, Sachin; Pandey, V. S.; Chamkha, Ali J.

    2018-02-01

In the present study, the main concern is to investigate the magnetohydrodynamic nanofluid flow subject to a porous matrix and convective heating past a permeable linear stretching sheet. In addition, the influences of velocity slip, viscous dissipation, Joule heating and non-linear thermal radiation are considered. A new micro-convection model known as the Patel model is implemented for a considerable enhancement of the thermal conductivity, and hence the heat transfer capability, of nanofluids. Moreover, a convective heat transfer model is introduced in which the bottom surface of the sheet is heated by convection from a hot fluid at a specified temperature. The transformed governing differential equations are solved numerically using the fourth-order Runge-Kutta method with a shooting approach, the secant method being used to refine the initial guesses. In the present analysis, water and ethylene glycol are considered as base fluids, with copper, silver and aluminum oxide nanoparticles. Results of the present investigation show that inclusion of a porous matrix slows down the fluid velocity and diminishes the wall shear stress (axial as well as transverse). Drag force due to magnetic field strength, velocity slip and imposed fluid suction impede the fluid motion and raise the heat transfer rate from the surface. In addition, a rise in viscous dissipation widens the thermal boundary layer.
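The Runge-Kutta/shooting/secant combination mentioned above can be illustrated on a toy boundary-value problem. The equation below is a stand-in chosen for its known exact solution, not the transformed flow equations of the paper.

```python
import numpy as np

# Toy boundary-value problem: y'' = 6x, y(0) = 0, y(1) = 1.
# Exact solution is y = x**3, so the unknown initial slope y'(0) is 0.
def rhs(x, state):
    y, dy = state
    return np.array([dy, 6.0 * x])

def rk4_integrate(slope, h=1e-3):
    """Integrate from x=0 to x=1 with classical RK4, given y'(0) = slope."""
    x, state = 0.0, np.array([0.0, slope])
    for _ in range(round(1.0 / h)):
        k1 = rhs(x, state)
        k2 = rhs(x + h / 2, state + h / 2 * k1)
        k3 = rhs(x + h / 2, state + h / 2 * k2)
        k4 = rhs(x + h, state + h * k3)
        state = state + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return state[0]                      # y(1)

def shoot(target=1.0, s0=-1.0, s1=2.0, tol=1e-10):
    """Secant iteration on the miss distance F(s) = y_s(1) - target."""
    f0, f1 = rk4_integrate(s0) - target, rk4_integrate(s1) - target
    for _ in range(50):
        s2 = s1 - f1 * (s1 - s0) / (f1 - f0)
        f2 = rk4_integrate(s2) - target
        if abs(f2) < tol:
            return s2
        s0, f0, s1, f1 = s1, f1, s2, f2
    return s1

slope = shoot()
print(f"recovered y'(0) = {slope:.2e}")   # exact value is 0
```

The secant step replaces the derivative of the miss distance with a finite-difference slope, which is exactly the "better approximation" role it plays alongside shooting in the abstract.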

  6. Lidar and Hyperspectral Remote Sensing for the Analysis of Coniferous Biomass Stocks and Fluxes

    NASA Astrophysics Data System (ADS)

    Halligan, K. Q.; Roberts, D. A.

    2006-12-01

Airborne lidar and hyperspectral data can improve estimates of aboveground carbon stocks and fluxes through their complementary responses to vegetation structure and biochemistry. While strong relationships have been demonstrated between lidar-estimated vegetation structural parameters and field data, research is needed to explore the portability of these methods across a range of topographic conditions, disturbance histories, vegetation types and climates. Additionally, research is needed to evaluate the contributions of hyperspectral data in refining biomass estimates and determining fluxes. To address these questions we are conducting a study of lidar and hyperspectral remote sensing data across sites including coniferous forests, broadleaf deciduous forests and a tropical rainforest. Here we focus on a single study site, Yellowstone National Park, where tree heights, stem locations, above ground biomass and basal area were mapped using first-return small-footprint lidar data. A new method using lidar intensity data was developed for separating the terrain and vegetation components in lidar data using a two-scale iterative local minima filter. Resulting Digital Terrain Models (DTM) and Digital Canopy Models (DCM) were then processed to retrieve a diversity of vertical and horizontal structure metrics. Univariate linear models were used to estimate individual tree heights while stepwise linear regression was used to estimate aboveground biomass and basal area. Three small-area field datasets were compared for their utility in model building and validation of vegetation structure parameters. All structural parameters were linearly correlated with lidar-derived metrics, with higher accuracies obtained where field and imagery data were precisely collocated.
Initial analysis of hyperspectral data suggests that vegetation health metrics including measures of live and dead vegetation and stress indices may provide good indicators of carbon flux by mapping vegetation vigor or senescence. Additionally, the strength of hyperspectral data for vegetation classification suggests these data have additional utility for modeling carbon flux dynamics by allowing more accurate plant functional type mapping.
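The idea of separating terrain from canopy with local-minima filtering at two scales can be sketched on synthetic data. This is a toy stand-in: the method in the record also exploits lidar intensity and iterates, and the window sizes below are arbitrary.

```python
import numpy as np
from scipy.ndimage import minimum_filter

# Synthetic last-surface grid: a sloping ground plane plus scattered "canopy"
# blocks (heights in metres), standing in for a lidar surface model.
rng = np.random.default_rng(0)
ny, nx = 100, 100
yy, xx = np.mgrid[0:ny, 0:nx]
ground = 0.05 * xx + 0.02 * yy                   # gently sloping terrain
surface = ground.copy()
for _ in range(25):                              # drop 25 tree crowns
    cy, cx = rng.integers(5, 95, size=2)
    surface[cy - 3:cy + 4, cx - 3:cx + 4] += rng.uniform(5, 20)

# Two-pass local-minima filtering: a coarse window removes whole crowns,
# then a finer window tightens the estimate near the terrain.
coarse = minimum_filter(surface, size=15)
dtm = minimum_filter(coarse, size=5)
dcm = surface - dtm                              # canopy height model

print(f"median DTM error: {np.median(np.abs(dtm - ground)):.2f} m")
```

On sloped terrain a pure minimum filter biases the DTM downward by roughly the slope times the window radius, which is one reason the original method iterates rather than filtering once.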

  7. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo and 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total).
Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
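The weighting-and-summing of the two component FIMs, and the D-criterion a D-optimal search maximises, reduce to a few lines of linear algebra. The matrices below are hypothetical 3-parameter examples, not the sleep model's actual information matrices.

```python
import numpy as np

# Toy 3-parameter FIMs for the two Markov components (hypothetical numbers).
fim_awake = np.array([[4.0, 0.5, 0.0],
                      [0.5, 2.0, 0.1],
                      [0.0, 0.1, 1.0]])
fim_asleep = np.array([[1.0, 0.2, 0.0],
                       [0.2, 3.0, 0.4],
                       [0.0, 0.4, 2.0]])

def fim_total(w_awake):
    """Weight the two component FIMs and sum them."""
    return w_awake * fim_awake + (1.0 - w_awake) * fim_asleep

def d_criterion(fim):
    """log-determinant: the quantity a D-optimal search maximises."""
    sign, logdet = np.linalg.slogdet(fim)
    assert sign > 0, "FIM must be positive definite"
    return logdet

equal = d_criterion(fim_total(0.5))                          # equal weighting
averaged = d_criterion(0.6 * fim_awake + 0.4 * fim_asleep)   # probability weighting
print(f"log det (equal weights): {equal:.3f}, (0.6/0.4 weights): {averaged:.3f}")
```

A design optimizer would vary the doses and group sizes that parameterize each component FIM and pick the candidate with the largest log-determinant.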

  8. Improved Speech Coding Based on Open-Loop Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.

    2000-01-01

A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but does the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm, and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
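The open-loop predictor at the base of linear predictive coding is obtained from the autocorrelation normal equations. The sketch below recovers the coefficients of a synthetic AR(2) "speech segment"; it illustrates the standard open-loop step, not the paper's joint optimization with quantization.

```python
import numpy as np

# Synthetic AR(2) signal with known coefficients:
# x[n] = 1.3 x[n-1] - 0.6 x[n-2] + e[n]   (stable: roots inside unit circle)
rng = np.random.default_rng(1)
a_true = np.array([1.3, -0.6])
n = 20000
x = np.zeros(n)
e = rng.normal(scale=0.1, size=n)
for i in range(2, n):
    x[i] = a_true[0] * x[i - 1] + a_true[1] * x[i - 2] + e[i]

# Open-loop linear prediction of order p: solve R a = r, where R is the
# Toeplitz autocorrelation matrix and r the lag-1..p autocorrelations.
p = 2
r = np.array([x[: n - k] @ x[k:] for k in range(p + 1)])
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a_hat = np.linalg.solve(R, r[1:p + 1])

print("estimated predictor coefficients:", np.round(a_hat, 3))  # compare a_true
```

The paper's point is that stopping here, with quantization handled separately, is already near-optimal for a linear model of the chosen order.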

  9. A varying-coefficient method for analyzing longitudinal clinical trials data with nonignorable dropout

    PubMed Central

    Forster, Jeri E.; MaWhinney, Samantha; Ball, Erika L.; Fairclough, Diane

    2011-01-01

    Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223

  10. Health effects models for nuclear power plant accident consequence analysis: Low LET radiation: Part 2, Scientific bases for health effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abrahamson, S.; Bender, M.; Book, S.

    1989-05-01

This report provides dose-response models intended to be used in estimating the radiological health effects of nuclear power plant accidents. Models of early and continuing effects, cancers and thyroid nodules, and genetic effects are provided. Two-parameter Weibull hazard functions are recommended for estimating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary and gastrointestinal syndromes -- are considered. Linear and linear-quadratic models are recommended for estimating cancer risks. Parameters are given for analyzing the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid and "other". The category "other" cancers is intended to reflect the combined risks of multiple myeloma, lymphoma, and cancers of the bladder, kidney, brain, ovary, uterus and cervix. Models of childhood cancers due to in utero exposure are also provided. For most cancers, both incidence and mortality are addressed. Linear and linear-quadratic models are also recommended for assessing genetic risks. Five classes of genetic disease -- dominant, x-linked, aneuploidy, unbalanced translocation and multifactorial diseases -- are considered. In addition, the impact of radiation-induced genetic damage on the incidence of peri-implantation embryo losses is discussed. The uncertainty in modeling radiological health risks is addressed by providing central, upper, and lower estimates of all model parameters. Data are provided which should enable analysts to consider the timing and severity of each type of health risk. 22 refs., 14 figs., 51 tabs.
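A two-parameter Weibull hazard risk model of the kind recommended for early effects can be written in one common form. The D50 and shape values below are hypothetical placeholders; the report itself tabulates central, upper and lower parameter estimates.

```python
import math

def weibull_risk(dose, d50, shape):
    """Two-parameter Weibull hazard risk of an early effect at a given dose.
    One common form: H(D) = ln(2) * (D / D50)**V, risk = 1 - exp(-H(D)),
    so that risk equals 0.5 exactly at D = D50 and V sets the steepness."""
    if dose <= 0:
        return 0.0
    hazard = math.log(2) * (dose / d50) ** shape
    return 1.0 - math.exp(-hazard)

# Hypothetical parameters, for illustration only.
d50, shape = 3.8, 5.0
for d in (1.0, 2.0, 3.8, 6.0):
    print(f"dose {d:>4} Gy -> risk {weibull_risk(d, d50, shape):.3f}")
```

The shape parameter makes the curve sigmoidal on a linear dose axis, which is why such hazard functions suit threshold-like early effects better than a simple linear model.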

  11. A hierarchy for modeling high speed propulsion systems

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Deabreu, Alex

    1991-01-01

    General research efforts on reduced order propulsion models for control systems design are overviewed. Methods for modeling high speed propulsion systems are discussed including internal flow propulsion systems that do not contain rotating machinery, such as inlets, ramjets, and scramjets. The discussion is separated into four areas: (1) computational fluid dynamics models for the entire nonlinear system or high order nonlinear models; (2) high order linearized models derived from fundamental physics; (3) low order linear models obtained from the other high order models; and (4) low order nonlinear models (order here refers to the number of dynamic states). Included in the discussion are any special considerations based on the relevant control system designs. The methods discussed are for the quasi-one-dimensional Euler equations of gasdynamic flow. The essential nonlinear features represented are large amplitude nonlinear waves, including moving normal shocks, hammershocks, simple subsonic combustion via heat addition, temperature dependent gases, detonations, and thermal choking. The report also contains a comprehensive list of papers and theses generated by this grant.

  12. Spatio-temporal Bayesian model selection for disease mapping

    PubMed Central

    Carroll, R; Lawson, AB; Faes, C; Kirby, RS; Aregay, M; Watjou, K

    2016-01-01

    Spatio-temporal analysis of small area health data often involves choosing a fixed set of predictors prior to the final model fit. In this paper, we propose a spatio-temporal approach of Bayesian model selection to implement model selection for certain areas of the study region as well as certain years in the study time line. Here, we examine the usefulness of this approach by way of a large-scale simulation study accompanied by a case study. Our results suggest that a special case of the model selection methods, a mixture model allowing a weight parameter to indicate if the appropriate linear predictor is spatial, spatio-temporal, or a mixture of the two, offers the best option to fitting these spatio-temporal models. In addition, the case study illustrates the effectiveness of this mixture model within the model selection setting by easily accommodating lifestyle, socio-economic, and physical environmental variables to select a predominantly spatio-temporal linear predictor. PMID:28070156

  13. Functional Additive Mixed Models

    PubMed Central

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2014-01-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592

  14. Functional Additive Mixed Models.

    PubMed

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2015-04-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach.

  15. Robust model reference adaptive output feedback tracking for uncertain linear systems with actuator fault based on reinforced dead-zone modification.

    PubMed

    Bagherpoor, H M; Salmasi, Farzad R

    2015-07-01

In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed for both the SISO and MIMO cases, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters is ceased inside the dead-zone region, which results in a residual tracking error while preserving system stability. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can assure that all the signals of the closed-loop system are bounded in faulty conditions. Finally, the validity and performance of the new schemes have been illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
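The conventional dead-zone modification that this paper improves on can be seen in a scalar toy problem: adaptation stops inside the dead zone, so the state settles into a residual band rather than converging exactly. All numbers below are illustrative, and this is the baseline scheme, not the paper's reinforced variant.

```python
# Scalar plant x' = a*x + u with unknown a > 0; adaptive feedback u = -theta*x.
# Conventional dead-zone modification: freeze the adaptation of theta when
# |x| falls inside the dead zone, trading a residual error band for robustness.
a_true, gamma, dead_zone = 1.5, 5.0, 0.05
dt, steps = 1e-3, 40000

x, theta = 2.0, 0.0
for _ in range(steps):
    u = -theta * x
    x += dt * (a_true * x + u)             # forward-Euler plant step
    if abs(x) > dead_zone:                 # adapt only outside the dead zone
        theta += dt * gamma * x * x        # gradient-type update law
    # inside the dead zone the parameter estimate is frozen

print(f"final |x| = {abs(x):.4f}, theta = {theta:.3f} (stabilizing once theta > a = {a_true})")
```

The paper's contribution is to keep the closed loop active inside this band with an additive error-based term, removing the residual error that the frozen update leaves behind.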

  16. Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models.

    PubMed

    Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A

    2010-07-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.
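The regularizing effect of wavelet shrinkage on a smooth functional coefficient can be demonstrated with a hand-rolled one-level Haar transform. This is a deliberately minimal stand-in for the Bayesian wavelet machinery of the paper; the lag profile, noise level and threshold below are all invented for illustration.

```python
import numpy as np

def haar_forward(signal):
    """One level of the orthonormal Haar wavelet transform."""
    s = signal.reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return approx, detail

def haar_inverse(approx, detail):
    out = np.empty(approx.size * 2)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def soft_threshold(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Smooth "distributed lag" function observed with noise.
rng = np.random.default_rng(2)
n = 256
tgrid = np.linspace(0, 1, n)
truth = np.exp(-((tgrid - 0.3) / 0.1) ** 2)      # smooth lag profile
noisy = truth + rng.normal(scale=0.2, size=n)

# Shrink the detail coefficients: smooth structure survives, noise is damped.
approx, detail = haar_forward(noisy)
denoised = haar_inverse(approx, soft_threshold(detail, 0.3))

print(f"noisy MSE    : {np.mean((noisy - truth) ** 2):.4f}")
print(f"denoised MSE : {np.mean((denoised - truth) ** 2):.4f}")
```

Because a smooth lag function concentrates its energy in coarse coefficients while fine-scale noise lands in the details, shrinkage suppresses the noise with little bias, which is the mechanism behind the measurement-error correction the abstract describes.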

  17. Loudspeakers: Modeling and control

    NASA Astrophysics Data System (ADS)

    Al-Ali, Khalid Mohammad

This thesis documented a comprehensive study of loudspeaker modeling and control. A lumped-parameter model for a voice-coil loudspeaker in a vented enclosure was presented that was derived from a consideration of physical principles. In addition, a low-frequency (20 Hz to 100 Hz) feedback control method designed to improve the nonlinear performance of the loudspeaker and a suitable performance measure for use in design and evaluation were proposed. Data from experiments performed on a variety of actual loudspeakers confirmed the practicality of the theory developed in this work. The lumped-parameter loudspeaker model, although simple, captured much of the nonlinear behavior of the loudspeaker. In addition, the model formulation allowed a straightforward application of modern control system methods and lent itself well to modern parametric identification techniques. The nonlinear performance of the loudspeaker system was evaluated using a suitable distortion measure that was proposed and compared with other distortion measures currently used in practice. Furthermore, the linearizing effect of feedback using a linear controller (both static and dynamic) was studied on a class of nonlinear systems. The results illustrated that the distortion reduction was potentially significant, and a useful upper bound on the closed-loop distortion was found based on the sensitivity function of the system's linearization. A feedback scheme based on robust control theory was chosen for application to the loudspeaker system. Using the pressure output of the loudspeaker system for feedback, the technique offered significant advantages over those previously attempted. Illustrative examples were presented that proved the applicability of the theory developed in this dissertation to a variety of loudspeaker systems. The examples included a vented loudspeaker model and actual loudspeakers enclosed in both vented and sealed configurations.
In each example, predictable and measurable distortion reduction at the output of the closed-loop system was recorded.

  18. A fundamental reconsideration of the CRASH3 damage analysis algorithm: the case against uniform ubiquitous linearity between BEV, peak collision force magnitude, and residual damage depth.

    PubMed

    Singh, Jai

    2013-01-01

The objective of this study was a thorough reconsideration, within the framework of Newtonian mechanics and work-energy relationships, of the empirically interpreted relationships employed within the CRASH3 damage analysis algorithm with regard to linearity between barrier equivalent velocity (BEV) or peak collision force magnitude and residual damage depth. The CRASH3 damage analysis algorithm was considered, first in terms of the cases of collisions that produced no residual damage, in order to properly explain the damage onset speed and crush resistance terms. Under the modeling constraints of the collision partners representing a closed system and the a priori assumption of linearity between BEV or peak collision force magnitude and residual damage depth, the equations for the sole realistic model were derived. Evaluation of the work-energy relationships for collisions at or below the elastic limit revealed that the BEV or peak collision force magnitude relationships are bifurcated based upon the residual damage depth. Rather than being additive terms from the linear curve fits employed in the CRASH3 damage analysis algorithm, the Campbell b0 and CRASH3 AL terms represent the maximum values that can be ascribed to the BEV or peak collision force magnitude, respectively, for collisions that produce zero residual damage. Collisions resulting in the production of non-zero residual damage depth already account for the surpassing of the elastic limit during closure, and therefore the secondary addition of the elastic limit terms represents a double accounting of the same. This evaluation shows that the current energy absorbed formulation utilized in the CRASH3 damage analysis algorithm extraneously includes terms associated with the A and G stiffness coefficients. This sole realistic model, however, is limited, secondary to reducing the coefficient of restitution to a constant value for all cases in which the residual damage depth is nonzero.
Linearity between BEV or peak collision force magnitude and residual damage depth may be applicable for particular ranges of residual damage depth for any given region of any given vehicle. Within the modeling construct employed by the CRASH3 damage algorithm, the case of uniform and ubiquitous linearity cannot be supported. Considerations regarding the inclusion of internal work recovered and restitution for modeling the separation phase change in velocity magnitude should account for not only the effects present during the evaluation of a vehicle-to-vehicle collision of interest but also to the approach taken for modeling the force-deflection response for each collision partner.
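For context, the conventional CRASH3-style energy computation that this study critiques can be sketched as follows. The A/B stiffness coefficients, crush profile and vehicle mass below are hypothetical, not taken from any published stiffness database; note the G offset term appearing in the energy integral is exactly the contribution the study argues is double-counted.

```python
import numpy as np

# Textbook CRASH3-style crush energy from a residual damage profile.
A, B = 50_000.0, 1_500_000.0     # N/m and N/m^2 stiffness coefficients (hypothetical)
G = A ** 2 / (2 * B)             # energy offset implied by the linear force model
width = 1.5                      # damage width, m
crush = np.array([0.10, 0.18, 0.22, 0.15])   # residual crush at 4 stations, m
mass = 1400.0                    # vehicle mass, kg

# Energy per unit width integrated over the profile (trapezoid over stations):
seg = (crush[:-1] + crush[1:]) / 2            # mean crush per segment
seg_w = width / (len(crush) - 1)              # segment width
energy = np.sum((A * seg + B * seg ** 2 / 2 + G) * seg_w)

bev = np.sqrt(2 * energy / mass)              # barrier equivalent velocity, m/s
print(f"absorbed energy: {energy / 1000:.1f} kJ, BEV: {bev * 3.6:.1f} km/h")
```

The study's argument is that the linear BEV-versus-crush assumption baked into this formulation cannot hold uniformly across all crush depths, regions and vehicles.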

  19. Nonlinear control of linear parameter varying systems with applications to hypersonic vehicles

    NASA Astrophysics Data System (ADS)

    Wilcox, Zachary Donald

The focus of this dissertation is to design a controller for linear parameter varying (LPV) systems, apply it specifically to air-breathing hypersonic vehicles, and examine the interplay between control performance and the structural dynamics design. Specifically a Lyapunov-based continuous robust controller is developed that yields exponential tracking of a reference model, despite the presence of bounded, nonvanishing disturbances. The hypersonic vehicle has time varying parameters, specifically temperature profiles, and its dynamics can be reduced to an LPV system with additive disturbances. Since the HSV can be modeled as an LPV system the proposed control design is directly applicable. The control performance is directly examined through simulations. A wide variety of applications exist that can be effectively modeled as LPV systems. In particular, flight systems have historically been modeled as LPV systems and associated control tools have been applied such as gain-scheduling, linear matrix inequalities (LMIs), linear fractional transformations (LFT), and mu-types. However, as the type of flight environments and trajectories become more demanding, the traditional LPV controllers may no longer be sufficient. In particular, hypersonic flight vehicles (HSVs) present an inherently difficult problem because of the nonlinear aerothermoelastic coupling effects in the dynamics. HSV flight conditions produce temperature variations that can alter both the structural dynamics and flight dynamics. Starting with the full nonlinear dynamics, the aerothermoelastic effects are modeled by a temperature dependent, parameter varying state-space representation with added disturbances. The model includes an uncertain parameter varying state matrix, an uncertain parameter varying non-square (column deficient) input matrix, and an additive bounded disturbance. In this dissertation, a robust dynamic controller is formulated for an uncertain and disturbed LPV system.
The developed controller is then applied to a HSV model, and a Lyapunov analysis is used to prove global exponential reference model tracking in the presence of uncertainty in the state and input matrices and exogenous disturbances. Simulations with a spectrum of gains and temperature profiles on the full nonlinear dynamic model of the HSV are used to illustrate the performance and robustness of the developed controller. In addition, this work considers how the performance of the developed controller varies over a wide variety of control gains and temperature profiles, and how it can be optimized with respect to different performance metrics. Specifically, various temperature profile models and related nonlinear temperature dependent disturbances are used to characterize the relative control performance and effort for each model. Examining such metrics as a function of temperature provides a potential inroad to examine the interplay between structural/thermal protection design and control development and has application for future HSV design and control implementation.

  20. Induction of Chromosomal Aberrations at Fluences of Less Than One HZE Particle per Cell Nucleus

    NASA Technical Reports Server (NTRS)

    Hada, Megumi; Chappell, Lori J.; Wang, Minli; George, Kerry A.; Cucinotta, Francis A.

    2014-01-01

The assumption of a linear dose response used to describe the biological effects of high-LET radiation is fundamental in radiation protection methodologies. We investigated the dose response for chromosomal aberrations for exposures corresponding to less than one particle traversal per cell nucleus by high energy and charge (HZE) nuclei. Human fibroblast and lymphocyte cells were irradiated with several low doses of <0.1 Gy, and several higher doses of up to 1 Gy, with O (77 keV/μm), Si (99 keV/μm), Fe (175 keV/μm), Fe (195 keV/μm) or Fe (240 keV/μm) particles. Chromosomal aberrations at first mitosis were scored using fluorescence in situ hybridization (FISH) with chromosome-specific paints for chromosomes 1, 2 and 4 and DAPI staining of background chromosomes. Non-linear regression models were used to evaluate possible linear and non-linear dose response models based on these data. Dose responses for simple exchanges for human fibroblasts irradiated under confluent culture conditions were best fit by non-linear models motivated by a non-targeted effect (NTE). For human lymphocytes irradiated in blood tubes, an NTE model fit best for O, while a linear response model fit best for Si and Fe particles. Additional evidence for NTE was found in low dose experiments measuring gamma-H2AX foci, a marker of double strand breaks (DSB), and in split-dose experiments with human fibroblasts. Our results suggest that simple exchanges in normal human fibroblasts have an important NTE contribution at low particle fluence. The current and prior experimental studies provide important evidence against the linear dose response assumption used in radiation protection for HZE particles and other high-LET radiation at the relevant range of low doses.
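The linear-versus-NTE model comparison can be illustrated with a least-squares fit to synthetic data. One common NTE form adds a dose-independent term that switches on at any nonzero dose; all parameter values below are invented, not the paper's fitted values.

```python
import numpy as np

# Synthetic aberration yields from an NTE-type model:
#   Y(D) = y0 + alpha*D + eta*1[D > 0] + noise
doses = np.array([0.0, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0])
y0, alpha, eta = 0.005, 0.08, 0.03          # background, slope, NTE term (illustrative)
rng = np.random.default_rng(3)
yields = y0 + alpha * doses + eta * (doses > 0) + rng.normal(scale=0.002, size=doses.size)

# Both models are linear in their parameters, so ordinary least squares suffices.
X_lin = np.column_stack([np.ones_like(doses), doses])                    # y0 + alpha*D
X_nte = np.column_stack([np.ones_like(doses), doses, (doses > 0) * 1.0]) # + NTE step

for name, X in [("linear", X_lin), ("NTE", X_nte)]:
    beta = np.linalg.lstsq(X, yields, rcond=None)[0]
    resid = yields - X @ beta
    print(f"{name:>6} model: residual SS = {np.sum(resid ** 2):.2e}")
```

With data concentrated below one traversal per nucleus, the step term captures the low-fluence excess that a purely linear model cannot, which is the qualitative signature the study reports.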

  1. Nutrient profiling can help identify foods of good nutritional quality for their price: a validation study with linear programming.

    PubMed

    Maillot, Matthieu; Ferguson, Elaine L; Drewnowski, Adam; Darmon, Nicole

    2008-06-01

Nutrient profiling ranks foods based on their nutrient content; such profiles may help identify foods with a good nutritional quality for their price. This hypothesis was tested using diet modeling with linear programming. Analyses were undertaken using food intake data from the nationally representative French INCA (enquête Individuelle et Nationale sur les Consommations Alimentaires) survey and its associated food composition and price database. For each food, a nutrient profile score was defined as the ratio between the previously published nutrient density score (NDS) and the limited nutrient score (LIM); a nutritional quality for price indicator was developed and calculated from the relationship between its NDS:LIM and energy cost (in euro/100 kcal). We developed linear programming models to design diets that fulfilled increasing levels of nutritional constraints at a minimal cost. The median NDS:LIM values of foods selected in modeled diets increased as the levels of nutritional constraints increased (P = 0.005). In addition, the proportion of foods with a good nutritional quality for price indicator was higher (P < 0.0001) among foods selected (81%) than among foods not selected (39%) in modeled diets. This agreement between the linear programming and the nutrient profiling approaches indicates that nutrient profiling can help identify foods of good nutritional quality for their price. Linear programming is a useful tool for testing nutrient profiling systems and validating the concept of nutrient profiling.
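The least-cost diet design described above is a classic linear program. The sketch below uses a four-food toy problem; the foods, prices and compositions are hypothetical, not drawn from the French INCA database, and real models carry many more foods and constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Toy diet model: choose food quantities meeting nutrient floors at minimum cost.
foods = ["bread", "milk", "lentils", "cheese"]
cost = np.array([0.30, 0.15, 0.25, 0.90])        # euro per 100 g (hypothetical)
# rows: energy (kcal), protein (g), calcium (mg) per 100 g of each food
nutrients = np.array([
    [265.0, 64.0, 116.0, 400.0],   # energy
    [  9.0,  3.3,   9.0,  25.0],   # protein
    [ 20.0, 120.0,  19.0, 720.0],  # calcium
])
floors = np.array([2000.0, 50.0, 900.0])         # daily minimums

# linprog minimises c @ x subject to A_ub @ x <= b_ub, so the nutrient
# floors become -nutrients @ x <= -floors; cap each food at 10 x 100 g.
res = linprog(cost, A_ub=-nutrients, b_ub=-floors,
              bounds=[(0, 10)] * len(foods), method="highs")
assert res.success
for f, q in zip(foods, res.x):
    print(f"{f:>8}: {100 * q:.0f} g")
print(f"total cost: {res.fun:.2f} euro")
```

Tightening the nutritional constraints (raising floors, adding caps on limited nutrients) and watching which foods enter the optimal basket is exactly how the study probes whether profile scores track value for money.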

  2. Probing the solar core with low-degree p modes

    NASA Astrophysics Data System (ADS)

    Roxburgh, I. W.; Vorontsov, S. V.

    2002-01-01

    We address the question of what could be learned about the solar core structure if the seismic data were limited to low-degree modes only. The results of three different experiments are described. The first is the linearized structural inversion of the p-mode frequencies of a solar model modified slightly in the energy-generating core, using the original (unmodified) model as an initial guess. In the second experiment, we invert the solar p-mode frequencies measured in the 32-month subset of BiSON data (Chaplin et al. 1998), degraded with additional 0.1 μHz random errors, using a model of 2.6 Gyr age from the solar evolutionary sequence as an initial approximation. This second inversion is non-linear. In the third experiment, we compare the same set of BiSON frequencies with a current reference solar model.

  3. Linear theory of plasma Čerenkov masers

    NASA Astrophysics Data System (ADS)

    Birau, M.

    1996-11-01

    A different theoretical model of Čerenkov instability in the linear amplification regime of plasma Čerenkov masers is developed. The model assumes a cold relativistic annular electron beam propagating through a column of cold dense plasma, the two bodies being immersed in an infinite magnetic guiding field inside a perfect cylindrical waveguide. In order to simplify the calculations, a radial rectangular distribution of plasma and beam density is assumed and only azimuthally symmetric modes are under investigation. The model differs in taking into account the whole plasma and beam electromagnetic structures in the interpretation of the Čerenkov instability. This model leads to alternative results such as the possibility of emission at several frequencies. In addition, the electric field is calculated taking into account its radial phase dependence, so that a map of the field in the interaction region can be presented.

  4. Theory of bimolecular reactions in a solution with linear traps: Application to the problem of target search on DNA.

    PubMed

    Turkin, Alexander; van Oijen, Antoine M; Turkin, Anatoliy A

    2015-01-01

    One-dimensional sliding along DNA as a means to accelerate protein target search is a well-known phenomenon occurring in various biological systems. Using a biomimetic approach, we have recently demonstrated the practical use of DNA-sliding peptides to speed up bimolecular reactions more than an order of magnitude by allowing the reactants to associate not only in the solution by three-dimensional (3D) diffusion, but also on DNA via one-dimensional (1D) diffusion [A. Turkin et al., Chem. Sci. (2015)]. Here we present a mean-field kinetic model of a bimolecular reaction in a solution with linear extended sinks (e.g., DNA) that can intermittently trap molecules present in a solution. The model consists of chemical rate equations for mean concentrations of reacting species. Our model demonstrates that addition of linear traps to the solution can significantly accelerate reactant association. We show that at optimum concentrations of linear traps the 1D reaction pathway dominates in the kinetics of the bimolecular reaction; i.e., these 1D traps function as an assembly line of the reaction product. Moreover, we show that the association reaction on linear sinks between trapped reactants exhibits a nonclassical third-order behavior. Predictions of the model agree well with our experimental observations. Our model provides a general description of bimolecular reactions that are controlled by a combined 3D+1D mechanism and can be used to quantitatively describe both naturally occurring as well as biomimetic biochemical systems that reduce the dimensionality of search.
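
    A minimal mean-field sketch of the 3D+1D idea can be written as coupled rate equations; the rate constants and the trapping terms below are invented for illustration (the paper's model is richer, e.g. it accounts for trap-length effects):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Invented rate constants: 1D association on the traps is much faster than
# 3D association in solution; kon/koff describe trapping and escape.
k3, k1 = 1.0, 50.0
kon, koff = 5.0, 1.0

def rhs(t, y):
    A, B, At, Bt, P = y  # free A, B; trapped At, Bt; product P
    dA  = -k3 * A * B - kon * A + koff * At
    dB  = -k3 * A * B - kon * B + koff * Bt
    dAt =  kon * A - koff * At - k1 * At * Bt
    dBt =  kon * B - koff * Bt - k1 * At * Bt
    dP  =  k3 * A * B + k1 * At * Bt      # product from both pathways
    return [dA, dB, dAt, dBt, dP]

sol = solve_ivp(rhs, (0, 10), [1.0, 1.0, 0.0, 0.0, 0.0])
P_final = sol.y[4, -1]
print("product formed by t=10:", round(float(P_final), 3))
```

    Total A (free + trapped + in product) is conserved by these equations, which gives a quick sanity check; sweeping kon mimics varying the trap concentration to find the optimum the paper describes.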

  5. Fitting and forecasting coupled dark energy in the non-linear regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casas, Santiago; Amendola, Luca; Pettorino, Valeria

    2016-01-01

    We consider cosmological models in which dark matter feels a fifth force mediated by the dark energy scalar field, also known as coupled dark energy. Our interest resides in estimating forecasts for future surveys like Euclid when we take into account non-linear effects, relying on new fitting functions that reproduce the non-linear matter power spectrum obtained from N-body simulations. We obtain fitting functions for models in which the dark matter-dark energy coupling is constant. Their validity is demonstrated for all available simulations in the redshift range z = 0–1.6 and wave modes below k = 1 h/Mpc. These fitting formulas can be used to test the predictions of the model in the non-linear regime without the need for additional computing-intensive N-body simulations. We then use these fitting functions to perform forecasts on the constraining power that future galaxy-redshift surveys like Euclid will have on the coupling parameter, using the Fisher matrix method for galaxy clustering (GC) and weak lensing (WL). We find that by using information in the non-linear power spectrum, and combining the GC and WL probes, we can constrain the dark matter-dark energy coupling constant squared, β², with precision smaller than 4% and all other cosmological parameters better than 1%, which is a considerable improvement of more than an order of magnitude compared to corresponding linear power spectrum forecasts with the same survey specifications.

  6. Predicting microRNA-disease associations using label propagation based on linear neighborhood similarity.

    PubMed

    Li, Guanghui; Luo, Jiawei; Xiao, Qiu; Liang, Cheng; Ding, Pingjian

    2018-05-12

    Interactions between microRNAs (miRNAs) and diseases can yield important information for uncovering novel prognostic markers. Since experimental determination of disease-miRNA associations is time-consuming and costly, attention has been given to designing efficient and robust computational techniques for identifying undiscovered interactions. In this study, we present a label propagation model with linear neighborhood similarity, called LPLNS, to predict unobserved miRNA-disease associations. Additionally, a preprocessing step is performed to derive new interaction likelihood profiles that will contribute to the prediction since new miRNAs and diseases lack known associations. Our results demonstrate that the LPLNS model based on the known disease-miRNA associations could achieve impressive performance with an AUC of 0.9034. Furthermore, we observed that the LPLNS model based on new interaction likelihood profiles could improve the performance to an AUC of 0.9127. This was better than other comparable methods. In addition, case studies also demonstrated our method's outstanding performance for inferring undiscovered interactions between miRNAs and diseases, especially for novel diseases. Copyright © 2018. Published by Elsevier Inc.
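
    The label propagation step itself can be sketched as a simple fixed-point iteration; W below is a hypothetical stand-in for the paper's linear neighborhood similarity matrix, which is not reproduced here:

```python
import numpy as np

def label_propagation(W, y0, alpha=0.8, n_iter=100):
    """Iterate F <- alpha*W@F + (1-alpha)*y0; converges when the
    spectral radius of alpha*W is below 1."""
    F = y0.copy()
    for _ in range(n_iter):
        F = alpha * W @ F + (1 - alpha) * y0
    return F

# Toy 4-node similarity graph: nodes 0-1 and 2-3 are mutual neighbors.
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], float)
y0 = np.array([1.0, 0.0, 0.0, 0.0])   # node 0 carries a known association
scores = label_propagation(W, y0)
print(np.round(scores, 3))            # association spreads only to node 1
```

    In the miRNA-disease setting, y0 would hold the known association profile and the propagated scores rank candidate associations; the preprocessing for new miRNAs and diseases replaces an all-zero y0 with an interaction likelihood profile.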

  7. Modeling and simulation of driver's anticipation effect in a two lane system on curved road with slope

    NASA Astrophysics Data System (ADS)

    Kaur, Ramanpreet; Sharma, Sapna

    2018-06-01

    The complexity of traffic flow phenomena on a curved road with a slope is investigated and a new lattice model is presented with the addition of the driver's anticipation effect for a two-lane system. The condition under which the free flow turns into the jammed one is obtained theoretically by using stability analysis. The results obtained through linear analysis indicate that the stable region increases (decreases) in the uphill (downhill) case with increasing slope angle for a fixed anticipation parameter. It is found that when the vehicular density becomes higher than a critical value, a traffic jam appears in the form of kink-antikink density waves. Analytically, the kink-antikink density waves are described by the solution of the mKdV equation obtained from non-linear analysis. In addition, the theoretical results have been verified through numerical simulation, which confirms that the slope on a curved highway significantly influences the traffic dynamics and that traffic jams can be suppressed efficiently by considering the anticipation parameter in a two-lane lattice model when lane changing is allowed.

  8. Cost Effective Persistent Regional Surveillance with Reconfigurable Satellite Constellations

    DTIC Science & Technology

    2015-04-24

    region where both models show the most agreement and therefore the blended curves (in the bottom plot) are fairly smooth. Additionally, a learning ...payload cost Cpay. Cpay = 38000D1.6 + 60615D2.67 ($k FY2010) (11) Satellite cost is modeled by blending the output from the Small Satellite Cost Model...SSCM was used for Md ≤ 400kg and the USCM8 cost model was used for Md ≥ 200kg, and linear blending was used to smooth out the transition between models

  9. From Feynman rules to conserved quantum numbers, I

    NASA Astrophysics Data System (ADS)

    Nogueira, P.

    2017-05-01

    In the context of Quantum Field Theory (QFT) there is often the need to find sets of graph-like diagrams (the so-called Feynman diagrams) for a given physical model. A negative answer to the related problem 'Are there any diagrams with this set of external fields?' may settle certain physical questions at once. Here the latter problem is formulated in terms of a system of linear diophantine equations derived from the Lagrangian density, from which necessary conditions for the existence of the required diagrams may be obtained. Those conditions are equalities that look like either linear diophantine equations or linear modular (i.e. congruence) equations, and may be found by means of fairly simple algorithms that involve integer computations. The diophantine equations so obtained represent (particle) number conservation rules, and are related to the conserved (additive) quantum numbers that may be assigned to the fields of the model.
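
    The simplest such necessary condition is easy to state concretely: if an additive quantum number (electric charge, say, in units of e/3 so everything is an integer) is conserved at every vertex, then any diagram's external fields must sum to zero charge. A minimal check of that condition (the field list is standard; the check is ours, not the paper's algorithm for deriving the quantum numbers):

```python
# Charges in units of e/3 so that everything is an integer.
charge = {"e-": -3, "e+": 3, "photon": 0, "u": 2, "ubar": -2, "d": -1, "dbar": 1}

def charge_balanced(external):
    """Necessary condition: the external fields of any diagram sum to zero charge."""
    return sum(charge[f] for f in external) == 0

print(charge_balanced(["e-", "e+", "photon"]))     # allowed set of externals
print(charge_balanced(["e-", "photon", "photon"])) # forbidden: charge not conserved
```

    The paper works in the opposite direction: it derives such additive assignments as integer solutions of the diophantine system built from the Lagrangian's vertices, rather than assuming them.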

  10. Mean-square state and parameter estimation for stochastic linear systems with Gaussian and Poisson noises

    NASA Astrophysics Data System (ADS)

    Basin, M.; Maldonado, J. J.; Zendejo, O.

    2016-07-01

    This paper proposes a new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely measured polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows the effectiveness of the proposed mean-square filter and parameter estimator.
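
    The state-augmentation idea at the core of this approach can be sketched with a plain Gaussian Kalman filter; the paper's handling of Poisson noise and its exact mean-square equations are not reproduced, and the system and all numbers below are made up. An unknown constant drift u is appended to the state and recovered by the filter:

```python
import numpy as np

# Synthetic data: a state x drifts with unknown constant rate u_true.
rng = np.random.default_rng(1)
u_true, dt, n = 0.3, 0.1, 200
x, z = 0.0, np.empty(n)
for k in range(n):
    x += u_true * dt + 1e-3 * rng.standard_normal()
    z[k] = x + 0.1 * rng.standard_normal()

# Augmented state s = [x, u]: the unknown parameter u becomes an extra state.
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-6, 1e-8])     # tiny process noise; u is (almost) constant
R = np.array([[0.01]])        # measurement noise variance (0.1 std)
s, P = np.zeros(2), np.eye(2)
for k in range(n):
    s, P = F @ s, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    s = s + (K @ (z[k:k+1] - H @ s)).ravel()      # update
    P = (np.eye(2) - K @ H) @ P
print("estimated drift:", round(float(s[1]), 2))
```

    The filtered second component converges toward the true parameter value, which is exactly the sense in which the augmented-state filter "serves as an identifier" for unknown parameters.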

  11. Comparison of GWAS models to identify non-additive genetic control of flowering time in sunflower hybrids.

    PubMed

    Bonnafous, Fanny; Fievet, Ghislain; Blanchet, Nicolas; Boniface, Marie-Claude; Carrère, Sébastien; Gouzy, Jérôme; Legrand, Ludovic; Marage, Gwenola; Bret-Mestries, Emmanuelle; Munos, Stéphane; Pouilly, Nicolas; Vincourt, Patrick; Langlade, Nicolas; Mangin, Brigitte

    2018-02-01

    This study compares five GWAS models to show the added value of non-additive modeling of allelic effects in identifying genomic regions controlling flowering time of sunflower hybrids. Genome-wide association studies are a powerful and widely used tool to decipher the genetic control of complex traits. One of the main challenges for hybrid crops, such as maize or sunflower, is to model the hybrid vigor in linear mixed models, considering the relatedness between individuals. Here, we compared two additive and three non-additive association models for their ability to identify genomic regions associated with flowering time in sunflower hybrids. A panel of 452 sunflower hybrids, corresponding to incomplete crossing between 36 male lines and 36 female lines, was phenotyped in five environments and genotyped for 2,204,423 SNPs. Intra-locus effects were estimated in multi-locus models to detect genomic regions associated with flowering time using the different models. Thirteen quantitative trait loci were identified in total, two with both model categories and one with only non-additive models. A quantitative trait locus on LG09, detected by both the additive and non-additive models, is located near a GAI homolog and is presented in detail. Overall, this study shows the added value of non-additive modeling of allelic effects for identifying genomic regions that control traits of interest and that could participate in the heterosis observed in hybrids.
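
    The difference between additive and non-additive (here: dominance) allelic coding can be sketched with ordinary least squares standing in for the paper's mixed models, which additionally carry kinship-based random effects; the genotypes and trait below are simulated:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
g = rng.integers(0, 3, n)                 # SNP genotypes coded 0/1/2
dom = (g == 1).astype(float)              # heterozygote indicator
y = 0.5 * g + 1.0 * dom + rng.standard_normal(n)  # trait with a dominance effect

X_add = np.column_stack([np.ones(n), g])        # additive model
X_dom = np.column_stack([np.ones(n), g, dom])   # additive + dominance model

def fit_rss(X, y):
    """Residual sum of squares of an OLS fit."""
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(rss[0])

rss_add, rss_full = fit_rss(X_add, y), fit_rss(X_dom, y)
print("dominance term explains extra variance:", rss_full < rss_add)
```

    When the true locus acts non-additively, the purely additive coding leaves variance on the table, which is the intuition behind the extra QTL found only by the non-additive models.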

  12. Flight dynamics analysis and simulation of heavy lift airships. Volume 2: Technical manual

    NASA Technical Reports Server (NTRS)

    Ringland, R. F.; Tischler, M. B.; Jex, H. R.; Emmen, R. D.; Ashkenas, I. L.

    1982-01-01

    The mathematical models embodied in the simulation are described in considerable detail and with supporting evidence for the model forms chosen. In addition the trimming and linearization algorithms used in the simulation are described. Appendices to the manual identify reference material for estimating the needed coefficients for the input data and provide example simulation results.

  13. Linearized T-Matrix and Mie Scattering Computations

    NASA Technical Reports Server (NTRS)

    Spurr, R.; Wang, J.; Zeng, J.; Mishchenko, M. I.

    2011-01-01

    We present a new linearization of T-Matrix and Mie computations for light scattering by non-spherical and spherical particles, respectively. In addition to the usual extinction and scattering cross-sections and the scattering matrix outputs, the linearized models will generate analytical derivatives of these optical properties with respect to the real and imaginary parts of the particle refractive index, and (for non-spherical scatterers) with respect to the ''shape'' parameter (the spheroid aspect ratio, cylinder diameter/height ratio, Chebyshev particle deformation factor). These derivatives are based on the essential linearity of Maxwell's theory. Analytical derivatives are also available for polydisperse particle size distribution parameters such as the mode radius. The T-matrix formulation is based on the NASA Goddard Institute for Space Studies FORTRAN 77 code developed in the 1990s. The linearized scattering codes presented here are in FORTRAN 90 and will be made publicly available.

  14. Traveling-wave piezoelectric linear motor part II: experiment and performance evaluation.

    PubMed

    Ting, Yung; Li, Chun-Chung; Chen, Liang-Chiang; Yang, Chieh-Min

    2007-04-01

    This article continues the discussion of a traveling-wave piezoelectric linear motor. Part I of this article dealt with the design and analysis of the stator of a traveling-wave piezoelectric linear motor. In this part, the discussion focuses on the structure and modeling of the contact layer and the carriage. In addition, the performance analysis and evaluation of the linear motor are also dealt with in this study. The traveling wave is created by the stator, which is constructed from a series of bimorph actuators arranged in a line and connected to form a meander-line structure. Analytical and experimental results of the performance are presented and shown to be largely in agreement. Power losses due to friction and transmission are studied and found to be significant. Compared with other types of linear motors, the motor in this study is capable of supporting heavier loads and provides a larger thrust force.

  15. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates the optimization process considerably because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to χ², obtained from the Taylor series expansion of χ², is recovered by means of the Newton-Raphson algorithm.
The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones and does not apply to model functions which are multi-linear combinations of nonlinear functions.
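
    The separable ("linear-in-amplitudes") structure that GA-MLR exploits can be sketched as follows, with a crude random search standing in for the genetic algorithm and a synthetic noiseless biexponential decay as the data:

```python
import numpy as np

# Synthetic data: y(t) = 2*exp(-t/1.5) + 0.5*exp(-t/6).
t = np.linspace(0, 10, 200)
y = 2.0 * np.exp(-t / 1.5) + 0.5 * np.exp(-t / 6.0)

def rss_for(taus):
    """Amplitudes are linear: recover them by least squares for given decay times."""
    A = np.exp(-t[:, None] / np.asarray(taus)[None, :])
    amps, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((A @ amps - y) ** 2)), amps

# Crude random search over the two nonlinear parameters only
# (a stand-in for the genetic algorithm of the paper).
rng = np.random.default_rng(3)
best = (np.inf, None)
for _ in range(2000):
    taus = np.sort(rng.uniform(0.1, 10.0, 2))
    rss, _ = rss_for(taus)
    if rss < best[0]:
        best = (rss, taus)
print("best decay times:", np.round(best[1], 2), "RSS:", round(best[0], 4))
```

    For each candidate pair of decay times the amplitudes come for free from linear least squares, which is exactly why only the nonlinear parameters need a global search.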

  16. ADM Analysis of gravity models within the framework of bimetric variational formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golovnev, Alexey; Karčiauskas, Mindaugas; Nyrhinen, Hannu J., E-mail: agolovnev@yandex.ru, E-mail: mindaugas.karciauskas@helsinki.fi, E-mail: hannu.nyrhinen@helsinki.fi

    2015-05-01

    Bimetric variational formalism was recently employed to construct novel bimetric gravity models. In these models an affine connection is generated by an additional tensor field which is independent of the physical metric. In this work we demonstrate how the ADM decomposition can be applied to study such models and provide some technical intermediate details. Using ADM decomposition we are able to prove that a linear model is unstable as has previously been indicated by perturbative analysis. Moreover, we show that it is also very difficult if not impossible to construct a non-linear model which is ghost-free within the framework of bimetric variational formalism. However, we demonstrate that viable models are possible along similar lines of thought. To this end, we consider a set up in which the affine connection is a variation of the Levi-Civita one. As a proof of principle we construct a gravity model with a massless scalar field obtained this way.

  17. Genetic Algorithm for Solving Fuzzy Shortest Path Problem in a Network with mixed fuzzy arc lengths

    NASA Astrophysics Data System (ADS)

    Mahdavi, Iraj; Tajdin, Ali; Hassanzadeh, Reza; Mahdavi-Amiri, Nezam; Shafieian, Hosna

    2011-06-01

    We are concerned with the design of a model and an algorithm for computing a shortest path in a network having various types of fuzzy arc lengths. First, we develop a new technique for the addition of various fuzzy numbers in a path using α-cuts by proposing a linear least squares model to obtain membership functions for the considered additions. Then, using a recently proposed distance function for the comparison of fuzzy numbers, we propose a new approach to solve the fuzzy APSPP using a genetic algorithm. Examples are worked out to illustrate the applicability of the proposed model.
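
    The α-cut addition step can be sketched for the simplest case of triangular fuzzy numbers, where each α-level set is an interval and addition is interval addition (the paper's least-squares fitting of membership functions for mixed fuzzy-number types is not reproduced):

```python
import numpy as np

def alpha_cut(tri, alpha):
    """Interval (lower, upper) of a triangular fuzzy number (a, b, c) at level alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def add_fuzzy(t1, t2, levels=np.linspace(0, 1, 5)):
    """Add two triangular fuzzy numbers level-by-level via alpha-cuts."""
    return [(al,
             alpha_cut(t1, al)[0] + alpha_cut(t2, al)[0],
             alpha_cut(t1, al)[1] + alpha_cut(t2, al)[1])
            for al in levels]

# Two triangular arc lengths, (1, 2, 3) and (2, 3, 5): their sum is (3, 5, 8).
cuts = add_fuzzy((1, 2, 3), (2, 3, 5))
print("support:", cuts[0][1], cuts[0][2])   # alpha = 0
print("core:", cuts[-1][1], cuts[-1][2])    # alpha = 1
```

    Summing arc lengths along a candidate path this way yields the path's fuzzy length, which the distance-based comparison then ranks inside the genetic search.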

  18. A nonlinear Kalman filtering approach to embedded control of turbocharged diesel engines

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Siano, Pierluigi; Arsie, Ivan

    2014-10-01

    The development of efficient embedded control for turbocharged Diesel engines requires the programming of elaborate nonlinear control and filtering methods. To this end, in this paper nonlinear control for turbocharged Diesel engines is developed with the use of Differential flatness theory and the Derivative-free nonlinear Kalman Filter. It is shown that the dynamic model of the turbocharged Diesel engine is differentially flat and admits dynamic feedback linearization. It is also shown that the dynamic model can be written in the linear Brunovsky canonical form, for which a state feedback controller can be easily designed. To compensate for modeling errors and external disturbances the Derivative-free nonlinear Kalman Filter is used and redesigned as a disturbance observer. The filter consists of the Kalman Filter recursion on the linearized equivalent of the Diesel engine model and of an inverse transformation based on differential flatness theory, which makes it possible to obtain estimates for the state variables of the initial nonlinear model. Once the disturbance variables are identified, they can be compensated for by including an additional control term in the feedback loop. The efficiency of the proposed control method is tested through simulation experiments.

  19. Vocal fold tissue failure: preliminary data and constitutive modeling.

    PubMed

    Chan, Roger W; Siegmund, Thomas

    2004-08-01

    In human voice production (phonation), linear small-amplitude vocal fold oscillation occurs only under restricted conditions. Physiologically, phonation more often involves large-amplitude oscillation associated with tissue stresses and strains beyond their linear viscoelastic limits, particularly in the lamina propria extracellular matrix (ECM). This study reports some preliminary measurements of tissue deformation and failure response of the vocal fold ECM under large-strain shear. The primary goal was to formulate and test a novel constitutive model for vocal fold tissue failure, based on a standard-linear cohesive-zone (SL-CZ) approach. Tissue specimens of the sheep vocal fold mucosa were subjected to torsional deformation in vitro, at constant strain rates corresponding to twist rates of 0.01, 0.1, and 1.0 rad/s. The vocal fold ECM demonstrated nonlinear stress-strain and rate-dependent failure response, with a failure strain as low as 0.40 rad. A finite-element implementation of the SL-CZ model was capable of capturing the rate dependence in these preliminary data, demonstrating the model's potential for describing tissue failure. Further studies with additional tissue specimens and model improvements are needed to better understand vocal fold tissue failure.

  20. Influence of dynamic inflow on the helicopter vertical response

    NASA Technical Reports Server (NTRS)

    Chen, Robert T. N.; Hindson, William S.

    1986-01-01

    A study was conducted to investigate the effects of dynamic inflow on rotor-blade flapping and vertical motion of the helicopter in hover. Linearized versions of two dynamic inflow models, one developed by Carpenter and Fridovich and the other by Pitt and Peters, were incorporated in simplified rotor-body models and were compared for variations in thrust coefficient and the blade Lock number. In addition, a comparison was made between the results of the linear analysis, and the transient and frequency responses measured in flight on the CH-47B variable-stability helicopter. Results indicate that the correlations are good, considering the simplified model used. The linear analysis also shows that dynamic inflow plays a key role in destabilizing the flapping mode. The destabilized flapping mode, along with the inflow mode that the dynamic inflow introduces, results in a large initial overshoot in the vertical acceleration response to an abrupt input in the collective pitch. This overshoot becomes more pronounced as either the thrust coefficient or the blade Lock number is reduced. Compared with Carpenter's inflow model, Pitt's model tends to produce more oscillatory responses because of the less stable flapping mode predicted by it.

  1. Modelization of highly nonlinear waves in coastal regions

    NASA Astrophysics Data System (ADS)

    Gouin, Maïté; Ducrozet, Guillaume; Ferrant, Pierre

    2015-04-01

    The proposed work deals with the development of a highly non-linear model for water wave propagation in coastal regions. The accurate modelization of surface gravity waves is of major interest in ocean engineering, especially in the field of marine renewable energy. These marine structures are intended to be settled in coastal regions where the effect of variable bathymetry may be significant on local wave conditions. This study presents a numerical model for wave propagation over complex bathymetry. It is based on the High-Order Spectral (HOS) method, initially limited to the propagation of non-linear wave fields over a flat bottom. Such a model has been developed and validated at the LHEEA Lab. (Ecole Centrale Nantes) over the past few years and the current developments will enlarge its application range. This new numerical model will keep the interesting numerical properties of the original pseudo-spectral approach (convergence, efficiency with the use of FFTs, …) and enable the possibility to propagate highly non-linear wave fields over long times and large distances. Different validations will be provided in addition to the presentation of the method. At first, Bragg reflection will be studied with the proposed approach. If the Bragg condition is satisfied, the reflected wave generated by a sinusoidal bottom patch should be amplified as a result of resonant quadratic interactions between incident wave and bottom. Comparisons will be provided with experiments and reference solutions. Then, the method will be used to consider the transformation of a non-linear monochromatic wave as it propagates up and over a submerged bar. As the waves travel up the front slope of the bar, they steepen and high harmonics are generated due to non-linear interactions. Comparisons with experimental data will be provided. The different test cases will assess the accuracy and efficiency of the proposed method.

  2. A primer for biomedical scientists on how to execute model II linear regression analysis.

    PubMed

    Ludbrook, John

    2012-04-01

    1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
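
    The ordinary least products point estimates themselves are simple to compute; a minimal sketch follows (the OLP slope is the geometric-mean slope, signed by the correlation; as the article stresses, the 95% CIs require bootstrapping and are not shown here):

```python
import numpy as np

def olp(x, y):
    """Ordinary least products (geometric mean) regression: slope = sign(r)*sy/sx."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Toy method-comparison data in which both variables carry error.
x = np.array([1, 2, 3, 4, 5], float)
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
slope, intercept = olp(x, y)
print(round(slope, 2), round(intercept, 2))
```

    Unlike the Model I (OLS) slope, this estimate treats x and y symmetrically, which is the point of Model II regression when the x-values are themselves subject to error.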

  3. Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong

    2018-03-01

    Traffic safety research has developed spatiotemporal models to explore the variations in the spatial pattern of crash risk over time. Many studies observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research on the comparison of different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity due to the different temporal treatments: (I) linear time trend; (II) quadratic time trend; (III) Autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction which allows greater flexibility compared to the traditional linear space-time interaction. The mixture component allows the accommodation of global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness-of-fit, cross-validation and evaluation based on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification, which was evidently more complex due to the addition of information borrowed from neighboring years, but this addition of parameters allowed a significant advantage in posterior deviance, which subsequently benefited the overall fit to the crash data. Base models were also developed to study the comparison between the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models due to the advantage of much lower deviance.
For cross-validation comparison of predictive accuracy, the linear time trend model was adjudged the best, as it recorded the highest value of log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data for model development. Under each criterion, observed crash counts were compared with three types of data containing Bayesian estimated, normal predicted, and model replicated ones. The linear model again performed the best in most scenarios, except one case of using model replicated data and two cases involving prediction without including random effects. These phenomena indicated the mediocre performance of the linear trend when random effects were excluded for evaluation. This might be due to the flexible mixture space-time interaction, which can efficiently absorb the residual variability escaping from the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, as the mixture ones generated more precise estimated crash counts across all four models, suggesting that the advantages associated with the mixture component at model fit were transferable to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of random effect models, which validates the importance of incorporating correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. The development of global GRAPES 4DVAR

    NASA Astrophysics Data System (ADS)

    Liu, Yongzhu

    2017-04-01

Four-dimensional variational data assimilation (4DVAR) has contributed greatly to the improvement of NWP systems over the past twenty years. Our strategy, therefore, has been to develop an operational global 4D-Var system from the outset. The aim of this paper is to introduce the development of the global GRAPES four-dimensional variational data assimilation (4DVAR) system using an incremental analysis scheme, and to present a comparison between 4DVAR, run with a 6-hour assimilation window and simplified physics during the minimization, and three-dimensional variational data assimilation (3DVAR). The dynamical cores of the tangent-linear and adjoint models were developed directly from the non-hydrostatic forecast model, and the standard correctness checks have been performed. Beyond developing the adjoint code, most of our work focused on improving computational efficiency, since the bulk of the cost of 4D-Var lies in the integration of the tangent-linear and adjoint models. For the tangent-linear model, optimization of the software framework reduced the wall-clock time to about 1.2 times that of the nonlinear model. The significant cost savings in the adjoint model came from removing redundant recomputations of the model trajectory; encouragingly, the wall-clock time of the adjoint model is less than 1.5 times that of the nonlinear model. The current difficulty is that the numerical scheme used within the linear model is tied to the numerics of the corresponding nonlinear model, so further computational acceleration is expected from improvements to the nonlinear numerical algorithm. A series of linearized physical parameterization schemes has been developed to improve the representation of perturbed fields in the linear model. 
It consists of horizontal and vertical diffusion, sub-grid-scale orographic gravity wave drag, large-scale condensation, and cumulus convection schemes. We also found that straightforward linearization of the nonlinear physical schemes can lead to significant growth of spurious unstable perturbations, so it is essential to simplify the linear physics relative to the nonlinear schemes. With the linear physics included, the improvement in the perturbed fields of the tangent-linear model is visible, especially at low levels. The GRAPES variational data assimilation system adopts the incremental approach, and work is ongoing to develop a pre-operational 4DVAR suite with 0.25° outer-loop resolution and a multiple-outer-loop configuration. One 4DVAR analysis with a 6-hour assimilation window can be finished within 40 minutes using the available conventional and satellite data. In summary, in one-month experiments the GRAPES 4DVAR analysis performed better than GRAPES 3DVAR over the northern and southern hemispheres, the tropics, and East Asia. Moreover, the forecast results show that the northern and southern extra-tropical scores for GRAPES 4DVAR are already better than GRAPES 3DVAR, while the tropical performance needs further investigation. The main subsequent improvements will therefore aim to enhance computational efficiency and accuracy in 2017, with the global GRAPES 4DVAR planned for operational use in 2018.
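The "standard correctness checks" for tangent-linear and adjoint code typically comprise a Taylor test (the tangent-linear prediction error shrinks quadratically with the perturbation size) and the adjoint inner-product identity. A sketch on a toy two-variable nonlinear map, not the GRAPES dynamical core:

```python
import numpy as np

def nl(x):           # toy nonlinear "model" M(x)
    return np.array([x[0] * x[1], np.sin(x[0]) + x[1] ** 2])

def tlm(x, dx):      # hand-coded tangent-linear model: M'(x) dx
    return np.array([x[1] * dx[0] + x[0] * dx[1],
                     np.cos(x[0]) * dx[0] + 2.0 * x[1] * dx[1]])

def adj(x, dy):      # hand-coded adjoint model: M'(x)^T dy
    return np.array([x[1] * dy[0] + np.cos(x[0]) * dy[1],
                     x[0] * dy[0] + 2.0 * x[1] * dy[1]])

rng = np.random.default_rng(0)
x, dx, dy = rng.standard_normal((3, 2))

# Taylor test: ||M(x + a dx) - M(x) - a M'(x)dx|| is O(a^2)
a = 1e-4
taylor_resid = np.linalg.norm(nl(x + a * dx) - nl(x) - a * tlm(x, dx))

# Adjoint test: <M'(x)dx, dy> equals <dx, M'(x)^T dy> to machine precision
lhs = tlm(x, dx) @ dy
rhs = dx @ adj(x, dy)
```

For a correct implementation the adjoint identity holds to rounding error, while the Taylor residual at a = 1e-4 is of order 1e-8.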

  5. A 1-D model of the nonlinear dynamics of the human lumbar intervertebral disc

    NASA Astrophysics Data System (ADS)

    Marini, Giacomo; Huber, Gerd; Püschel, Klaus; Ferguson, Stephen J.

    2017-01-01

Lumped parameter models of the spine have been developed to investigate its response to whole body vibration. However, these models assume the behaviour of the intervertebral disc to be linear-elastic. Recently, the authors have reported on the nonlinear dynamic behaviour of the human lumbar intervertebral disc. This response was shown to be dependent on the applied preload and amplitude of the stimuli. However, the mechanical properties of a standard linear elastic model are not dependent on the current deformation state of the system. The aim of this study was therefore to develop a model that is able to describe the axial, nonlinear quasi-static response and to predict the nonlinear dynamic characteristics of the disc. The ability to adapt the model to an individual disc's response was a specific focus of the study, with model validation performed against prior experimental data. The influence of the numerical parameters used in the simulations was investigated. The developed model exhibited an axial quasi-static and dynamic response which agreed well with the corresponding experiments. However, the model needs further improvement to capture additional peculiar characteristics of the system dynamics, such as the change of the mean point of oscillation exhibited by the specimens when oscillating in the region of nonlinear resonance. Reference time steps were identified for each integration scheme. The study has demonstrated that taking into account the nonlinear-elastic behaviour typical of the intervertebral disc results in a predicted system oscillation much closer to the physiological response than that provided by linear-elastic models. For dynamic analysis, the use of standard linear-elastic models should be avoided, or restricted to study cases where the amplitude of the stimuli is relatively small.
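The idea of a 1-D lumped model with deformation-dependent stiffness can be sketched as a mass on a strain-stiffening (cubic) spring with viscous damping, driven by a preload plus a harmonic stimulus and integrated with a fixed-step RK4 scheme. The stiffening law and all parameter values below are illustrative, not the paper's calibrated model:

```python
import numpy as np

m, c = 40.0, 800.0             # mass [kg], damping [N s/m] (illustrative)
k1, k3 = 4.0e5, 5.0e10         # linear and cubic stiffness [N/m, N/m^3]
F0, Fa, w = 400.0, 100.0, 2 * np.pi * 10.0   # preload, amplitude, 10 Hz drive

def rhs(t, y):
    u, v = y
    spring = k1 * u + k3 * u ** 3            # nonlinear elastic force
    return np.array([v, (F0 + Fa * np.sin(w * t) - c * v - spring) / m])

def rk4(y0, t_end, dt):
    t, y, traj = 0.0, np.array(y0, float), []
    while t < t_end:
        k_1 = rhs(t, y)
        k_2 = rhs(t + dt / 2, y + dt / 2 * k_1)
        k_3 = rhs(t + dt / 2, y + dt / 2 * k_2)
        k_4 = rhs(t + dt, y + dt * k_3)
        y = y + dt / 6 * (k_1 + 2 * k_2 + 2 * k_3 + k_4)
        t += dt
        traj.append(y.copy())
    return np.array(traj)

traj = rk4([0.0, 0.0], t_end=2.0, dt=1e-4)
mean_disp = traj[-5000:, 0].mean()   # mean point of oscillation after transients
```

Because the spring force depends on the current deformation, the mean point of oscillation shifts with preload and stimulus amplitude, which a linear-elastic spring cannot reproduce.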

  6. Non-fragile consensus algorithms for a network of diffusion PDEs with boundary local interaction

    NASA Astrophysics Data System (ADS)

    Xiong, Jun; Li, Junmin

    2017-07-01

In this study, a non-fragile consensus algorithm is proposed to solve the average consensus problem for a network of diffusion PDEs, modelled by boundary-controlled heat equations. The problem addresses the case where the Neumann-type boundary controllers are corrupted by additive persistent disturbances. To achieve consensus between agents, a linear local interaction rule addressing this requirement is given. The proposed local interaction rules are analysed using a Lyapunov-based approach. Multiplicative and additive non-fragile feedback control algorithms are designed, and sufficient conditions for consensus of the multi-agent system are presented in terms of linear matrix inequalities. Simulation results are presented to support the effectiveness of the proposed algorithms.

  7. Electroacoustics modeling of piezoelectric welders for ultrasonic additive manufacturing processes

    NASA Astrophysics Data System (ADS)

    Hehr, Adam; Dapino, Marcelo J.

    2016-04-01

Ultrasonic additive manufacturing (UAM) is a recent 3D metal printing technology which utilizes ultrasonic vibrations from high-power piezoelectric transducers to additively weld similar and dissimilar metal foils. CNC machining is used intermittently with welding to create internal channels, to embed temperature-sensitive components, sensors, and materials, and to net-shape parts. Structural dynamics of the welder and work piece influence the performance of the welder and part quality. To understand the impact of structural dynamics on UAM, a linear time-invariant (LTI) model is used to relate the system inputs of shear force and electric current to the system outputs of welder velocity and voltage. Frequency response measurements are combined with in-situ operating measurements of the welder to identify model parameters and to verify model assumptions. The proposed LTI model can enhance process consistency and performance and guide the development of improved quality monitoring and control strategies.

  8. Estimating health state utility values for comorbid health conditions using SF-6D data.

    PubMed

    Ara, Roberta; Brazier, John

    2011-01-01

    When health state utility values for comorbid health conditions are not available, data from cohorts with single conditions are used to estimate scores. The methods used can produce very different results and there is currently no consensus on which is the most appropriate approach. The objective of the current study was to compare the accuracy of five different methods within the same dataset. Data collected during five Welsh Health Surveys were subgrouped by health status. Mean short-form 6 dimension (SF-6D) scores for cohorts with a specific health condition were used to estimate mean SF-6D scores for cohorts with comorbid conditions using the additive, multiplicative, and minimum methods, the adjusted decrement estimator (ADE), and a linear regression model. The mean SF-6D for subgroups with comorbid health conditions ranged from 0.4648 to 0.6068. The linear model produced the most accurate scores for the comorbid health conditions with 88% of values accurate to within the minimum important difference for the SF-6D. The additive and minimum methods underestimated or overestimated the actual SF-6D scores respectively. The multiplicative and ADE methods both underestimated the majority of scores. However, both methods performed better when estimating scores smaller than 0.50. Although the range in actual health state utility values (HSUVs) was relatively small, our data covered the lower end of the index and the majority of previous research has involved actual HSUVs at the upper end of possible ranges. Although the linear model gave the most accurate results in our data, additional research is required to validate our findings. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
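Three of the combination methods compared in this study are simple enough to state directly; the ADE and regression approaches require fitted parameters not given here, so they are omitted from this sketch. Utilities are on the scale where 1 is full health:

```python
def additive(u1, u2):
    """Sum the decrements from full health: 1 - [(1-u1) + (1-u2)]."""
    return u1 + u2 - 1.0

def multiplicative(u1, u2):
    """Multiply the single-condition utilities."""
    return u1 * u2

def minimum(u1, u2):
    """Assume the worse condition alone determines the comorbid state."""
    return min(u1, u2)

# two illustrative single-condition SF-6D scores
u1, u2 = 0.75, 0.60
estimates = {
    "additive": additive(u1, u2),              # 0.35
    "multiplicative": multiplicative(u1, u2),  # 0.45
    "minimum": minimum(u1, u2),                # 0.60
}
```

For utilities between 0 and 1 the three methods are always ordered additive ≤ multiplicative ≤ minimum, which is why the additive and minimum methods tend to under- and overestimate, respectively.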

  9. Whole-body PET parametric imaging employing direct 4D nested reconstruction and a generalized non-linear Patlak model

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Rahmim, Arman

    2014-03-01

Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images at a single bed. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction in order to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits a relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, a) enabling accelerated convergence of direct 4D multi-bed reconstruction, by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and b) enhancing the quantitative performance particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed for the simulation of dynamic multi-bed acquisitions. Quantitative analysis on the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the post-reconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
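The linear Patlak model referenced here relates the tissue activity C_T(t) to the plasma input C_p(t) via C_T(t)/C_p(t) = K_i · ∫C_p dτ / C_p(t) + V, so K_i and V come from a straight-line fit over the late frames. A post-reconstruction sketch on synthetic noiseless curves with illustrative values:

```python
import numpy as np

t = np.linspace(1, 60, 60)                 # frame mid-times [min]
cp = 10.0 * np.exp(-0.1 * t) + 1.0         # synthetic plasma input function
icp = np.cumsum(cp) * (t[1] - t[0])        # running integral of Cp

Ki_true, V_true = 0.05, 0.3
ct = Ki_true * icp + V_true * cp           # tissue curve obeying the Patlak model

# Patlak coordinates: y = Ct/Cp versus x = int(Cp)/Cp; fit the late frames only
x, y = icp / cp, ct / cp
A = np.vstack([x[30:], np.ones_like(x[30:])]).T
Ki_est, V_est = np.linalg.lstsq(A, y[30:], rcond=None)[0]
```

The generalized (non-linear) Patlak variant adds an efflux term to account for reversible uptake, which removes the underestimation of K_i but makes the fit non-linear in the parameters.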

  10. Forming limit strains for non-linear strain path of AA6014 aluminium sheet deformed at room temperature

    NASA Astrophysics Data System (ADS)

    Bressan, José Divo; Liewald, Mathias; Drotleff, Klaus

    2017-10-01

Forming limit strain curves of conventional aluminium alloy AA6014 sheets after loading with non-linear strain paths are presented and compared with the D-Bressan macroscopic model of sheet metal rupture by a critical shear stress criterion. AA6014 exhibits good formability at room temperature and is thus mainly employed in car body external parts manufactured at room temperature. Following Weber et al., experimental bi-linear strain paths were realised in specimens of 1 mm thickness by pre-stretching in the uniaxial and biaxial directions up to 5%, 10% and 20% strain levels before performing Nakajima tests to obtain the forming limit strain curves (FLCs). In addition, FLCs of AA6014 were predicted by employing the D-Bressan critical shear stress criterion for bi-linear strain paths, and comparisons with the experimental FLCs were analyzed and discussed. In order to obtain the material coefficients of plastic anisotropy and of strain and strain-rate hardening behaviour, and to calibrate the D-Bressan model, tensile tests at two different strain rates on specimens cut at 0°, 45° and 90° to the rolling direction, as well as bulge tests, were carried out at room temperature. The correlation of the experimental bi-linear strain path FLCs with the limit strains predicted by the D-Bressan model is reasonably good, assuming equivalent pre-strain calculated by the Hill 1979 yield criterion.

  11. Analysis of dynamic cerebral autoregulation using an ARX model based on arterial blood pressure and middle cerebral artery velocity simulation.

    PubMed

    Liu, Y; Allen, R

    2002-09-01

    The study aimed to model the cerebrovascular system, using a linear ARX model based on data simulated by a comprehensive physiological model, and to assess the range of applicability of linear parametric models. Arterial blood pressure (ABP) and middle cerebral arterial blood flow velocity (MCAV) were measured from 11 subjects non-invasively, following step changes in ABP, using the thigh cuff technique. By optimising parameters associated with autoregulation, using a non-linear optimisation technique, the physiological model showed a good performance (r=0.83+/-0.14) in fitting MCAV. An additional five sets of measured ABP of length 236+/-154 s were acquired from a subject at rest. These were normalised and rescaled to coefficients of variation (CV=SD/mean) of 2% and 10% for model comparisons. Randomly generated Gaussian noise with standard deviation (SD) from 1% to 5% was added to both ABP and physiologically simulated MCAV (SMCAV), with 'normal' and 'impaired' cerebral autoregulation, to simulate the real measurement conditions. ABP and SMCAV were fitted by ARX modelling, and cerebral autoregulation was quantified by a 5 s recovery percentage R5% of the step responses of the ARX models. The study suggests that cerebral autoregulation can be assessed by computing the R5% of the step response of an ARX model of appropriate order, even when measurement noise is considerable.
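An ARX model of the kind used in this study can be identified by ordinary least squares, and its step response computed directly; R5% is then read off the response 5 s after the step. A sketch on synthetic first-order data, not the paper's physiological simulator:

```python
import numpy as np

def fit_arx(u, y, na, nb):
    """Least-squares ARX fit: y[n] = sum_i a_i y[n-i] + sum_j b_j u[n-j]."""
    m = max(na, nb)
    rows = [np.r_[y[n - na:n][::-1], u[n - nb:n][::-1]] for n in range(m, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[m:], rcond=None)
    return theta[:na], theta[na:]

def step_response(a, b, n_steps):
    """Simulate the fitted difference equation for a unit step input."""
    u = np.ones(n_steps)
    y = np.zeros(n_steps)
    for n in range(n_steps):
        y[n] = sum(a[i] * y[n - 1 - i] for i in range(len(a)) if n - 1 - i >= 0) \
             + sum(b[j] * u[n - 1 - j] for j in range(len(b)) if n - 1 - j >= 0)
    return y

# synthetic data from a known first-order system: y[n] = 0.8 y[n-1] + 0.2 u[n-1]
rng = np.random.default_rng(1)
u = rng.standard_normal(500)
y = np.zeros(500)
for n in range(1, 500):
    y[n] = 0.8 * y[n - 1] + 0.2 * u[n - 1]

a, b = fit_arx(u, y, na=1, nb=1)
resp = step_response(a, b, 100)
```

Given the sampling interval, a recovery percentage such as R5% is computed from how far `resp` has returned toward its baseline at the sample corresponding to 5 s.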

  12. Using generalized additive (mixed) models to analyze single case designs.

    PubMed

    Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J

    2014-04-01

    This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  13. The linear -- non-linear frontier for the Goldstone Higgs

    DOE PAGES

    Gavela, M. B.; Kanshin, K.; Machado, P. A. N.; ...

    2016-12-01

    The minimal $SO(5)/SO(4)$ sigma model is used as a template for the ultraviolet completion of scenarios in which the Higgs particle is a low-energy remnant of some high-energy dynamics, enjoying a (pseudo) Nambu-Goldstone boson ancestry. Varying the $\sigma$ mass allows one to sweep from the perturbative regime to the customary non-linear implementations. The low-energy benchmark effective non-linear Lagrangian for bosons and fermions is obtained, determining as well the operator coefficients including linear corrections. At first order in the latter, three effective bosonic operators emerge which are independent of the explicit soft breaking assumed. The Higgs couplings to vector bosons and fermions turn out to be quite universal: the linear corrections are proportional to the explicit symmetry breaking parameters. Furthermore, we define an effective Yukawa operator which allows a simple parametrization and comparison of different heavy fermion ultraviolet completions. In addition, one particular fermionic completion is explored in detail, obtaining the corresponding leading low-energy fermionic operators.

  14. Selection of higher order regression models in the analysis of multi-factorial transcription data.

    PubMed

    Prazeres da Costa, Olivia; Hoffman, Arthur; Rey, Johannes W; Mansmann, Ulrich; Buch, Thorsten; Tresch, Achim

    2014-01-01

    Many studies examine gene expression data that has been obtained under the influence of multiple factors, such as genetic background, environmental conditions, or exposure to diseases. The interplay of multiple factors may lead to effect modification and confounding. Higher order linear regression models can account for these effects. We present a new methodology for linear model selection and apply it to microarray data of bone marrow-derived macrophages. This experiment investigates the influence of three variable factors: the genetic background of the mice from which the macrophages were obtained, Yersinia enterocolitica infection (two strains, and a mock control), and treatment/non-treatment with interferon-γ. We set up four different linear regression models in a hierarchical order. We introduce the eruption plot as a new practical tool for model selection complementary to global testing. It visually compares the size and significance of effect estimates between two nested models. Using this methodology we were able to select the most appropriate model by keeping only relevant factors showing additional explanatory power. Application to experimental data allowed us to qualify the interaction of factors as either neutral (no interaction), alleviating (co-occurring effects are weaker than expected from the single effects), or aggravating (stronger than expected). We find a biologically meaningful gene cluster of putative C2TA target genes that appear to be co-regulated with MHC class II genes. We introduced the eruption plot as a tool for visual model comparison to identify relevant higher order interactions in the analysis of expression data obtained under the influence of multiple factors. We conclude that model selection in higher order linear regression models should generally be performed for the analysis of multi-factorial microarray data.

  15. Simultaneous masking additivity for short Gaussian-shaped tones: spectral effects.

    PubMed

    Laback, Bernhard; Necciari, Thibaud; Balazs, Peter; Savel, Sophie; Ystad, Sølvi

    2013-08-01

    Laback et al. [(2011). J. Acoust. Soc. Am. 129, 888-897] investigated the additivity of nonsimultaneous masking using short Gaussian-shaped tones as maskers and target. The present study involved Gaussian stimuli to measure the additivity of simultaneous masking for combinations of up to four spectrally separated maskers. According to most basilar membrane measurements, the maskers should be processed linearly at the characteristic frequency (CF) of the target. Assuming also compression of the target, all masker combinations should produce excess masking (exceeding linear additivity). The results for a pair of maskers flanking the target indeed showed excess masking. The amount of excess masking could be predicted by a model assuming summation of masker-evoked excitations in intensity units at the target CF and compression of the target, using compressive input/output functions derived from the nonsimultaneous masking study. However, the combinations of lower-frequency maskers showed much less excess masking than predicted by the model. This cannot easily be attributed to factors like off-frequency listening, combination tone perception, or between-masker suppression. It was better predicted, however, by assuming weighted intensity summation of masker excitations. The optimum weights for the lower-frequency maskers were smaller than one, consistent with partial masker compression as indicated by recent psychoacoustic data.

  16. Meteorological influences on the interannual variability of meningitis incidence in northwest Nigeria.

    NASA Astrophysics Data System (ADS)

    Abdussalam, Auwal; Monaghan, Andrew; Dukic, Vanja; Hayden, Mary; Hopson, Thomas; Leckebusch, Gregor

    2013-04-01

    Northwest Nigeria is a region with high risk of bacterial meningitis. Since the first documented epidemic of meningitis in Nigeria in 1905, the disease has been endemic in the northern part of the country, with epidemics occurring regularly. In this study we examine the influence of climate on the interannual variability of meningitis incidence and epidemics. Monthly aggregate counts of clinically confirmed hospital-reported cases of meningitis were collected in northwest Nigeria for the 22-year period spanning 1990-2011. Several generalized linear statistical models were fit to the monthly meningitis counts, including generalized additive models. Explanatory variables included monthly records of temperatures, humidity, rainfall, wind speed, sunshine and dustiness from weather stations nearest to the hospitals, and a time series of polysaccharide vaccination efficacy. The effects of other confounding factors -- i.e., mainly non-climatic factors for which records were not available -- were estimated as a smooth, monthly-varying function of time in the generalized additive models. Results reveal that the most important explanatory climatic variables are mean maximum monthly temperature, relative humidity and dustiness. Accounting for confounding factors (e.g., social processes) in the generalized additive models explains more of the year-to-year variation of meningococcal disease compared to those generalized linear models that do not account for such factors. Promising results from several models that included only explanatory variables that preceded the meningitis case data by 1-month suggest there may be potential for prediction of meningitis in northwest Nigeria to aid decision makers on this time scale.

  17. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee

    2015-08-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.

  18. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach

    PubMed Central

    Duarte, Belmiro P. M.; Wong, Weng Kee

    2014-01-01

    Summary This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted. PMID:26512159

  19. Estimating Additive and Non-Additive Genetic Variances and Predicting Genetic Merits Using Genome-Wide Dense Single Nucleotide Polymorphism Markers

    PubMed Central

    Su, Guosheng; Christensen, Ole F.; Ostersen, Tage; Henryon, Mark; Lund, Mogens S.

    2012-01-01

    Non-additive genetic variation is usually ignored when genome-wide markers are used to study the genetic architecture and genomic prediction of complex traits in human, wild life, model organisms or farm animals. However, non-additive genetic effects may have an important contribution to total genetic variation of complex traits. This study presented a genomic BLUP model including additive and non-additive genetic effects, in which additive and non-additive genetic relation matrices were constructed from information of genome-wide dense single nucleotide polymorphism (SNP) markers. In addition, this study for the first time proposed a method to construct dominance relationship matrix using SNP markers and demonstrated it in detail. The proposed model was implemented to investigate the amounts of additive genetic, dominance and epistatic variations, and assessed the accuracy and unbiasedness of genomic predictions for daily gain in pigs. In the analysis of daily gain, four linear models were used: 1) a simple additive genetic model (MA), 2) a model including both additive and additive by additive epistatic genetic effects (MAE), 3) a model including both additive and dominance genetic effects (MAD), and 4) a full model including all three genetic components (MAED). Estimates of narrow-sense heritability were 0.397, 0.373, 0.379 and 0.357 for models MA, MAE, MAD and MAED, respectively. Estimated dominance variance and additive by additive epistatic variance accounted for 5.6% and 9.5% of the total phenotypic variance, respectively. Based on model MAED, the estimate of broad-sense heritability was 0.506. Reliabilities of genomic predicted breeding values for the animals without performance records were 28.5%, 28.8%, 29.2% and 29.5% for models MA, MAE, MAD and MAED, respectively. In addition, models including non-additive genetic effects improved unbiasedness of genomic predictions. PMID:23028912
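The additive genomic relationship matrix used in such GBLUP models is commonly built with VanRaden's first method, G = ZZ' / (2 Σ p_j(1 − p_j)), where Z centres the 0/1/2 genotype codes by twice the allele frequency; the dominance relationship matrix proposed in this study follows the same pattern with a different marker coding. A minimal sketch of the additive matrix on random genotypes (illustrative data, not the pig dataset):

```python
import numpy as np

rng = np.random.default_rng(42)
n_animals, n_snps = 20, 500
M = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)  # 0/1/2 codes

p = M.mean(axis=0) / 2.0                       # observed allele frequencies
Z = M - 2.0 * p                                # centre by twice the frequency
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))    # VanRaden (2008), method 1
```

Because each column of Z has mean zero, G is symmetric and its rows sum to zero when frequencies are estimated from the data itself.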

  20. Incentives for knowledge sharing: impact of organisational culture and information technology

    NASA Astrophysics Data System (ADS)

    Lyu, Hongbo; Zhang, Zuopeng Justin

    2017-10-01

    This research presents and examines an analytical model of knowledge management in which organisational culture dynamically improves with knowledge-sharing and learning activities within organisations. We investigate the effects of organisational incentives and the level of information technology on the motivation of knowledge sharing. We derive a linear incentive reward structure for knowledge sharing under both homogeneous and heterogeneous conditions. In addition, we show how the organisational culture and the optimum linear sharing reward change with several crucial factors, and summarise three sets of methods (strong IT support, congruent organisational culture, and effective employee assessment) to complement the best linear incentive. Our research provides valuable insights for practitioners in terms of implementing knowledge-management initiatives.

  1. Lifting primordial non-Gaussianity above the noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welling, Yvette; Woude, Drian van der; Pajer, Enrico, E-mail: welling@strw.leidenuniv.nl, E-mail: D.C.vanderWoude@uu.nl, E-mail: enrico.pajer@gmail.com

    2016-08-01

    Primordial non-Gaussianity (PNG) in Large Scale Structures is obfuscated by the many additional sources of non-linearity. Within the Effective Field Theory approach to Standard Perturbation Theory, we show that matter non-linearities in the bispectrum can be modeled sufficiently well to strengthen current bounds with near-future surveys such as Euclid. We find that the EFT corrections are crucial to this improvement in sensitivity. Yet our understanding of non-linearities is still insufficient to reach important theoretical benchmarks for equilateral PNG, while for local PNG our forecast is more optimistic. We consistently account for the theoretical error intrinsic to the perturbative approach and discuss the details of its implementation in Fisher forecasts.

  2. The Quantitative-MFG Test: A Linear Mixed Effect Model to Detect Maternal-Offspring Gene Interactions.

    PubMed

    Clark, Michelle M; Blangero, John; Dyer, Thomas D; Sobel, Eric M; Sinsheimer, Janet S

    2016-01-01

    Maternal-offspring gene interactions, aka maternal-fetal genotype (MFG) incompatibilities, are neglected in complex diseases and quantitative trait studies. They are implicated in birth to adult onset diseases but there are limited ways to investigate their influence on quantitative traits. We present the quantitative-MFG (QMFG) test, a linear mixed model where maternal and offspring genotypes are fixed effects and residual correlations between family members are random effects. The QMFG handles families of any size, common or general scenarios of MFG incompatibility, and additional covariates. We develop likelihood ratio tests (LRTs) and rapid score tests and show they provide correct inference. In addition, the LRT's alternative model provides unbiased parameter estimates. We show that testing the association of SNPs by fitting a standard model, which only considers the offspring genotypes, has very low power or can lead to incorrect conclusions. We also show that offspring genetic effects are missed if the MFG modeling assumptions are too restrictive. With genome-wide association study data from the San Antonio Family Heart Study, we demonstrate that the QMFG score test is an effective and rapid screening tool. The QMFG test therefore has important potential to identify pathways of complex diseases for which the genetic etiology remains to be discovered. © 2015 John Wiley & Sons Ltd/University College London.

  3. Extended MHD modeling of nonlinear instabilities in fusion and space plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Germaschewski, Kai

    A number of different sub-projects were pursued within this DOE early career project. The primary focus was on using fully nonlinear, curvilinear, extended MHD simulations of instabilities with applications to fusion and space plasmas. In particular, we performed comprehensive studies of the dynamics of the double tearing mode in different regimes and configurations, using Cartesian and cylindrical geometry and investigating both linear and non-linear dynamics. In addition to traditional extended MHD involving the Hall term and electron pressure gradient, we also employed a new multi-fluid moment model, which shows great promise for incorporating kinetic effects, in particular off-diagonal elements of the pressure tensor, in a fluid model that is naturally much cheaper computationally than fully kinetic particle or Vlasov simulations. We used our Vlasov code for detailed studies of how weak collisions affect plasma echoes. In addition, we have played an important supporting role working with the PPPL theory group around Will Fox and Amitava Bhattacharjee, providing simulation support for HED plasma experiments performed at high-powered laser facilities like OMEGA-EP in Rochester, NY. This project has supported a great number of computational advances in our fluid and kinetic plasma models and has been crucial to winning multiple INCITE computer time awards that supported our computational modeling.

  4. Hypothalamic stimulation and baroceptor reflex interaction on renal nerve activity.

    NASA Technical Reports Server (NTRS)

    Wilson, M. F.; Ninomiya, I.; Franz, G. N.; Judy, W. V.

    1971-01-01

    The basal level of mean renal nerve activity (MRNA-0) measured in anesthetized cats was found to be modified by the additive interaction of hypothalamic and baroceptor reflex influences. Data were collected with the four major baroceptor nerves either intact or cut, and with mean aortic pressure (MAP) either clamped with a reservoir or raised with l-epinephrine. With intact baroceptor nerves, MRNA stayed essentially constant at level MRNA-0 for MAP below an initial pressure P1, and fell approximately linearly to zero as MAP was raised to P2. Cutting the baroceptor nerves kept MRNA at MRNA-0 (assumed to represent basal central neural output) independent of MAP. The addition of hypothalamic stimulation produced nearly constant increments in MRNA for all pressure levels up to P2, with complete inhibition at some level above P2. The increments in MRNA depended on frequency and location of the stimulus. A piecewise linear model describes MRNA as a linear combination of hypothalamic, basal central neural, and baroceptor reflex activity.
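    The piecewise linear model in this record can be written as a small function: renal nerve activity is the basal central output plus a hypothalamic increment, minus a baroceptor reflex term that ramps linearly between pressures P1 and P2. All numbers below are illustrative placeholders, not the measured values from the cats.

```python
# Piecewise linear sketch of mean renal nerve activity (MRNA) as a linear
# combination of basal central, hypothalamic, and baroceptor reflex influences.
# MRNA-0, P1, and P2 values are hypothetical.

def mrna(map_mmhg, mrna0=100.0, hypo=0.0, p1=80.0, p2=160.0):
    """MRNA as a function of mean aortic pressure (MAP)."""
    drive = mrna0 + hypo                                  # basal + hypothalamic drive
    if map_mmhg <= p1:
        reflex = 0.0                                      # baroceptors silent below P1
    elif map_mmhg < p2:
        reflex = mrna0 * (map_mmhg - p1) / (p2 - p1)      # linear inhibition between P1 and P2
    else:
        reflex = mrna0                                    # basal component fully inhibited at P2
    return max(drive - reflex, 0.0)
```

    For example, `mrna(120.0)` returns half the basal level, while `mrna(120.0, hypo=30.0)` shows the nearly constant hypothalamic increment riding on top of the reflex ramp.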

  5. Interaction Models for Functional Regression.

    PubMed

    Usset, Joseph; Staicu, Ana-Maria; Maity, Arnab

    2016-02-01

    A functional regression model with a scalar response and multiple functional predictors is proposed that accommodates two-way interactions in addition to their main effects. The proposed estimation procedure models the main effects using penalized regression splines, and the interaction effect by a tensor product basis. Extensions to generalized linear models and data observed on sparse grids or with measurement error are presented. A hypothesis testing procedure for the functional interaction effect is described. The proposed method can be easily implemented through existing software. Numerical studies show that fitting an additive model in the presence of interaction leads to both poor estimation performance and lost prediction power, while fitting an interaction model where there is in fact no interaction leads to negligible losses. The methodology is illustrated on the AneuRisk65 study data.

  6. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    DOE PAGES

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
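    The Neumann-Ulam family of methods analyzed in this record solves x = Hx + b by sampling the Neumann series with random walks. The sketch below implements the closely related forward walk on a Jacobi split of a small symmetric system (the adjoint variant studied in the paper transposes the walk); the matrix, source, and walk count are arbitrary choices for illustration.

```python
import numpy as np

# Forward Neumann-Ulam Monte Carlo for x = H x + c, with H from a Jacobi split
# of A x = b. Assumes row sums of |H| are below one so walks terminate by
# absorption; weights then reduce to sign corrections.

rng = np.random.default_rng(1)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

D = np.diag(A)
H = np.eye(3) - A / D[:, None]     # Jacobi iteration matrix
c = b / D

def mc_component(i, n_walks=20000):
    """Estimate x_i by averaging random-walk tallies of the source c."""
    r = np.abs(H).sum(axis=1)      # continuation probability per state
    total = 0.0
    for _ in range(n_walks):
        state, w, tally = i, 1.0, 0.0
        while True:
            tally += w * c[state]
            if rng.random() >= r[state]:          # walk absorbed
                break
            p = np.abs(H[state]) / r[state]       # transition probabilities
            nxt = rng.choice(len(c), p=p)
            w *= np.sign(H[state, nxt])           # weight correction (sign only)
            state = int(nxt)
        total += tally
    return total / n_walks

x_mc = np.array([mc_component(i) for i in range(3)])
x_exact = np.linalg.solve(A, b)
```

    The average walk length here is set by the continuation probabilities `r`, which is exactly the quantity the record relates to the operator's spectrum.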

  7. Developing CORE model-based worksheet with recitation task to facilitate students’ mathematical communication skills in linear algebra course

    NASA Astrophysics Data System (ADS)

    Risnawati; Khairinnisa, S.; Darwis, A. H.

    2018-01-01

    The purpose of this study was to develop a CORE model-based worksheet with recitation task that was valid and practical and could facilitate students’ mathematical communication skills in a Linear Algebra course. This study was conducted in the mathematics education department of a public university in Riau, Indonesia. Participants of the study were media and subject matter experts as validators, as well as students from the mathematics education department. The objects of this study were the students’ worksheet and students’ mathematical communication skills. The results of the study showed that: (1) based on validation by the experts, the developed students’ worksheet was valid and could be applied to students in Linear Algebra courses; (2) based on the group trial, the practicality percentage was 92.14% in the small group and 90.19% in the large group, so the worksheet was very practical and could attract students to learn; and (3) based on the post-test, the average percentage of ideals was 87.83%. In addition, the results showed that the students’ worksheet was able to facilitate students’ mathematical communication skills in the linear algebra course.

  8. Structural Equation Modeling: A Framework for Ocular and Other Medical Sciences Research

    PubMed Central

    Christ, Sharon L.; Lee, David J.; Lam, Byron L.; Zheng, D. Diane

    2017-01-01

    Structural equation modeling (SEM) is a modeling framework that encompasses many types of statistical models and can accommodate a variety of estimation and testing methods. SEM has been used primarily in social sciences but is increasingly used in epidemiology, public health, and the medical sciences. SEM provides many advantages for the analysis of survey and clinical data, including the ability to model latent constructs that may not be directly observable. Another major feature is simultaneous estimation of parameters in systems of equations that may include mediated relationships, correlated dependent variables, and in some instances feedback relationships. SEM allows for the specification of theoretically holistic models because multiple and varied relationships may be estimated together in the same model. SEM has recently expanded by adding generalized linear modeling capabilities that include the simultaneous estimation of parameters of different functional form for outcomes with different distributions in the same model. Therefore, mortality modeling and other relevant health outcomes may be evaluated. Random effects estimation using latent variables has been advanced in the SEM literature and software. In addition, SEM software has increased estimation options. Therefore, modern SEM is quite general and includes model types frequently used by health researchers, including generalized linear modeling, mixed effects linear modeling, and population average modeling. This article does not present any new information. It is meant as an introduction to SEM and its uses in ocular and other health research. PMID:24467557

  9. Linear indices of the "molecular pseudograph's atom adjacency matrix": definition, significance-interpretation, and application to QSAR analysis of flavone derivatives as HIV-1 integrase inhibitors.

    PubMed

    Marrero-Ponce, Yovani

    2004-01-01

    This report describes a new set of molecular descriptors of relevance to QSAR/QSPR studies and drug design, the atom linear indices f_k(x_i). These atomic-level chemical descriptors are based on the calculation of linear maps on R^n [f_k(x_i): R^n -> R^n] in the canonical basis. In this context, the kth power of the molecular pseudograph's atom adjacency matrix [M^k(G)] is the matrix of f_k(x_i) with respect to the canonical basis. In addition, a local-fragment (atom-type) formalism was developed. The kth atom-type linear indices are calculated by summing the kth atom linear indices of all atoms of the same atom type in the molecule. Moreover, total (whole-molecule) linear indices are also proposed. This descriptor is a linear functional (linear form) on R^n; that is, the kth total linear index is a linear map from R^n to the scalar field R [f_k(x): R^n -> R], calculated by summing the atom linear indices of all atoms in the molecule. The features of the kth total and local linear indices are illustrated with various types of molecular structures, including chain lengthening, branching, heteroatom content, and multiple bonds. Additionally, the linear independence of the local linear indices from other 0D, 1D, 2D, and 3D molecular descriptors is demonstrated using principal component analysis of 42 very heterogeneous molecules. Much redundancy and overlap was found among the total linear indices and most of the other structural indices presently in use in QSPR/QSAR practice. On the contrary, the information carried by the atom-type linear indices was strikingly different from that codified in most of the 229 0D-3D molecular descriptors used in this study. It is concluded that the local linear indices are independent indices containing important structural information to be used in QSPR/QSAR and drug design studies.
In this sense, atom, atom-type, and total linear indices were used to predict pIC50 values for the cleavage process of a set of flavone derivative inhibitors of HIV-1 integrase. The quantitative models found are statistically significant (R of 0.965, 0.902, and 0.927, respectively) and permit a clear interpretation of the studied properties in terms of the structural features of the molecules. A LOO cross-validation procedure revealed that the regression models had fairly good predictability (q2 of 0.679, 0.543, and 0.721, respectively). Comparison with other approaches reveals good behavior of the proposed method. The approach described in this paper appears to be an excellent alternative or guide for the discovery and optimization of new lead compounds.
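    The core computation behind these descriptors is just powers of the atom adjacency matrix applied to a molecular vector. The sketch below uses a made-up 4-atom adjacency matrix and, for illustration, the all-ones vector in place of a vector of atomic properties; it is not the author's parameterization.

```python
import numpy as np

# Sketch of kth atom linear indices and the kth total (whole-molecule) linear
# index via powers of a (hypothetical) atom adjacency matrix M.

M = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

def atom_linear_indices(M, k):
    """kth atom linear indices: M^k applied to the molecular vector (all-ones here)."""
    return np.linalg.matrix_power(M, k) @ np.ones(M.shape[0])

def total_linear_index(M, k):
    """kth total linear index: sum of the kth atom linear indices over all atoms."""
    return atom_linear_indices(M, k).sum()
```

    For k = 1 this reduces to vertex degrees; for larger k it counts walks of length k, which is how the indices encode branching and chain length.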

  10. Linear-quadratic-Gaussian synthesis with reduced parameter sensitivity

    NASA Technical Reports Server (NTRS)

    Lin, J. Y.; Mingori, D. L.

    1992-01-01

    We present a method for improving the tolerance of a conventional LQG controller to parameter errors in the plant model. The improvement is achieved by introducing additional terms reflecting the structure of the parameter errors into the LQR cost function, and also the process and measurement noise models. Adjusting the sizes of these additional terms permits a trade-off between robustness and nominal performance. Manipulation of some of the additional terms leads to high gain controllers while other terms lead to low gain controllers. Conditions are developed under which the high-gain approach asymptotically recovers the robustness of the corresponding full-state feedback design, and the low-gain approach makes the closed-loop poles asymptotically insensitive to parameter errors.
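    The trade-off described in this record starts from a standard LQR/LQG design whose cost and noise terms are then augmented. A minimal sketch of that baseline ingredient, written in discrete time for self-containedness and using Riccati value iteration on a toy plant (this is not the paper's synthesis procedure), is:

```python
import numpy as np

# Baseline discrete-time LQR via Riccati value iteration. The robustness
# modifications described above would add structured terms to Q (and to the
# process/measurement noise models in the LQG case); here Q and R are plain.

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # toy double-integrator-like plant
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                       # state cost (robustness terms would be added here)
R = np.array([[0.1]])               # control cost

P = Q.copy()
for _ in range(500):                # iterate the discrete Riccati recursion to a fixed point
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal state-feedback gain
eigs = np.linalg.eigvals(A - B @ K)                 # closed-loop eigenvalues
```

    Enlarging selected entries of `Q` (or of the noise covariances in the estimator design) is the lever the record describes for trading nominal performance against robustness.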

  11. Quantitative Modeling of Entangled Polymer Rheology: Experiments, Tube Models and Slip-Link Simulations

    NASA Astrophysics Data System (ADS)

    Desai, Priyanka Subhash

    Rheology properties are sensitive indicators of molecular structure and dynamics. The relationship between rheology and polymer dynamics is captured in the constitutive model, which, if accurate and robust, would greatly aid molecular design and polymer processing. This dissertation is thus focused on building accurate and quantitative constitutive models that can help predict linear and non-linear viscoelasticity. In this work, we have used a multi-pronged approach based on the tube theory, coarse-grained slip-link simulations, and advanced polymeric synthetic and characterization techniques, to confront some of the outstanding problems in entangled polymer rheology. First, we modified simple tube based constitutive equations in extensional rheology and developed functional forms to test the effect of Kuhn segment alignment on a) tube diameter enlargement and b) monomeric friction reduction between subchains. We, then, used these functional forms to model extensional viscosity data for polystyrene (PS) melts and solutions. We demonstrated that the idea of reduction in segmental friction due to Kuhn alignment is successful in explaining the qualitative difference between melts and solutions in extension as revealed by recent experiments on PS. Second, we compiled literature data and used it to develop a universal tube model parameter set and prescribed their values and uncertainties for 1,4-PBd by comparing linear viscoelastic G' and G" mastercurves for 1,4-PBds of various branching architectures. The high frequency transition region of the mastercurves superposed very well for all the 1,4-PBds irrespective of their molecular weight and architecture, indicating universality in high frequency behavior. Therefore, all three parameters of the tube model were extracted from this high frequency transition region alone. 
Third, we compared predictions of two versions of the tube model, Hierarchical model and BoB model against linear viscoelastic data of blends of 1,4-PBd star and linear melts. The star was carefully synthesized and characterized. We found massive failures of tube models to predict the terminal relaxation behavior of the star/linear blends. In addition, these blends were also tested against a coarse-grained slip-link model, the "Cluster Fixed Slip-link Model (CFSM)" of Schieber and coworkers. The CFSM with only two parameters gave excellent agreement with all experimental data for the blends.

  12. Log-Linear Modeling of Agreement among Expert Exposure Assessors

    PubMed Central

    Hunt, Phillip R.; Friesen, Melissa C.; Sama, Susan; Ryan, Louise; Milton, Donald

    2015-01-01

    Background: Evaluation of expert assessment of exposure depends, in the absence of a validation measurement, upon measures of agreement among the expert raters. Agreement is typically measured using Cohen’s Kappa statistic; however, there are some well-known limitations to this approach. We demonstrate an alternate method that uses log-linear models designed to model agreement. These models contain parameters that distinguish between exact agreement (diagonals of the agreement matrix) and non-exact associations (off-diagonals). In addition, they can incorporate covariates to examine whether agreement differs across strata. Methods: We applied these models to evaluate agreement among expert ratings of exposure to sensitizers (none, likely, high) in a study of occupational asthma. Results: Traditional analyses using weighted kappa suggested potential differences in agreement by blue/white collar jobs and office/non-office jobs, but not case/control status. However, the evaluation of the covariates and their interaction terms in log-linear models found no differences in agreement with these covariates and provided evidence that the differences observed using kappa were the result of marginal differences in the distribution of ratings rather than differences in agreement. Differences in agreement were predicted across the exposure scale, with the moderately exposed ("likely") category more difficult for the experts to differentiate from the highly exposed category than from the unexposed category. Conclusions: The log-linear models provided valuable information about patterns of agreement and the structure of the data that were not revealed in analyses using kappa. The models’ lack of dependence on marginal distributions and the ease of evaluating covariates allow reliable detection of observational bias in exposure data. PMID:25748517
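    The simplest model in this family is quasi-independence: a Poisson log-linear model for the raters' cross-classification with row effects, column effects, and one extra parameter for the diagonal (exact-agreement) cells. The sketch below fits it by IRLS on an invented 3x3 table; the study's models are richer (covariates, interaction terms).

```python
import numpy as np

# Log-linear agreement model on a hypothetical 3x3 rater-by-rater table:
# log(mu) = intercept + row + col + delta * 1{row == col}, fit by IRLS.

counts = np.array([[40,  8,  2],
                   [10, 25,  7],
                   [ 3,  9, 20]], dtype=float)

rows, cols = np.indices(counts.shape)
y = counts.ravel()

# Design: intercept, row dummies (levels 1, 2), column dummies (1, 2), diagonal flag
X = np.column_stack([
    np.ones(9),
    rows.ravel() == 1, rows.ravel() == 2,
    cols.ravel() == 1, cols.ravel() == 2,
    rows.ravel() == cols.ravel(),
]).astype(float)

mu = y + 0.5                              # standard GLM starting values
eta = np.log(mu)
for _ in range(25):                       # IRLS / Fisher scoring for the Poisson log link
    z = eta + (y - mu) / mu               # working response
    W = mu                                # working weights
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    eta = X @ beta
    mu = np.exp(eta)

agreement_log_scale = beta[-1]            # > 0 means excess exact agreement on the diagonal
```

    Stratified agreement, as in the paper, would be tested by interacting the diagonal flag with a covariate and comparing deviances.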

  13. Did ASAS-SN Kill the Supermassive Black Hole Binary Candidate PG1302-102?

    NASA Astrophysics Data System (ADS)

    Liu, Tingting; Gezari, Suvi; Miller, M. Coleman

    2018-05-01

    Graham et al. reported a periodically varying quasar and supermassive black hole binary candidate, PG1302-102 (hereafter PG1302), which was discovered in the Catalina Real-time Transient Survey (CRTS). Its combined Lincoln Near-Earth Asteroid Research (LINEAR) and CRTS optical light curve is well fitted to a sinusoid of an observed period of ≈1884 days and well modeled by the relativistic Doppler boosting of the secondary mini-disk. However, the LINEAR+CRTS light curve from MJD ≈52,700 to MJD ≈56,400 covers only ∼2 cycles of periodic variation, which is a short baseline that can be highly susceptible to normal, stochastic quasar variability. In this Letter, we present a reanalysis of PG1302 using the latest light curve from the All-sky Automated Survey for Supernovae (ASAS-SN), which extends the observational baseline to the present day (MJD ≈58,200), and adopting a maximum likelihood method that searches for a periodic component in addition to stochastic quasar variability. When the ASAS-SN data are combined with the previous LINEAR+CRTS data, the evidence for periodicity decreases. For genuine periodicity one would expect that additional data would strengthen the evidence, so the decrease in significance may be an indication that the binary model is disfavored.

  14. Feedback stabilization of an oscillating vertical cylinder by POD Reduced-Order Model

    NASA Astrophysics Data System (ADS)

    Tissot, Gilles; Cordier, Laurent; Noack, Bernd R.

    2015-01-01

    The objective is to demonstrate the use of reduced-order models (ROM) based on proper orthogonal decomposition (POD) to stabilize the flow over a vertically oscillating circular cylinder in the laminar regime (Reynolds number equal to 60). The 2D Navier-Stokes equations are first solved with a finite element method, in which the moving cylinder is introduced via an ALE method. Since in fluid-structure interaction, the POD algorithm cannot be applied directly, we implemented the fictitious domain method of Glowinski et al. [1] where the solid domain is treated as a fluid undergoing an additional constraint. The POD-ROM is classically obtained by projecting the Navier-Stokes equations onto the first POD modes. At this level, the cylinder displacement is enforced in the POD-ROM through the introduction of Lagrange multipliers. For determining the optimal vertical velocity of the cylinder, a linear quadratic regulator framework is employed. After linearization of the POD-ROM around the steady flow state, the optimal linear feedback gain is obtained as solution of a generalized algebraic Riccati equation. Finally, when the optimal feedback control is applied, it is shown that the flow converges rapidly to the steady state. In addition, a vanishing control is obtained proving the efficiency of the control approach.

  15. Phytotoxicity and accumulation of chromium in carrot plants and the derivation of soil thresholds for Chinese soils.

    PubMed

    Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Ma, Yibing; Wang, Xingxiang

    2014-10-01

    Soil environmental quality standards in respect of heavy metals for farmlands should be established considering both their effects on crop yield and their accumulation in the edible part. A greenhouse experiment was conducted to investigate the effects of chromium (Cr) on biomass production and Cr accumulation in carrot plants grown in a wide range of soils. The results revealed that carrot yield significantly decreased in 18 of the 20 soils when Cr was added at the level of the soil environmental quality standard of China. The Cr content of carrots grown in the five soils with pH > 8.0 exceeded the maximum allowable level (0.5 mg kg⁻¹) according to the Chinese General Standard for Contaminants in Foods. The relationship between carrot Cr concentration and soil pH was well fitted (R² = 0.70, P < 0.0001) by a linear-linear segmented regression model. The addition of Cr to soil thus affected carrot yield before it affected food quality. The major soil factors controlling Cr phytotoxicity, and prediction models for it, were further identified and developed using path analysis and stepwise multiple linear regression. Soil Cr thresholds for phytotoxicity that also ensure food safety were then derived on the basis of a 10 percent yield reduction. Copyright © 2014 Elsevier Inc. All rights reserved.
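    A linear-linear segmented ("broken-stick") regression like the one used in this record joins two straight lines at a breakpoint; a common way to fit it is to profile the breakpoint over a grid, solving an ordinary least-squares problem at each candidate. The data below are synthetic, not the study's measurements.

```python
import numpy as np

# Segmented regression by profiling the breakpoint: for each candidate bp,
# fit y ~ 1 + x + (x - bp)_+ by OLS and keep the bp with the smallest SSE.

rng = np.random.default_rng(2)
ph = np.sort(rng.uniform(4.5, 9.0, 80))                 # synthetic soil pH values
true_bp = 7.5
y = 0.1 + 0.02 * ph + 0.45 * np.clip(ph - true_bp, 0, None) + rng.normal(0, 0.03, 80)

def fit_segmented(x, y, grid):
    """Return (sse, breakpoint, coefficients) minimizing SSE over the grid."""
    best = None
    for bp in grid:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - bp, 0, None)])
        coef = np.linalg.lstsq(X, y, rcond=None)[0]
        sse = float(np.sum((y - X @ coef) ** 2))
        if best is None or sse < best[0]:
            best = (sse, bp, coef)
    return best

sse, bp_hat, coef_hat = fit_segmented(ph, y, np.linspace(5.0, 8.8, 200))
```

    The third coefficient is the change in slope above the breakpoint, which is the quantity that captures the sharp rise in plant Cr at high soil pH.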

  16. Imprints of dark energy on cosmic structure formation - I. Realistic quintessence models and the non-linear matter power spectrum

    NASA Astrophysics Data System (ADS)

    Alimi, J.-M.; Füzfa, A.; Boucher, V.; Rasera, Y.; Courtin, J.; Corasaniti, P.-S.

    2010-01-01

    Quintessence has been proposed to account for dark energy (DE) in the Universe. This component causes a typical modification of the background cosmic expansion, which, in addition to its clustering properties, can leave a potentially distinctive signature on large-scale structures. Many previous studies have investigated this topic, particularly in relation to the non-linear regime of structure formation. However, no careful pre-selection of viable quintessence models with high precision cosmological data was performed. Here we show that this has led to a misinterpretation (and underestimation) of the imprint of quintessence on the distribution of large-scale structures. To this purpose, we perform a likelihood analysis of the combined Supernova Ia UNION data set and Wilkinson Microwave Anisotropy Probe 5-yr data to identify realistic quintessence models. These are specified by different model parameter values, but still statistically indistinguishable from the vanilla Λ cold dark matter (ΛCDM). Differences are especially manifest in the predicted amplitude and shape of the linear matter power spectrum though these remain within the uncertainties of the Sloan Digital Sky Survey data. We use these models as a benchmark for studying the clustering properties of dark matter haloes by performing a series of high-resolution N-body simulations. In this first paper, we specifically focus on the non-linear matter power spectrum. We find that realistic quintessence models allow for relevant differences of the dark matter distribution with respect to the ΛCDM scenario well into the non-linear regime, with deviations of up to 40 per cent in the non-linear power spectrum. Such differences are shown to depend on the nature of DE, as well as the scale and epoch considered. At small scales (k ~ 1-5 h Mpc⁻¹, depending on the redshift), the structure formation process is about 20 per cent more efficient than in ΛCDM. 
We show that these imprints are a specific record of the cosmic structure formation history in DE cosmologies and therefore cannot be accounted for in standard fitting functions of the non-linear matter power spectrum.

  17. A Finite-Difference Time-Domain Model of Artificial Ionospheric Modification

    NASA Astrophysics Data System (ADS)

    Cannon, Patrick; Honary, Farideh; Borisov, Nikolay

    Experiments in the artificial modification of the ionosphere via a radio frequency pump wave have observed a wide range of non-linear phenomena near the reflection height of an O-mode wave. These effects exhibit a strong aspect-angle dependence thought to be associated with the process by which, for a narrow range of off-vertical launch angles, the O-mode pump wave can propagate beyond the standard reflection height at X=1 as a Z-mode wave and excite additional plasma activity. A numerical model based on Finite-Difference Time-Domain method has been developed to simulate the interaction of the pump wave with an ionospheric plasma and investigate different non-linear processes involved in modification experiments. The effects on wave propagation due to plasma inhomogeneity and anisotropy are introduced through coupling of the Lorentz equation of motion for electrons and ions to Maxwell’s wave equations in the FDTD formulation, leading to a model that is capable of exciting a variety of plasma waves including Langmuir and upper-hybrid waves. Additionally, discretized equations describing the time-dependent evolution of the plasma fluid temperature and density are included in the FDTD update scheme. This model is used to calculate the aspect angle dependence and angular size of the radio window for which Z-mode excitation occurs, and the results compared favourably with both theoretical predictions and experimental observations. The simulation results are found to reproduce the angular dependence on electron density and temperature enhancement observed experimentally. The model is used to investigate the effect of different initial plasma density conditions on the evolution of non-linear effects, and demonstrates that the inclusion of features such as small field-aligned density perturbations can have a significant influence on wave propagation and the magnitude of temperature and density enhancements.

  18. Endotoxin and gender modify lung function recovery after occupational organic dust exposure: a 30-year study.

    PubMed

    Lai, Peggy S; Hang, Jing-Qing; Valeri, Linda; Zhang, Feng-Ying; Zheng, Bu-Yong; Mehta, Amar J; Shi, Jing; Su, Li; Brown, Dan; Eisen, Ellen A; Christiani, David C

    2015-08-01

    The purpose of this study is to determine the trajectory of lung function change after exposure cessation to occupational organic dust exposure, and to identify factors that modify improvement. The Shanghai Textile Worker Study is a longitudinal study of 447 cotton workers exposed to endotoxin-containing dust and 472 silk workers exposed to non-endotoxin-containing dust. Spirometry was performed at 5-year intervals. Air sampling was performed to estimate individual cumulative exposures. The effect of work cessation on forced expiratory volume in 1 s (FEV1) was modelled using generalised additive mixed effects models to identify the trajectory of FEV1 recovery. Linear mixed effects models incorporating interaction terms were used to identify modifiers of FEV1 recovery. Loss to follow-up was accounted for with inverse probability of censoring weights. 74.2% of the original cohort still alive participated in 2011. Generalised additive mixed models identified a non-linear improvement in FEV1 for all workers after exposure cessation, with no plateau noted 25 years after retirement. Linear mixed effects models incorporating interaction terms identified prior endotoxin exposure (p=0.01) and male gender (p=0.002) as risk factors for impaired FEV1 improvement after exposure cessation. After adjusting for gender, smoking delayed the onset of FEV1 gain but did not affect the overall magnitude of change. Lung function improvement after cessation of exposure to organic dust is sustained. Endotoxin exposure and male gender are risk factors for less FEV1 improvement. Published by the BMJ Publishing Group Limited.

  19. A Numerical Study of Scalable Cardiac Electro-Mechanical Solvers on HPC Architectures

    PubMed Central

    Colli Franzone, Piero; Pavarino, Luca F.; Scacchi, Simone

    2018-01-01

    We introduce and study some scalable domain decomposition preconditioners for cardiac electro-mechanical 3D simulations on parallel HPC (High Performance Computing) architectures. The electro-mechanical model of the cardiac tissue is composed of four coupled sub-models: (1) the static finite elasticity equations for the transversely isotropic deformation of the cardiac tissue; (2) the active tension model describing the dynamics of the intracellular calcium, cross-bridge binding and myofilament tension; (3) the anisotropic Bidomain model describing the evolution of the intra- and extra-cellular potentials in the deforming cardiac tissue; and (4) the ionic membrane model describing the dynamics of ionic currents, gating variables, ionic concentrations and stretch-activated channels. This strongly coupled electro-mechanical model is discretized in time with a splitting semi-implicit technique and in space with isoparametric finite elements. The resulting scalable parallel solver is based on Multilevel Additive Schwarz preconditioners for the solution of the Bidomain system and on BDDC preconditioned Newton-Krylov solvers for the non-linear finite elasticity system. The results of several 3D parallel simulations show the scalability of both linear and non-linear solvers and their application to the study of both physiological excitation-contraction cardiac dynamics and re-entrant waves in the presence of different mechano-electrical feedbacks. PMID:29674971

  20. Modeling pinchoff and reconnection in a Hele-Shaw cell. I. The models and their calibration

    NASA Astrophysics Data System (ADS)

    Lee, Hyeong-Gi; Lowengrub, J. S.; Goodman, J.

    2002-02-01

    This is the first paper in a two-part series in which we analyze two model systems to study pinchoff and reconnection in binary fluid flow in a Hele-Shaw cell with arbitrary density and viscosity contrast between the components. The systems stem from a simplification of a general system of equations governing the motion of a binary fluid (NSCH model [Lowengrub and Truskinovsky, Proc. R. Soc. London, Ser. A 454, 2617 (1998)]) to flow in a Hele-Shaw cell. The system takes into account the chemical diffusivity between different components of a fluid mixture and the reactive stresses induced by inhomogeneity. In one of the systems we consider (HSCH), the binary fluid may be compressible due to diffusion. In the other system (BHSCH), a Boussinesq approximation is used and the fluid is incompressible. In this paper, we motivate, present and calibrate the HSCH/BHSCH equations so as to yield the classical sharp interface model as a limiting case. We then analyze their equilibria, one dimensional evolution and linear stability. In the second paper [paper II, Phys. Fluids 14, 514 (2002)], we analyze the behavior of the models in the fully nonlinear regime. In the BHSCH system, the equilibrium concentration profile is obtained using the classical Maxwell construction [Rowlinson and Widom, Molecular Theory of Capillarity (Clarendon, Oxford, 1979)] and does not depend on the orientation of the gravitational field. We find that the equilibria in the HSCH model are somewhat surprising as the gravitational field actually affects the internal structure of an isolated interface by driving additional stratification of light and heavy fluids over that predicted in the Boussinesq case. A comparison of the linear growth rates indicates that the HSCH system is slightly more diffusive than the BHSCH system. In both, linear convergence to the sharp interface growth rates is observed in a parameter controlling the interface thickness. 
In addition, we identify the effect that each of the parameters in the HSCH/BHSCH models has on the linear growth rates. We then show how this analysis may be used to suggest a set of modified parameters which, when used in the HSCH/BHSCH systems, yield improved agreement with the sharp interface model at a finite interface thickness. Evidence of this improved agreement may be found in paper II.

  1. Predicting response before initiation of neoadjuvant chemotherapy in breast cancer using new methods for the analysis of dynamic contrast enhanced MRI (DCE MRI) data

    NASA Astrophysics Data System (ADS)

    DeGrandchamp, Joseph B.; Whisenant, Jennifer G.; Arlinghaus, Lori R.; Abramson, V. G.; Yankeelov, Thomas E.; Cárdenas-Rodríguez, Julio

    2016-03-01

    The pharmacokinetic parameters derived from dynamic contrast enhanced (DCE) MRI have shown promise as biomarkers for tumor response to therapy. However, standard methods of analyzing DCE MRI data (the Tofts model) require high temporal resolution, high signal-to-noise ratio (SNR), and the Arterial Input Function (AIF). Such models produce reliable biomarkers of response only when a therapy has a large effect on the parameters. We recently reported a method that addresses these limitations, the Linear Reference Region Model (LRRM). Like other reference region models, the LRRM needs no AIF. Additionally, the LRRM is more accurate and precise than standard methods at low SNR and low temporal resolution, suggesting LRRM-derived biomarkers could be better predictors. Here, the LRRM, Non-linear Reference Region Model (NRRM), Linear Tofts Model (LTM), and Non-linear Tofts Model (NLTM) were used to estimate the RKtrans between muscle and tumor (or the Ktrans for the Tofts models) and the tumor kep,TOI for 39 breast cancer patients who received neoadjuvant chemotherapy (NAC). These parameters and the receptor status of each patient were used to construct cross-validated predictive models to classify patients as complete pathological responders (pCR) or non-complete pathological responders (non-pCR) to NAC. Model performance was evaluated using the area under the ROC curve (AUC). The AUC for receptor status alone was 0.62, while the best performances using predictors from the LRRM, NRRM, LTM, and NLTM were AUCs of 0.79, 0.55, 0.60, and 0.59, respectively. This suggests that the LRRM can be used to predict response to NAC in breast cancer.

  2. Observed Arctic sea-ice loss directly follows anthropogenic CO2 emission.

    PubMed

    Notz, Dirk; Stroeve, Julienne

    2016-11-11

    Arctic sea ice is retreating rapidly, raising prospects of a future ice-free Arctic Ocean during summer. Because climate-model simulations of the sea-ice loss differ substantially, we used a robust linear relationship between monthly-mean September sea-ice area and cumulative carbon dioxide (CO2) emissions to infer the future evolution of Arctic summer sea ice directly from the observational record. The observed linear relationship implies a sustained loss of 3 ± 0.3 square meters of September sea-ice area per metric ton of CO2 emission. On the basis of this sensitivity, Arctic sea ice will be lost throughout September for an additional 1000 gigatons of CO2 emissions. Most models show a lower sensitivity, which is possibly linked to an underestimation of the modeled increase in incoming longwave radiation and of the modeled transient climate response. Copyright © 2016, American Association for the Advancement of Science.
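
    As a quick plausibility check, the abstract's two headline numbers are mutually consistent: a loss of roughly 3 m² of September ice per tonne of CO2, applied to the ice remaining today, implies the quoted ~1000 Gt budget. The remaining-area figure below is an assumption back-derived from the abstract, not a reported measurement.

```python
# Back-of-the-envelope consistency check of the reported sensitivity.
# The remaining September ice area (~3 million km^2) is an assumed,
# illustrative value, chosen to match the abstract's 1000 Gt budget.
SENSITIVITY_M2_PER_TONNE = 3.0   # m^2 of September ice lost per tonne CO2
REMAINING_AREA_KM2 = 3.0e6       # assumed remaining September ice, km^2

remaining_m2 = REMAINING_AREA_KM2 * 1e6          # km^2 -> m^2
tonnes_to_ice_free = remaining_m2 / SENSITIVITY_M2_PER_TONNE
gigatonnes = tonnes_to_ice_free / 1e9
print(f"{gigatonnes:.0f} Gt CO2 until an ice-free September")
```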

  3. Classification of hyperspectral imagery using MapReduce on a NVIDIA graphics processing unit (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ramirez, Andres; Rahnemoonfar, Maryam

    2017-04-01

    A hyperspectral image provides a multidimensional representation rich in data, consisting of hundreds of spectral bands. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational time. In order to overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that can help analyze a hyperspectral image through the usage of parallel hardware and a parallel programming model, which is simpler to handle than other low-level parallel programming models. Additionally, Hadoop was used as an open-source implementation of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU systems and tested them against the following test cases: a combined CPU and GPU test case, a CPU-only test case, and a test case where no dimensionality reduction was applied.

  4. Estimating cosmic velocity fields from density fields and tidal tensors

    NASA Astrophysics Data System (ADS)

    Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan

    2012-10-01

    In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor using second-order Lagrangian perturbation theory based upon an estimate of the linear component of the non-linear density field significantly improves the estimate of the cosmic flow in comparison to linear theory, not only in the low-density regions but also, and more dramatically, in the high-density regions. In particular we test two estimates of the linear component: the lognormal model and the iterative Lagrangian linearization. The present approach relies on a rigorous higher order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations, and is in this sense parameter-free; it is independent of statistical-geometrical optimization and it is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≳5 h-1 Mpc with closely Gaussian distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field are extremely well recovered, showing good agreement with the true statistics from N-body simulations. The typical errors of about 10 km s-1 (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h-1 Mpc in high-density regions (δ > 2). We also find that iterative Lagrangian linearization is significantly superior in the low-density regime with respect to the lognormal model.

  5. On the need of mode interpolation for data-driven Galerkin models of a transient flow around a sphere

    NASA Astrophysics Data System (ADS)

    Stankiewicz, Witold; Morzyński, Marek; Kotecki, Krzysztof; Noack, Bernd R.

    2017-04-01

    We present a low-dimensional Galerkin model with state-dependent modes capturing linear and nonlinear dynamics. The departure point is a direct numerical simulation of the three-dimensional incompressible flow around a sphere at a Reynolds number of 400. This solution starts near the unstable steady Navier-Stokes solution and converges to a periodic limit cycle. The investigated Galerkin models are based on the dynamic mode decomposition (DMD) and derive the dynamical system from first principles, the Navier-Stokes equations. A DMD model with training data from the initial linear transient fails to predict the limit cycle. Conversely, a model from limit-cycle data underpredicts the initial growth rate roughly by a factor of 5. Key enablers for uniform accuracy throughout the transient are a continuous mode interpolation between both oscillatory fluctuations and the addition of a shift mode. This interpolated model is shown to capture both the transient growth of the oscillation and the limit cycle.

  6. Ancestral haplotype-based association mapping with generalized linear mixed models accounting for stratification.

    PubMed

    Zhang, Z; Guillaume, F; Sartelet, A; Charlier, C; Georges, M; Farnir, F; Druet, T

    2012-10-01

    In many situations, genome-wide association studies are performed in populations presenting stratification. Mixed models including a kinship matrix accounting for genetic relatedness among individuals have been shown to correct for population and/or family structure. Here we extend this methodology to generalized linear mixed models which properly model data under various distributions. In addition we perform association with ancestral haplotypes inferred using a hidden Markov model. The method was shown to properly account for stratification under various simulated scenarios presenting population and/or family structure. Use of ancestral haplotypes resulted in higher power than SNPs on simulated datasets. Application to real data demonstrates the usefulness of the developed model. Full analysis of a dataset with 4600 individuals and 500 000 SNPs was performed in 2 h 36 min and required 2.28 Gb of RAM. The software GLASCOW can be freely downloaded from www.giga.ulg.ac.be/jcms/prod_381171/software. francois.guillaume@jouy.inra.fr Supplementary data are available at Bioinformatics online.

  7. Shape-Reprogrammable Polymers: Encoding, Erasing, and Re-Encoding (Postprint)

    DTIC Science & Technology

    2014-11-01

    printing, is a layer-by-layer technology for producing 3D objects directly from a digital model. While 3D printing allows the fabrication of increasingly...one linear shape-translation processes often increase rapidly with shape complexity. Additive manufacturing, also called three-dimensional (3D

  8. Skeletal muscle tensile strain dependence: hyperviscoelastic nonlinearity

    PubMed Central

    Wheatley, Benjamin B; Morrow, Duane A; Odegard, Gregory M; Kaufman, Kenton R; Donahue, Tammy L Haut

    2015-01-01

    Introduction Computational modeling of skeletal muscle requires characterization at the tissue level. While most skeletal muscle studies focus on hyperelasticity, the goal of this study was to examine and model the nonlinear behavior of both time-independent and time-dependent properties of skeletal muscle as a function of strain. Materials and Methods Nine tibialis anterior muscles from New Zealand White rabbits were subjected to five consecutive stress relaxation cycles of roughly 3% strain. Individual relaxation steps were fit with a three-term linear Prony series. Prony series coefficients and relaxation ratio were assessed for strain dependence using a general linear statistical model. A fully nonlinear constitutive model was employed to capture the strain dependence of both the viscoelastic and instantaneous components. Results Instantaneous modulus (p<0.0005) and mid-range relaxation (p<0.0005) increased significantly with strain level, while relaxation at longer time periods decreased with strain (p<0.0005). Time constants and overall relaxation ratio did not change with strain level (p>0.1). Additionally, the fully nonlinear hyperviscoelastic constitutive model provided an excellent fit to experimental data, while other models which included linear components failed to capture muscle function as accurately. Conclusions Material properties of skeletal muscle are strain-dependent at the tissue level. This strain dependence can be included in computational models of skeletal muscle performance with a fully nonlinear hyperviscoelastic model. PMID:26409235
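
    A three-term linear Prony series of the kind fitted to each relaxation step can be sketched as follows; the coefficients and time constants below are illustrative assumptions, not the study's fitted values.

```python
import math

# Hedged sketch: normalized relaxation modulus as a three-term Prony
# series, G(t) = g_inf + sum_i g_i * exp(-t / tau_i). All coefficients
# and time constants are illustrative, not the paper's fitted values.
g = [0.2, 0.15, 0.1]      # Prony coefficients (assumed)
tau = [0.5, 5.0, 50.0]    # time constants in seconds (assumed)
g_inf = 1.0 - sum(g)      # long-time equilibrium fraction

def relaxation(t):
    """Normalized stress remaining at time t after a step strain."""
    return g_inf + sum(gi * math.exp(-t / ti) for gi, ti in zip(g, tau))

print(relaxation(0.0))    # 1.0 by construction (instantaneous response)
print(g_inf)              # equilibrium (fully relaxed) fraction
```

The relaxation ratio reported in the study corresponds to the drop from the instantaneous value toward the equilibrium fraction g_inf.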

  9. Nonlinear Time Delayed Feedback Control of Aeroelastic Systems: A Functional Approach

    NASA Technical Reports Server (NTRS)

    Marzocca, Piergiovanni; Librescu, Liviu; Silva, Walter A.

    2003-01-01

    In addition to its intrinsic practical importance, nonlinear time delayed feedback control applied to lifting surfaces can result in interesting aeroelastic behaviors. In this paper, nonlinear aeroelastic response to external time-dependent loads and stability boundary for actively controlled lifting surfaces, in an incompressible flow field, are considered. The structural model and the unsteady aerodynamics are considered linear. The implications of the presence of time delays in the linear/nonlinear feedback control and of geometrical parameters on the aeroelasticity of lifting surfaces are analyzed and conclusions on their implications are highlighted.

  10. Analysis of redox additive-based overcharge protection for rechargeable lithium batteries

    NASA Technical Reports Server (NTRS)

    Narayanan, S. R.; Surampudi, S.; Attia, A. I.; Bankston, C. P.

    1991-01-01

    The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, standard reduction potential of the redox couple, and interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. Digital simulation of the overcharge experiment leads to numerical representation of the potential transients, and an estimate of the influence of diffusion coefficient and interelectrode distance on the transient attainment of the steady state during overcharge. The model has been experimentally verified using 1,1′-dimethylferrocene as a redox additive. The analysis of the experimental results in terms of the theory allows the calculation of the diffusion coefficient and the formal potential of the redox couple. The model and the theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.
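
    The steady-state analysis described above bounds the overcharge rate by the shuttle's diffusion-limited current density, which for finite linear diffusion takes the familiar form i_lim = nFDc/L. The sketch below evaluates it for assumed, illustrative values of D, c, and the interelectrode distance; none of these are the paper's measured quantities.

```python
# Hedged estimate of the maximum steady-state overcharge current density
# for a redox shuttle under finite linear diffusion: i_lim = n F D c / L.
n = 1          # electrons transferred per shuttle molecule
F = 96485.0    # Faraday constant, C/mol
D = 1e-6       # diffusion coefficient, cm^2/s (assumed)
c = 1e-4       # shuttle concentration, mol/cm^3 (0.1 M, assumed)
L = 0.005      # interelectrode distance, cm (50 um, assumed)

i_lim = n * F * D * c / L   # A/cm^2
print(f"limiting overcharge current density: {i_lim * 1000:.2f} mA/cm^2")
```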

  11. Effects of linear trends on estimation of noise in GNSS position time-series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dmitrieva, K.; Segall, P.; Bradley, A. M.

    A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this study, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that the effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Finally, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.

  12. Effects of linear trends on estimation of noise in GNSS position time-series

    NASA Astrophysics Data System (ADS)

    Dmitrieva, K.; Segall, P.; Bradley, A. M.

    2017-01-01

    A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this paper, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that the effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Overall, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.
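
    The de-trending effect the authors describe can be reproduced with a toy experiment: synthesize a random-walk-plus-white-noise series, subtract a least-squares line, and note that the apparent scatter drops, because the fitted trend absorbs part of the low-frequency random-walk power. This is a minimal illustration of the bias mechanism, not the paper's estimation procedure (which fits full noise models).

```python
import random

# Toy series: random walk plus white noise, as in GNSS position data.
random.seed(0)
n = 1000
rw, pos = 0.0, []
for _ in range(n):
    rw += random.gauss(0, 0.1)           # random-walk step
    pos.append(rw + random.gauss(0, 1))  # plus white noise

def detrend(y):
    """Subtract the ordinary least-squares line from a series."""
    m = len(y)
    x = list(range(m))
    xm, ym = sum(x) / m, sum(y) / m
    b = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
        sum((xi - xm) ** 2 for xi in x)
    a = ym - b * xm
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def var(y):
    m = sum(y) / len(y)
    return sum((v - m) ** 2 for v in y) / len(y)

# De-trended variance is never larger: the line soaks up RW power.
print(var(pos), var(detrend(pos)))
```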

  13. Effects of linear trends on estimation of noise in GNSS position time-series

    DOE PAGES

    Dmitrieva, K.; Segall, P.; Bradley, A. M.

    2016-10-20

    A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this study, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that themore » effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Finally, overall, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.« less

  14. Hybrid finite element method for describing the electrical response of biological cells to applied fields.

    PubMed

    Ying, Wenjun; Henriquez, Craig S

    2007-04-01

    A novel hybrid finite element method (FEM) for modeling the response of passive and active biological membranes to external stimuli is presented. The method is based on the differential equations that describe the conservation of electric flux and membrane currents. By introducing the electric flux through the cell membrane as an additional variable, the algorithm decouples the linear partial differential equation part from the nonlinear ordinary differential equation part that defines the membrane dynamics of interest. This conveniently results in two subproblems: a linear interface problem and a nonlinear initial value problem. The linear interface problem is solved with a hybrid FEM. The initial value problem is integrated by a standard ordinary differential equation solver such as the Euler and Runge-Kutta methods. During time integration, these two subproblems are solved alternately. The algorithm can be used to model the interaction of stimuli with multiple cells of almost arbitrary geometries and complex ion-channel gating at the plasma membrane. Numerical experiments are presented demonstrating the uses of the method for modeling field stimulation and action potential propagation.

  15. Intrinsic coincident linear polarimetry using stacked organic photovoltaics.

    PubMed

    Roy, S Gupta; Awartani, O M; Sen, P; O'Connor, B T; Kudenov, M W

    2016-06-27

    Polarimetry has widespread applications within atmospheric sensing, telecommunications, biomedical imaging, and target detection. Several existing methods of imaging polarimetry trade off the sensor's spatial resolution for polarimetric resolution, and often have some form of spatial registration error. To mitigate these issues, we have developed a system using oriented polymer-based organic photovoltaics (OPVs) that can preferentially absorb linearly polarized light. Additionally, the OPV cells can be made semitransparent, enabling multiple detectors to be cascaded along the same optical axis. Since each device performs a partial polarization measurement of the same incident beam, high temporal resolution is maintained with the potential for inherent spatial registration. In this paper, a Mueller matrix model of the stacked OPV design is provided. Based on this model, a calibration technique is developed and presented. This calibration technique and model are validated with experimental data, taken with a cascaded three-cell OPV Stokes polarimeter capable of measuring incident linear polarization states. Our results indicate a polarization measurement error of 1.2% RMS and an average absolute radiometric accuracy of 2.2% for the demonstrated polarimeter.
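
    The recovery of the linear Stokes components from the three cascaded measurements can be sketched as a 3×3 linear inversion: each partially polarizing detector reads I_i = ½(S0 + d(S1 cos 2θ_i + S2 sin 2θ_i)). The diattenuation value and detector orientations below are assumptions for illustration, not the calibrated OPV parameters.

```python
import math

# Hedged sketch: three cascaded partial linear polarizers at assumed
# orientations, with assumed diattenuation d, measure three intensities;
# inverting the 3x3 system recovers the linear Stokes vector (S0,S1,S2).
d = 0.6
thetas = [0.0, math.pi / 3, 2 * math.pi / 3]
A = [[0.5, 0.5 * d * math.cos(2 * t), 0.5 * d * math.sin(2 * t)]
     for t in thetas]

def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                   - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                   + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(A)
    out = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        out.append(det(M) / D)
    return out

S_true = [1.0, 0.3, -0.2]   # incident S0, S1, S2 (toy values)
I = [sum(a * s for a, s in zip(row, S_true)) for row in A]
S_est = solve3(A, I)
print([round(s, 6) for s in S_est])
```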

  16. QSRR modeling for diverse drugs using different feature selection methods coupled with linear and nonlinear regressions.

    PubMed

    Goodarzi, Mohammad; Jensen, Richard; Vander Heyden, Yvan

    2012-12-01

    A Quantitative Structure-Retention Relationship (QSRR) is proposed to estimate the chromatographic retention of 83 diverse drugs on a Unisphere poly butadiene (PBD) column, using isocratic elutions at pH 11.7. Previous work has generated QSRR models for them using Classification And Regression Trees (CART). In this work, Ant Colony Optimization is used as a feature selection method to find the best molecular descriptors from a large pool. In addition, several other selection methods have been applied, such as Genetic Algorithms, Stepwise Regression and the Relief method, not only to evaluate Ant Colony Optimization as a feature selection method but also to investigate its ability to find the important descriptors in QSRR. Multiple Linear Regression (MLR) and Support Vector Machines (SVMs) were applied as linear and nonlinear regression methods, respectively, giving excellent correlation between the experimental logarithms of the retention factors of the drugs (logk(w), i.e. extrapolated to a mobile phase consisting of pure water) and the predicted values. The overall best model was the SVM model built using descriptors selected by ACO. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Exact solutions of the Navier-Stokes equations generalized for flow in porous media

    NASA Astrophysics Data System (ADS)

    Daly, Edoardo; Basser, Hossein; Rudman, Murray

    2018-05-01

    Flow of Newtonian fluids in porous media is often modelled using a generalized version of the full non-linear Navier-Stokes equations that include additional terms describing the resistance to flow due to the porous matrix. Because this formulation is becoming increasingly popular in numerical models, exact solutions are required as a benchmark of numerical codes. The contribution of this study is to provide a number of non-trivial exact solutions of the generalized form of the Navier-Stokes equations for parallel flow in porous media. Steady-state solutions are derived in the case of flows in a medium with constant permeability along the main direction of flow and a constant cross-stream velocity in the case of both linear and non-linear drag. Solutions are also presented for cases in which the permeability changes in the direction normal to the main flow. An unsteady solution for a flow with velocity driven by a time-periodic pressure gradient is also derived. These solutions form a basis for validating computational models across a wide range of Reynolds and Darcy numbers.
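
    As a hedged illustration of the class of exact solutions discussed (the geometry, symbols, and boundary conditions below are generic assumptions, not one of the paper's specific cases), steady parallel flow between plates at y = ±h with linear Darcy drag satisfies

```latex
\mu \frac{d^{2}u}{dy^{2}} - \frac{\mu}{K}\,u = \frac{dp}{dx},
\qquad u(\pm h) = 0,
```

    with the exact solution

```latex
u(y) = -\frac{K}{\mu}\,\frac{dp}{dx}
\left[\,1 - \frac{\cosh\!\left(y/\sqrt{K}\right)}{\cosh\!\left(h/\sqrt{K}\right)}\right],
```

    which can be checked term by term against the balance above, and which recovers plane Poiseuille flow, u(y) = -(dp/dx)(h² - y²)/(2μ), in the limit K → ∞ where the porous drag vanishes.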

  18. Time Series Analysis and Forecasting of Wastewater Inflow into Bandar Tun Razak Sewage Treatment Plant in Selangor, Malaysia

    NASA Astrophysics Data System (ADS)

    Abunama, Taher; Othman, Faridah

    2017-06-01

    Analysing the fluctuations of wastewater inflow rates in sewage treatment plants (STPs) is essential to guarantee a sufficient treatment of wastewater before discharging it to the environment. The main objectives of this study are to statistically analyze and forecast the wastewater inflow rates into the Bandar Tun Razak STP in Kuala Lumpur, Malaysia. A time series analysis of three years’ weekly influent data (156 weeks) has been conducted using the Auto-Regressive Integrated Moving Average (ARIMA) model. Various combinations of ARIMA orders (p, d, q) have been tried to select the best-fitting model, which was utilized to forecast the wastewater inflow rates. Linear regression analysis was applied to verify the correlation between the observed and predicted influents. The ARIMA (3, 1, 3) model was selected with the highest significant R-square and lowest normalized Bayesian Information Criterion (BIC) value, and accordingly the wastewater inflow rates were forecast for an additional 52 weeks. The linear regression analysis between the observed and predicted values of the wastewater inflow rates showed a positive linear correlation with a coefficient of 0.831.
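
    The observed-versus-predicted validation reduces to a Pearson correlation coefficient; a minimal sketch (with toy inflow values, not the plant's data) is:

```python
import math

# Hedged sketch: Pearson correlation between observed and predicted
# inflow rates, the statistic used to validate the ARIMA forecasts.
def pearson_r(x, y):
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    cov = sum((a - xm) * (b - ym) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - xm) ** 2 for a in x))
    sy = math.sqrt(sum((b - ym) ** 2 for b in y))
    return cov / (sx * sy)

obs = [10.2, 11.5, 9.8, 12.1, 10.9, 11.8]    # toy weekly inflows
pred = [10.0, 11.2, 10.1, 11.9, 11.0, 11.5]  # toy model predictions
print(round(pearson_r(obs, pred), 3))
```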

  19. A Lagrangian effective field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlah, Zvonimir; White, Martin; Aviles, Alejandro

    We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrangian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The 'new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all of our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. Furthermore, all the perturbative models fare better than linear theory.

  20. A Lagrangian effective field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlah, Zvonimir; White, Martin; Aviles, Alejandro, E-mail: zvlah@stanford.edu, E-mail: mwhite@berkeley.edu, E-mail: aviles@berkeley.edu

    We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrangian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The 'new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all of our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. All the perturbative models fare better than linear theory.

  1. A Lagrangian effective field theory

    DOE PAGES

    Vlah, Zvonimir; White, Martin; Aviles, Alejandro

    2015-09-02

    We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrangian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The 'new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all of our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. Furthermore, all the perturbative models fare better than linear theory.

  2. On the additive and dominant variance and covariance of individuals within the genomic selection scope.

    PubMed

    Vitezica, Zulma G; Varona, Luis; Legarra, Andres

    2013-12-01

    Genomic evaluation models can fit additive and dominant SNP effects. Under quantitative genetics theory, additive or "breeding" values of individuals are generated by substitution effects, which involve both "biological" additive and dominant effects of the markers. Dominance deviations include only a portion of the biological dominant effects of the markers. Additive variance includes variation due to the additive and dominant effects of the markers. We describe a matrix of dominant genomic relationships across individuals, D, which is similar to the G matrix used in genomic best linear unbiased prediction. This matrix can be used in a mixed-model context for genomic evaluations or to estimate dominant and additive variances in the population. From the "genotypic" value of individuals, an alternative parameterization defines additive and dominance as the parts attributable to the additive and dominant effect of the markers. This approach underestimates the additive genetic variance and overestimates the dominance variance. Transforming the variances from one model into the other is trivial if the distribution of allelic frequencies is known. We illustrate these results with mouse data (four traits, 1884 mice, and 10,946 markers) and simulated data (2100 individuals and 10,000 markers). Variance components were estimated correctly in the model, considering breeding values and dominance deviations. For the model considering genotypic values, the inclusion of dominant effects biased the estimate of additive variance. Genomic models were more accurate for the estimation of variance components than their pedigree-based counterparts.
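
    The additive genomic relationship matrix referred to above (the G of genomic BLUP) can be sketched from marker genotypes; the dominance matrix D is built analogously from dominance-coded markers. The genotypes below are toy values, and the construction follows the common VanRaden form, which may differ in detail from the authors' parameterization.

```python
# Hedged sketch of an additive genomic relationship matrix:
# G = Z Z' / (2 * sum_j p_j (1 - p_j)), where Z is the allele-count
# genotype matrix (0/1/2) centred by twice the allele frequency.
geno = [
    [0, 1, 2, 1],   # individual 1: allele counts at 4 markers (toy data)
    [1, 1, 2, 0],   # individual 2
    [2, 0, 1, 1],   # individual 3
]
m = len(geno[0])
p = [sum(ind[j] for ind in geno) / (2 * len(geno)) for j in range(m)]
scale = 2 * sum(pj * (1 - pj) for pj in p)

Z = [[ind[j] - 2 * p[j] for j in range(m)] for ind in geno]
G = [[sum(zi * zj for zi, zj in zip(ri, rj)) / scale for rj in Z]
     for ri in Z]
print(G[0][0], G[0][1])
```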

  3. Update on Bayesian Blocks: Segmented Models for Sequential Data

    NASA Technical Reports Server (NTRS)

    Scargle, Jeff

    2017-01-01

    The Bayesian Block algorithm, in wide use in astronomy and other areas, has been improved in several ways. The model for block shape has been generalized to include other than constant signal rate - e.g., linear, exponential, or other parametric models. In addition, the computational efficiency has been improved, so that the basic algorithm is O(N) in most cases instead of O(N^2). Other improvements in the theory and application of segmented representations will be described.
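
    For reference, the classic piecewise-constant Bayesian Blocks dynamic program can be sketched as below; this is the textbook O(N^2) form, not the reported O(N) improvement, and the block fitness (Poisson log-likelihood) and constant prior are simple illustrative choices.

```python
import math

# Hedged sketch of the standard Bayesian Blocks dynamic program for a
# piecewise-constant rate model. The generalization in the abstract
# would swap in linear/exponential block-fitness functions.
def bayesian_blocks(counts, widths, prior=2.0):
    n = len(counts)
    best = [0.0] * (n + 1)   # best total fitness for first k cells
    last = [0] * (n + 1)     # start (1-based) of the final block
    for k in range(1, n + 1):
        best[k] = -math.inf
        for r in range(1, k + 1):
            N = sum(counts[r - 1:k])   # events in candidate block
            T = sum(widths[r - 1:k])   # its total width
            fit = N * math.log(N / T) if N > 0 else 0.0
            score = best[r - 1] + fit - prior
            if score > best[k]:
                best[k], last[k] = score, r
    edges, k = [], n           # backtrack the change points
    while k > 0:
        edges.append(last[k] - 1)
        k = last[k] - 1
    return sorted(edges)

counts = [1, 1, 1, 1, 9, 10, 11, 10]   # toy data: rate jump halfway
widths = [1.0] * 8
print(bayesian_blocks(counts, widths))
```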

  4. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables can then be estimated using a Kalman filter and a particle filter, respectively, which improves the computational efficiency compared with using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, allowing time synchronization to be achieved. The time synchronization performance of this algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
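    The linear half of such a scheme, a Kalman filter tracking clock offset and skew, can be sketched as follows. The state model, noise levels, and measurement setup are made-up illustrations, not the paper's full Rao-Blackwellised filter with DPM delay modelling.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative linear clock model: state x = [offset, skew];
# the offset grows by skew*dt each step, and we observe the offset.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
H = np.array([[1.0, 0.0]])                 # measurement matrix
Q = np.diag([1e-6, 1e-8])                  # process noise covariance
R = np.array([[0.01]])                     # measurement noise covariance

truth = np.array([0.2, 5e-3])              # true offset and skew
x = np.zeros(2)                            # initial estimate
P = np.eye(2)

for _ in range(500):
    truth = F @ truth                      # true clock evolves
    z = H @ truth + rng.normal(0.0, 0.1, size=1)
    # Kalman predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Kalman update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
```

After a few hundred noisy offset observations the filter pins down the skew, which is the quantity that drives long-term clock drift.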

  5. Linear degrees of freedom in speech production: analysis of cineradio- and labio-film data and articulatory-acoustic modeling.

    PubMed

    Beautemps, D; Badin, P; Bailly, G

    2001-05-01

    The following contribution addresses several issues concerning speech degrees of freedom in French oral vowels, stop, and fricative consonants based on an analysis of tongue and lip shapes extracted from cineradio- and labio-films. The midsagittal tongue shapes have been submitted to a linear decomposition in which some loading factors, such as jaw and larynx position, were specified directly, while four other components were derived from principal component analysis (PCA). For the lips, in addition to the more traditional protrusion and opening components, a supplementary component was extracted to explain the upward movement of both the upper and lower lips in [v] production. A linear articulatory model was developed; the six tongue degrees of freedom were used as the articulatory control parameters of the midsagittal tongue contours and explained 96% of the tongue data variance. These control parameters were also used to specify the frontal lip width dimension derived from the labio-film front views. Finally, this model was complemented by a conversion model going from the midsagittal view to the area function, based on a fitting of the midsagittal distances and the formant frequencies for both vowels and consonants.
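    The variance-explained logic behind such a decomposition can be sketched with PCA via the SVD. The data below are a synthetic stand-in for contour measurements, generated from four latent factors purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for midsagittal contour data: 300 shapes x 40 points,
# driven by 4 latent articulatory factors plus measurement noise.
latent = rng.normal(size=(300, 4))
loadings = rng.normal(size=(4, 40))
X = latent @ loadings + 0.1 * rng.normal(size=(300, 40))

# PCA via SVD of the centred data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)            # variance ratio per component

var4 = explained[:4].sum()                 # share captured by 4 components
```

With four genuine latent factors, the first four components absorb nearly all the variance, mirroring the high (96%) figure reported for the articulatory model.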

  6. Use of AMMI and linear regression models to analyze genotype-environment interaction in durum wheat.

    PubMed

    Nachit, M M; Nachit, G; Ketata, H; Gauch, H G; Zobel, R W

    1992-03-01

    The joint durum wheat (Triticum turgidum L var 'durum') breeding program of the International Maize and Wheat Improvement Center (CIMMYT) and the International Center for Agricultural Research in the Dry Areas (ICARDA) for the Mediterranean region employs extensive multilocation testing. Multilocation testing produces significant genotype-environment (GE) interaction that reduces the accuracy of yield estimation and the selection of appropriate germ plasm. The sum of squares (SS) of the GE interaction was partitioned by linear regression techniques into joint, genotypic, and environmental regressions, and by the Additive Main effects and Multiplicative Interaction (AMMI) model into five significant Interaction Principal Component Axes (IPCA). The AMMI model was more effective in partitioning the interaction SS than the linear regression technique: the SS contained in the AMMI model was 6 times higher than the SS for all three regressions. Postdictive assessment recommended the use of the first five IPCA axes, while predictive assessment recommended AMMI1 (main effects plus IPCA1). After elimination of random variation, AMMI1 estimates for genotypic yields within sites were more precise than unadjusted means. This increased precision was equivalent to increasing the number of replications by a factor of 3.7.
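    The AMMI decomposition itself is compact: double-centre the genotype-by-environment table to get additive main effects, then take the SVD of the interaction residuals to obtain the IPCA axes. The yield table below is synthetic, chosen only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy yield table: 8 genotypes x 6 environments (synthetic values).
ng, ne = 8, 6
Y = (5.0
     + rng.normal(0.0, 0.5, size=(ng, 1))     # genotype main effects
     + rng.normal(0.0, 0.8, size=(1, ne))     # environment main effects
     + rng.normal(0.0, 0.3, size=(ng, ne)))   # GE interaction + noise

# Additive main effects: double-centre the table.
grand = Y.mean()
g_eff = Y.mean(axis=1, keepdims=True) - grand
e_eff = Y.mean(axis=0, keepdims=True) - grand
resid = Y - grand - g_eff - e_eff             # GE interaction residuals

# Multiplicative part: the SVD of the residuals gives the IPCA axes.
U, s, Vt = np.linalg.svd(resid, full_matrices=False)
ipca1_share = s[0] ** 2 / np.sum(s ** 2)      # interaction SS on IPCA1

# AMMI1 fitted values: main effects plus the first multiplicative term.
Y_ammi1 = grand + g_eff + e_eff + s[0] * np.outer(U[:, 0], Vt[0])
```

AMMI1 keeps only the leading multiplicative term, which is the "main effects plus IPCA1" model favoured by predictive assessment in the abstract.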

  7. Kinetic simulations and reduced modeling of longitudinal sideband instabilities in non-linear electron plasma waves

    DOE PAGES

    Brunner, S.; Berger, R. L.; Cohen, B. I.; ...

    2014-10-01

    Kinetic Vlasov simulations of one-dimensional finite amplitude Electron Plasma Waves are performed in a multi-wavelength long system. A systematic study of the most unstable linear sideband mode, in particular its growth rate γ and quasi-wavenumber δk, is carried out by scanning the amplitude and wavenumber of the initial wave. Simulation results are successfully compared against numerical and analytical solutions to the reduced model by Kruer et al. [Phys. Rev. Lett. 23, 838 (1969)] for the Trapped Particle Instability (TPI). A model recently suggested by Dodin et al. [Phys. Rev. Lett. 110, 215006 (2013)], which in addition to the TPI accounts for the so-called Negative Mass Instability through a more detailed representation of the trapped particle dynamics, is also studied and compared with simulations.

  8. A model of plant canopy polarization

    NASA Technical Reports Server (NTRS)

    Vanderbilt, V. C.

    1980-01-01

    A model for the amount of linearly polarized light reflected by the shiny leaves of grain crops is based on the morphological and phenological characteristics of the plant canopy and upon the Fresnel equations which describe the light reflection process at the smooth boundary separating two dielectrics. The theory used demonstrates that, potentially, measurements of the linearly polarized light from a crop canopy may be used as an additional feature to discriminate between crops such as wheat and barley, two crops which are so spectrally similar that they are misclassified with unacceptable frequency. Examination of the model suggests that, potentially, satellite polarization measurements may be used to monitor crop development stage, leaf water content, leaf area index, hail damage, and certain plant diseases. The information content of these measurements is needed to evaluate the proposed polarization sensor for the satellite-borne multispectral resource sampler.
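    The Fresnel reflection process at a smooth dielectric boundary, the physical core of the canopy polarization model, can be computed directly. The leaf-cuticle refractive index of 1.45 below is a nominal illustrative value, not a parameter from the paper.

```python
import math

def fresnel_dolp(theta_i_deg, n1=1.0, n2=1.45):
    """Fresnel reflectances for s- and p-polarised light at a smooth
    dielectric boundary, and the degree of linear polarization (DoLP)
    of the reflected beam. n2 = 1.45 is a nominal leaf-cuticle index
    used purely for illustration."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)    # Snell's law
    rs = (n1 * math.cos(ti) - n2 * math.cos(tt)) / (n1 * math.cos(ti) + n2 * math.cos(tt))
    rp = (n2 * math.cos(ti) - n1 * math.cos(tt)) / (n2 * math.cos(ti) + n1 * math.cos(tt))
    Rs, Rp = rs * rs, rp * rp
    dolp = (Rs - Rp) / (Rs + Rp)
    return Rs, Rp, dolp

# At Brewster's angle the p-component vanishes, so the reflected light
# is fully linearly polarized.
brewster_deg = math.degrees(math.atan(1.45 / 1.0))
```

At normal incidence the two reflectances coincide and the reflected light is unpolarized; the polarization signal therefore carries geometric information about leaf orientation relative to sun and sensor.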

  9. Paper-cutting operations using scissors in Drury's law tasks.

    PubMed

    Yamanaka, Shota; Miyashita, Homei

    2018-05-01

    Human performance modeling is a core topic in ergonomics. In addition to deriving models, it is important to verify the kinds of tasks that can be modeled. Drury's law is promising for path-tracking tasks such as navigating a path with pens or driving a car. We conducted an experiment based on the observation that paper-cutting tasks using scissors resemble such tasks. The results showed that cutting arc-like paths (1/4 of a circle) yielded an excellent fit with Drury's law (R^2 > 0.98), whereas cutting linear paths yielded a worse fit (R^2 > 0.87). Since linear paths yielded better fits when the data were divided by path amplitude (R^2 > 0.99 for all amplitudes), we discuss the characteristics of paper-cutting operations using scissors. Copyright © 2018 Elsevier Ltd. All rights reserved.
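    Fitting Drury's steering law is an ordinary least-squares problem in the index of difficulty A/W. The timing data below are hypothetical numbers invented to illustrate the fitting procedure, not measurements from the study.

```python
import numpy as np

# Hypothetical timing data for a steering/cutting task: rows are
# (path amplitude A in mm, path width W in mm, observed time T in s).
data = np.array([
    [100.0,  5.0, 2.9], [100.0, 10.0, 1.6], [100.0, 20.0, 1.0],
    [200.0,  5.0, 5.1], [200.0, 10.0, 2.9], [200.0, 20.0, 1.7],
    [400.0,  5.0, 9.8], [400.0, 10.0, 5.2], [400.0, 20.0, 2.8],
])
A, W, T = data[:, 0], data[:, 1], data[:, 2]

# Drury's steering law: T = a + b * (A / W), fitted by least squares.
ID = A / W                                    # index of difficulty
b, a = np.polyfit(ID, T, 1)

# Goodness of fit (R^2), the statistic quoted in the abstract.
pred = a + b * ID
r2 = 1.0 - np.sum((T - pred) ** 2) / np.sum((T - T.mean()) ** 2)
```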

  10. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    PubMed

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test, and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), in the hope of contributing to the efforts of the statistical computing community. The code includes functionality for each specific analysis model, and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most up-to-date information and newly added models.

  11. Examining the influence of link function misspecification in conventional regression models for developing crash modification factors.

    PubMed

    Wu, Lingtao; Lord, Dominique

    2017-05-01

    This study further examined the use of regression models for developing crash modification factors (CMFs), specifically focusing on the misspecification in the link function. The primary objectives were to validate the accuracy of CMFs derived from the commonly used regression models (i.e., generalized linear models or GLMs with additive linear link functions) when some of the variables have nonlinear relationships and quantify the amount of bias as a function of the nonlinearity. Using the concept of artificial realistic data, various linear and nonlinear crash modification functions (CM-Functions) were assumed for three variables. Crash counts were randomly generated based on these CM-Functions. CMFs were then derived from regression models for three different scenarios. The results were compared with the assumed true values. The main findings are summarized as follows: (1) when some variables have nonlinear relationships with crash risk, the CMFs for these variables derived from the commonly used GLMs are all biased, especially around areas away from the baseline conditions (e.g., boundary areas); (2) with the increase in nonlinearity (i.e., nonlinear relationship becomes stronger), the bias becomes more significant; (3) the quality of CMFs for other variables having linear relationships can be influenced when mixed with those having nonlinear relationships, but the accuracy may still be acceptable; and (4) the misuse of the link function for one or more variables can also lead to biased estimates for other parameters. This study highlights the importance of link function specification when using regression models for developing CMFs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Collision cross section (CCS) measurement by ion cyclotron resonance mass spectrometry with short-time Fourier transform.

    PubMed

    Hu, Miao; Zhang, Linzhou; He, Shan; Xu, Chunming; Shi, Quan

    2018-05-15

    The collision cross section (CCS) is an important shape parameter that is often used in molecular structure investigation. In Fourier transform ion cyclotron resonance mass spectrometry (FTICR-MS), the CCS affects the shape of the ion signal damping due to ion-neutral collisions, so ion CCS values can potentially be obtained from FTICR-MS with the help of a proper ion-collision model. We have developed a rapid method to obtain the ion damping profile and CCS for mixtures from only one FTICR-MS measurement. The method utilizes the short-time Fourier transform (STFT) to process FTICR-MS time-domain signals. The STFT-processed result is a three-dimensional (3D) spectrum that has a time axis in addition to the conventional mass-to-charge ratio and intensity domains. The damping profile of each ion can be recognized from the 3D spectrum. After extracting the decay profile of a specified ion, three candidate ion-neutral collision models were tested in curve fitting. The hard-sphere model proved to be suitable for our experimental setup. A linear relationship was observed between the CCS value and the hard-sphere model parameters. Therefore, the CCS values of all the peaks were obtained through the addition of internal model compounds and linear calibration. The proposed method was successfully applied to determine the CCSs of fatty acids and polyalanines in a petroleum gas oil matrix. This technique can be used for simultaneous measurement of cross sections for many ions in congested spectra. Copyright © 2018 John Wiley & Sons, Ltd.
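    The two numerical steps, estimating a damping parameter from a decay envelope and converting it to CCS via linear calibration, can be sketched as follows. The exponential decay stands in for the hard-sphere damping law, and all parameter and calibration values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated transient decay envelope for one ion; hard-sphere damping is
# approximated here by a simple exponential, with illustrative values.
t = np.linspace(0.0, 2.0, 200)
true_tau = 0.7
envelope = np.exp(-t / true_tau) * (1.0 + 0.01 * rng.normal(size=t.size))

# Estimate the damping constant from the log-envelope by least squares.
slope, _ = np.polyfit(t, np.log(np.abs(envelope)), 1)
tau_est = -1.0 / slope

# Linear CCS calibration against internal standards with known CCS:
# (damping parameter, known CCS in A^2) pairs, values hypothetical.
cal_damping = np.array([1.0, 1.5, 2.0, 2.5])
cal_ccs = np.array([120.0, 150.0, 180.0, 210.0])
c1, c0 = np.polyfit(cal_damping, cal_ccs, 1)

ccs_unknown = c1 * (1.0 / tau_est) + c0       # calibrated CCS estimate
```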

  13. A UWB Radar Signal Processing Platform for Real-Time Human Respiratory Feature Extraction Based on Four-Segment Linear Waveform Model.

    PubMed

    Hsieh, Chi-Hsuan; Chiu, Yu-Fang; Shen, Yi-Hsiang; Chu, Ta-Shun; Huang, Yuan-Hao

    2016-02-01

    This paper presents an ultra-wideband (UWB) impulse-radio radar signal processing platform used to analyze human respiratory features. Conventional radar systems used in human detection only analyze human respiration rates or the response of a target. However, additional respiratory signal information is available that has not been explored using radar detection. The authors previously proposed a modified raised cosine waveform (MRCW) respiration model and an iterative correlation search algorithm that could acquire additional respiratory features such as the inspiration and expiration speeds, respiration intensity, and respiration holding ratio. To realize real-time respiratory feature extraction by using the proposed UWB signal processing platform, this paper proposes a new four-segment linear waveform (FSLW) respiration model. This model offers a superior fit to the measured respiration signal compared with the MRCW model and decreases the computational complexity of feature extraction. In addition, an early-terminated iterative correlation search algorithm is presented, substantially decreasing the computational complexity and yielding negligible performance degradation. These extracted features can be considered the compressed signals used to decrease the amount of data storage required for use in long-term medical monitoring systems and can also be used in clinical diagnosis. The proposed respiratory feature extraction algorithm was designed and implemented using the proposed UWB radar signal processing platform including a radar front-end chip and an FPGA chip. The proposed radar system can detect human respiration rates at 0.1 to 1 Hz and facilitates the real-time analysis of the respiratory features of each respiration period.
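    A four-segment linear waveform of the kind described can be generated with a few masked array assignments. The segment boundaries and parameter names below are our own illustrative choices, not the paper's exact FSLW parameterization.

```python
import numpy as np

def fslw_period(t, t1, t2, t3, period, amp):
    """Hypothetical four-segment linear waveform: linear inspiration on
    [0, t1), hold at the peak on [t1, t2), linear expiration on [t2, t3),
    and rest until the end of the period."""
    t = np.asarray(t, dtype=float) % period
    y = np.zeros_like(t)
    rise = t < t1
    hold = (t >= t1) & (t < t2)
    fall = (t >= t2) & (t < t3)
    y[rise] = amp * t[rise] / t1
    y[hold] = amp
    y[fall] = amp * (t3 - t[fall]) / (t3 - t2)
    return y

t = np.linspace(0.0, 4.0, 401)               # one 4-s breath (0.25 Hz)
y = fslw_period(t, t1=1.0, t2=1.5, t3=3.0, period=4.0, amp=1.0)
```

Because each segment is linear, fitting the model to a measured respiration signal reduces to estimating a handful of breakpoints and slopes, which is what makes it cheaper than a raised-cosine model.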

  14. CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals.

    PubMed

    Bhhatarai, Barun; Teetz, Wolfram; Liu, Tao; Öberg, Tomas; Jeliazkova, Nina; Kochev, Nikolay; Pukalov, Ognyan; Tetko, Igor V; Kovarich, Simona; Papa, Ester; Gramatica, Paola

    2011-03-14

    Quantitative structure property relationship (QSPR) studies on per- and polyfluorinated chemicals (PFCs) on melting point (MP) and boiling point (BP) are presented. The training and prediction chemicals used for developing and validating the models were selected from Syracuse PhysProp database and literatures. The available experimental data sets were split in two different ways: a) random selection on response value, and b) structural similarity verified by self-organizing-map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Individual linear and non-linear approaches based models developed by different CADASTER partners on 0D-2D Dragon descriptors, E-state descriptors and fragment based descriptors as well as consensus model and their predictions are presented. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using PERFORCE database on 15 MP and 25 BP data respectively. This database contains only long chain perfluoro-alkylated chemicals, particularly monitored by regulatory agencies like US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets and study of applicability-domain highlighting the robustness and high accuracy of the models are discussed. Finally, MPs for additional 303 PFCs and BPs for 271 PFCs were predicted for which experimental measurements are unknown. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Analyzing Seasonal Variations in Suicide With Fourier Poisson Time-Series Regression: A Registry-Based Study From Norway, 1969-2007.

    PubMed

    Bramness, Jørgen G; Walby, Fredrik A; Morken, Gunnar; Røislien, Jo

    2015-08-01

    Seasonal variation in the number of suicides has long been acknowledged. It has been suggested that this seasonality has declined in recent years, but studies have generally used statistical methods incapable of confirming this. We examined all suicides occurring in Norway during 1969-2007 (more than 20,000 suicides in total) to establish whether seasonality decreased over time. Fitting of additive Fourier Poisson time-series regression models allowed for formal testing of a possible linear decrease in seasonality, or a reduction at a specific point in time, while adjusting for a possible smooth nonlinear long-term change without having to categorize time into discrete yearly units. The models were compared using Akaike's Information Criterion and analysis of variance. A model with a seasonal pattern was significantly superior to a model without one. There was a reduction in seasonality during the period. Both the model assuming a linear decrease in seasonality and the model assuming a change at a specific point in time were superior to a model assuming constant seasonality, thus confirming by formal statistical testing that the magnitude of the seasonality in suicides has diminished. The additive Fourier Poisson time-series regression model would also be useful for studying other temporal phenomena with seasonal components. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
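    The core of a Fourier Poisson regression is a log-rate that is linear in time plus sine/cosine seasonal terms, fitted by iteratively reweighted least squares. The sketch below uses synthetic monthly counts (the registry data are not reproduced) and a plain IRLS loop rather than the paper's full model-comparison machinery.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic monthly counts over 20 years with a mild trend and a
# seasonal component (amplitudes are illustrative).
months = np.arange(240)
eta_true = 3.0 + 0.001 * months + 0.3 * np.cos(2 * np.pi * months / 12)
y = rng.poisson(np.exp(eta_true))

# Fourier Poisson regression: log-rate = b0 + b1*t + b2*cos + b3*sin.
X = np.column_stack([
    np.ones(240),
    months.astype(float),
    np.cos(2 * np.pi * months / 12),
    np.sin(2 * np.pi * months / 12),
])
beta = np.zeros(4)
beta[0] = np.log(y.mean() + 1.0)              # safe starting point
for _ in range(25):                           # IRLS iterations
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu              # working response
    w = mu                                    # IRLS weights
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))

seasonal_amp = np.hypot(beta[2], beta[3])     # fitted seasonal amplitude
```

A declining seasonality would be modelled by letting the seasonal amplitude itself depend on time, e.g. by multiplying the sine/cosine columns by a linear term.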

  16. LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data.

    PubMed

    Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A

    2011-01-01

    Magnetic- and electric-evoked brain responses have traditionally been analyzed by comparing the peaks or mean amplitudes of signals from selected channels and averaged across trials. More recently, tools have been developed to investigate single trial response variability (e.g., EEGLAB) and to test differences between averaged evoked responses over the entire scalp and time dimensions (e.g., SPM, Fieldtrip). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data. In addition, LIMO EEG provides robust parametric tests, therefore providing a new and complementary tool in the analysis of neural evoked responses.
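    The hierarchical idea, a first-level GLM across single trials per subject followed by a second-level test across subjects, can be sketched for a single electrode/time point. Dimensions, effect sizes, and the simple one-sample t-test below are illustrative, not the toolbox's actual implementation or robust statistics.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two-level sketch for ONE electrode/time point.
n_subj, n_trials = 12, 80
true_effect = 0.5                          # condition effect, a.u.

betas = np.empty(n_subj)
for s in range(n_subj):
    cond = rng.integers(0, 2, n_trials).astype(float)   # trial condition
    X = np.column_stack([np.ones(n_trials), cond])      # design matrix
    subj_effect = true_effect + 0.1 * rng.normal()      # between-subject
    y = 1.0 + subj_effect * cond + rng.normal(0.0, 1.0, n_trials)
    # First level: per-subject GLM across single trials.
    betas[s] = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Second level: one-sample t-test on the subject-level condition betas.
t_stat = betas.mean() / (betas.std(ddof=1) / np.sqrt(n_subj))
```

In the full toolbox this computation is repeated over every electrode and time point, which is why single-trial variability can be carried up to the group level.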

  17. LIMO EEG: A Toolbox for Hierarchical LInear MOdeling of ElectroEncephaloGraphic Data

    PubMed Central

    Pernet, Cyril R.; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A.

    2011-01-01

    Magnetic- and electric-evoked brain responses have traditionally been analyzed by comparing the peaks or mean amplitudes of signals from selected channels and averaged across trials. More recently, tools have been developed to investigate single trial response variability (e.g., EEGLAB) and to test differences between averaged evoked responses over the entire scalp and time dimensions (e.g., SPM, Fieldtrip). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data. In addition, LIMO EEG provides robust parametric tests, therefore providing a new and complementary tool in the analysis of neural evoked responses. PMID:21403915

  18. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    PubMed Central

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
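    One way to sketch the idea: permute the response, record the smallest penalty at which no variable would enter the LASSO model (for centred data this is max_j |x_j'y|/n), and take an upper quantile over permutations as the selected penalty. The sizes, effect sizes, and quantile below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated regression problem with three true signals.
n, p = 200, 50
X = rng.normal(size=(n, p))
X -= X.mean(axis=0)
beta_true = np.zeros(p)
beta_true[:3] = 1.0
y = X @ beta_true + rng.normal(size=n)
y -= y.mean()

def lambda_max(X, y):
    """Smallest LASSO penalty at which all coefficients are zero."""
    return np.max(np.abs(X.T @ y)) / len(y)

# Null distribution of lambda_max under permuted (signal-free) responses.
lams = np.array([lambda_max(X, rng.permutation(y)) for _ in range(200)])
lam_perm = np.quantile(lams, 0.95)            # permutation-selected penalty
```

With real signal present, the data's own lambda_max exceeds the permutation-based penalty, so the true predictors survive at the selected penalty level.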

  19. Tori and chaos in a simple C1-system

    NASA Astrophysics Data System (ADS)

    Roessler, O. E.; Kahlert, C.; Uehleke, B.

    A piecewise-linear autonomous 3-variable ordinary differential equation is presented which permits analytical modeling of chaotic attractors. A once-differentiable system of equations is defined which consists of two linear half-systems that meet along a threshold plane. The trajectory described by each half-system is thereby continuous across the divide, forming a one-parameter family of invariant tori. The addition of a damping term produces a system of equations for various chaotic attractors. Extension of the system by means of a 4-variable generalization yields hypertori and hyperchaos. It is noted that the hierarchy established is amenable to analysis by the use of Poincare half-maps. Applications of the systems of ordinary differential equations to modeling turbulent flows are discussed.

  20. Prospective Mathematics Teachers' Sense Making of Polynomial Multiplication and Factorization Modeled with Algebra Tiles

    ERIC Educational Resources Information Center

    Caglayan, Günhan

    2013-01-01

    This study is about prospective secondary mathematics teachers' understanding and sense making of representational quantities generated by algebra tiles, the quantitative units (linear vs. areal) inherent in the nature of these quantities, and the quantitative addition and multiplication operations--referent preserving versus referent…

  1. Iterative Usage of Fixed and Random Effect Models for Powerful and Efficient Genome-Wide Association Studies

    PubMed Central

    Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu

    2016-01-01

    False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts, a Fixed Effect Model (FEM) and a Random Effect Model (REM), and used them iteratively. FEM contains testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include efficient computing time that is linear in both the number of individuals and the number of markers. A dataset with half a million individuals and half a million markers can now be analyzed within three days. PMID:26828793

  2. Temporal Drivers of Liking Based on Functional Data Analysis and Non-Additive Models for Multi-Attribute Time-Intensity Data of Fruit Chews.

    PubMed

    Kuesten, Carla; Bi, Jian

    2018-06-03

    Conventional drivers-of-liking analysis was extended with a time dimension into temporal drivers of liking (TDOL), based on functional data analysis methodology and non-additive models for multiple-attribute time-intensity (MATI) data. The non-additive models, which consider both the direct effects and the interaction effects of attributes on consumer overall liking, include the Choquet integral with a fuzzy measure from multi-criteria decision-making, and linear regression based on variance decomposition. The dynamics of TDOL, i.e., the derivatives of the relative-importance functional curves, were also explored. The well-established R packages 'fda', 'kappalab' and 'relaimpo' were used for developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.
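    The discrete Choquet integral underlying the non-additive model is short to compute: sort the attribute scores, then weight successive score increments by the capacity of the coalition of attributes still "active". The two-attribute capacity below is made up to show a positive interaction (the pair is worth more than the sum of the singletons); it is not an estimate from the paper's data.

```python
def choquet(scores, capacity):
    """Discrete Choquet integral of non-negative scores with respect to a
    capacity (fuzzy measure) given as {frozenset of attributes: weight},
    with capacity(empty set) = 0 and capacity(all attributes) = 1."""
    items = sorted(scores, key=scores.get)    # attributes, ascending score
    total, prev = 0.0, 0.0
    for i, a in enumerate(items):
        coalition = frozenset(items[i:])      # attributes scoring >= this
        total += (scores[a] - prev) * capacity[coalition]
        prev = scores[a]
    return total

# Hypothetical capacity with synergy between two fruit-chew attributes.
mu = {
    frozenset(): 0.0,
    frozenset({"sweetness"}): 0.3,
    frozenset({"chewiness"}): 0.5,
    frozenset({"sweetness", "chewiness"}): 1.0,
}
liking = choquet({"sweetness": 6.0, "chewiness": 8.0}, mu)
```

When the capacity is additive, the Choquet integral collapses to an ordinary weighted average; departures from additivity are exactly what the model uses to express attribute interactions.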

  3. Efficient parallel simulation of CO2 geologic sequestration insaline aquifers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Doughty, Christine; Wu, Yu-Shu

    2007-01-01

    An efficient parallel simulator for large-scale, long-term CO2 geologic sequestration in saline aquifers has been developed. The parallel simulator is a three-dimensional, fully implicit model that solves large, sparse linear systems arising from discretization of the partial differential equations for mass and energy balance in porous and fractured media. The simulator is based on the ECO2N module of the TOUGH2 code and inherits all the process capabilities of the single-CPU TOUGH2 code, including a comprehensive description of the thermodynamics and thermophysical properties of H2O-NaCl-CO2 mixtures, modeling single and/or two-phase isothermal or non-isothermal flow processes, two-phase mixtures, fluid phases appearing or disappearing, as well as salt precipitation or dissolution. The new parallel simulator uses MPI for parallel implementation, the METIS software package for simulation domain partitioning, and the iterative parallel linear solver package Aztec for solving linear equations by multiple processors. In addition, the parallel simulator has been implemented with an efficient communication scheme. Test examples show that a linear or super-linear speedup can be obtained on Linux clusters as well as on supercomputers. Because of the significant improvement in both simulation time and memory requirement, the new simulator provides a powerful tool for tackling larger scale and more complex problems than can be solved by single-CPU codes. A high-resolution simulation example is presented that models buoyant convection, induced by a small increase in brine density caused by dissolution of CO2.

  4. Modeling the formation of ordered nano-assemblies comprised by dendrimers and linear polyelectrolytes: The role of Coulombic interactions

    NASA Astrophysics Data System (ADS)

    Eleftheriou, E.; Karatasos, K.

    2012-10-01

    Models of mixtures of peripherally charged dendrimers with oppositely charged linear polyelectrolytes in the presence of explicit solvent are studied by means of molecular dynamics simulations. Under the influence of varying strength of electrostatic interactions, these systems appear to form dynamically arrested film-like interconnected structures in the polymer-rich phase. Acting like a pseudo-thermodynamic inverse temperature, the increase of the strength of the Coulombic interactions drives the polymeric constituents of the mixture to a gradual dynamic freezing-in. The timescale of the average density fluctuations of the formed complexes initially increases in the weak electrostatic regime, reaching a finite limit as the strength of electrostatic interactions grows. Although the models are overall electrically neutral, during this process the dendrimer/linear complexes develop a polar character with an excess charge mainly close to the periphery of the dendrimers. The morphological characteristics of the resulting pattern are found to depend on the size of the polymer chains on account of the distinct conformational features assumed by the complexed linear polyelectrolytes of different length. In addition, the length of the polymer chain appears to affect the dynamics of the counterions, thus affecting the ionic transport properties of the system. It appears, therefore, that the strength of electrostatic interactions together with the length of the linear polyelectrolytes are parameters to which these systems are particularly responsive, offering the possibility of better control over the resulting structure and the electric properties of these soft-colloidal systems.

  5. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    The additivity model assumes that field-scale reaction properties in a sediment, including surface area, reactive site concentration, and reaction rate, can be predicted from the field-scale grain-size distribution by linearly adding reaction properties estimated in the laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The result indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated; it provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to U(VI) desorption in the sediment.
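    The additivity idea itself is a mass-fraction-weighted sum. The grain-size classes, mass fractions, and per-fraction rates below are hypothetical numbers chosen only to show the arithmetic, not the paper's fitted values.

```python
# Grain-size additivity sketch: a field-scale rate parameter predicted as
# the mass-fraction-weighted sum of rates measured per size fraction.

fractions = {            # grain-size class -> mass fraction in sediment
    "<2 mm":  0.55,
    "2-8 mm": 0.30,      # gravel fraction, often ignored but significant
    ">8 mm":  0.15,
}
rates = {                # class -> lab-estimated desorption rate (1/h)
    "<2 mm":  0.040,
    "2-8 mm": 0.012,
    ">8 mm":  0.004,
}

# Mass fractions must partition the sediment.
assert abs(sum(fractions.values()) - 1.0) < 1e-9

composite_rate = sum(fractions[k] * rates[k] for k in fractions)
```

The study's caution applies here: weighting measured *rates* this way worked well, whereas the fitted rate *constants* were not directly scalable and needed the approximate model.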

  6. Multivariate modelling of density, strength, and stiffness from near infared for mature, juvenile, and pith wood of longleaf pine (Pinus Palustris)

    Treesearch

    Brian K. Via; Todd F. Shupe; Leslie H. Groom; Michael Stine; Chi-Leung So

    2003-01-01

    In manufacturing, monitoring the mechanical properties of wood with near infrared spectroscopy (NIR) is an attractive alternative to more conventional methods. However, no attention has been given to see if models differ between juvenile and mature wood. Additionally, it would be convenient if multiple linear regression (MLR) could perform well in the place of more...

  7. A geometric nonlinear degenerated shell element using a mixed formulation with independently assumed strain fields. Final Report; Ph.D. Thesis, 1989

    NASA Technical Reports Server (NTRS)

    Graf, Wiley E.

    1991-01-01

    A mixed formulation is chosen to overcome deficiencies of the standard displacement-based shell model. Element development is traced from the incremental variational principle on through to the final set of equilibrium equations. Particular attention is paid to developing specific guidelines for selecting the optimal set of strain parameters. A discussion of constraint index concepts and their predictive capability related to locking is included. Performance characteristics of the elements are assessed in a wide variety of linear and nonlinear plate/shell problems. Despite limiting the study to geometric nonlinear analysis, a substantial amount of additional insight concerning the finite element modeling of thin plate/shell structures is provided. For example, in nonlinear analysis, given the same mesh and load step size, mixed elements converge in fewer iterations than equivalent displacement-based models. It is also demonstrated that, in mixed formulations, lower order elements are preferred. Additionally, meshes used to obtain accurate linear solutions do not necessarily converge to the correct nonlinear solution. Finally, a new form of locking was identified associated with employing elements designed for biaxial bending in uniaxial bending applications.

  8. Empirical modelling to predict the refractive index of human blood.

    PubMed

    Yahya, M; Saghir, M Z

    2016-02-21

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient's condition. Measuring the optical properties of blood is difficult due to the non-homogeneous nature of the blood itself. In addition, there is considerable variation in the refractive indices reported in the literature. These considerations motivated the development of a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on hemoglobin phantom samples mimicking blood using the Abbemat Refractometer. Analysis of the results revealed a linear relationship between the refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed based on empirical modelling, which suggests that temperature and wavelength coefficients be added to the Barer formula. Verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy.
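
    The shape of such an extended Barer-type relation can be sketched as follows. The functional form (linear in concentration and temperature, Cauchy-like in wavelength) follows the abstract, but every coefficient value below is a placeholder assumption, not a fitted value from the paper:

```python
# Sketch of an extended Barer-type relation: linear in concentration and
# temperature, non-linear (Cauchy-like) in wavelength. All coefficient
# values are illustrative placeholders, not the paper's fitted values.
def refractive_index(concentration_g_dl, temp_c, wavelength_nm,
                     n0=1.333, alpha=0.0019, beta=-1e-4, b=3000.0):
    """n = n0 + alpha*C + beta*(T - 20) + b/lambda^2 (illustrative form)."""
    return (n0 + alpha * concentration_g_dl
            + beta * (temp_c - 20.0)
            + b / wavelength_nm**2)

n = refractive_index(10.0, 25.0, 589.0)
print(round(n, 4))
```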

  9. Near-road air pollutant concentrations of CO and PM 2.5: A comparison of MOBILE6.2/CALINE4 and generalized additive models

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Batterman, Stuart

    2010-05-01

    The contribution of vehicular traffic to air pollutant concentrations is often difficult to establish. This paper utilizes both time-series and simulation models to estimate vehicle contributions to pollutant levels near roadways. The time-series model used generalized additive models (GAMs) and fitted pollutant observations to traffic counts and meteorological variables. A one year period (2004) was analyzed on a seasonal basis using hourly measurements of carbon monoxide (CO) and particulate matter less than 2.5 μm in diameter (PM 2.5) monitored near a major highway in Detroit, Michigan, along with hourly traffic counts and local meteorological data. Traffic counts showed statistically significant and approximately linear relationships with CO concentrations in fall, and piecewise linear relationships in spring, summer and winter. The same period was simulated using emission and dispersion models (Motor Vehicle Emissions Factor Model/MOBILE6.2; California Line Source Dispersion Model/CALINE4). CO emissions derived from the GAM were similar, on average, to those estimated by MOBILE6.2. The same analyses for PM 2.5 showed that GAM emission estimates were much higher (by 4-5 times) than the dispersion model results, and that the traffic-PM 2.5 relationship varied seasonally. This analysis suggests that the simulation model performed reasonably well for CO, but it significantly underestimated PM 2.5 concentrations, a likely result of underestimating PM 2.5 emission factors. Comparisons between statistical and simulation models can help identify model deficiencies and improve estimates of vehicle emissions and near-road air quality.
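
    A piecewise-linear ("hinge") fit, a minimal stand-in for one smooth term of a GAM, can illustrate the kind of traffic-CO relationship described above. The data, breakpoint, and coefficients below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
traffic = rng.uniform(0, 6000, 500)   # synthetic hourly vehicle counts
# Synthetic CO: flat below an assumed 2000 veh/h breakpoint, rising above it.
co = 0.3 + 0.0004 * np.maximum(traffic - 2000, 0) + rng.normal(0, 0.05, 500)

# Piecewise-linear regression via least squares: design matrix with
# intercept, linear, and hinge terms.
X = np.column_stack([np.ones_like(traffic),
                     traffic,
                     np.maximum(traffic - 2000, 0)])
coef, *_ = np.linalg.lstsq(X, co, rcond=None)
slope_below, slope_above = coef[1], coef[1] + coef[2]
print(slope_above > slope_below)  # -> True
```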

  10. Rheology modification with ring polymers

    NASA Astrophysics Data System (ADS)

    Vlassopoulos, Dimitris

    It is now established that experimental unconcatenated ring polymers can be purified effectively by means of fractionation at the critical condition. For molecular weights well above the entanglement threshold, purified rings relax stress via a power law (with an exponent of about -0.4), sharply departing from their linear counterparts. Experimental results are in harmony with modeling predictions and simulations. Here, we present results from recent interdisciplinary efforts and discuss two challenges: (i) the nonlinear shear rheology of purified ring melts is also very different from that of unlinked chains. Whereas the latter exhibit features that can be explained, to a first approximation, in the framework of the tube model, the former behave akin to unentangled chains with finite extensibility and exhibit much smaller deformation at steady state. (ii) blends of rings and linear polymers exhibit unique features in different regimes: The addition of minute amounts of linear chains drastically affects ring dynamics. This relates to ring purity and the ability of unlinked linear chains to thread rings. With the help of simulations, it is possible to rationalize the observed surprisingly slow viscoelastic relaxation, which is attributed to ring-linear and ring-ring penetrations. On the other hand, adding small amounts of rings to linear polymers of different molecular weights influences their linear and nonlinear rheology in an unprecedented way. The blend viscosity exceeds that of the slower component (linear) in this non-interacting mixture, and its dependencies on composition and molecular weight ratio are examined, whereas the role of molecular architecture is also addressed. Consequently, closing the ends of a linear chain can serve as a powerful means for molecular manipulation of its rheology. This presentation reflects collaborative efforts with S. Costanzo, Z-C. Yan, R. Pasquino, M. Kaliva, S. Kamble, Y. Jeong, P. Lutz, J. Allgaier, T. Chang, D. Talikis, V.
Mavrantzas and M. Rubinstein.

  11. Gadolinium deposition in the brain: association with various GBCAs using a generalized additive model.

    PubMed

    Bae, Sohi; Lee, Ho-Joon; Han, Kyunghwa; Park, Yae-Won; Choi, Yoon Seong; Ahn, Sung Soo; Kim, Jinna; Lee, Seung-Koo

    2017-08-01

    To determine the relationship between the number of administrations of various gadolinium-based contrast agents (GBCAs) and increased T1 signal intensity in the globus pallidus (GP) and dentate nucleus (DN). This retrospective study included 122 patients who underwent double-dose GBCA-enhanced magnetic resonance imaging. Two radiologists calculated the GP-to-thalamus (TH) signal intensity ratio, the DN-to-pons signal intensity ratio and the relative change (R_change) between the baseline and final examinations. Interobserver agreement was evaluated. The relationships between R_change and several factors, including the number of administrations of each GBCA, were analysed using a generalized additive model. Six patients (4.9%) received linear GBCAs (mean number of administrations, 20.8; range 15-30), 44 patients (36.1%) received macrocyclic GBCAs (mean, 26.1; range 14-51) and 72 patients (59.0%) received both types of GBCAs (mean, 31.5; range 12-65). Interobserver agreement was almost perfect (0.99; 95% CI: 0.99-0.99). R_change (DN:pons) was associated with gadodiamide (p = 0.006) and gadopentetate dimeglumine (p < 0.001), but not with other GBCAs. R_change (GP:TH) was not associated with GBCA administration. Previous administration of the linear agents gadodiamide and gadopentetate dimeglumine is associated with increased T1 signal intensity in the DN, whereas macrocyclic GBCAs do not show an association. • Certain linear GBCAs are associated with T1 signal change in the dentate nucleus. • The signal change is related to the number of administrations of certain linear GBCAs. • Differences in signal change may reflect differences in the stability of the agents.

  12. Impacts analysis of car following models considering variable vehicular gap policies

    NASA Astrophysics Data System (ADS)

    Xin, Qi; Yang, Nan; Fu, Rui; Yu, Shaowei; Shi, Zhongke

    2018-07-01

    Because of the important role they play in vehicles' adaptive cruise control systems, variable vehicular gap policies were incorporated into the full velocity difference model (FVDM) to investigate traffic flow properties. In this paper, two new car-following models were put forward by taking the constant time headway (CTH) policy and the variable time headway (VTH) policy into the optimal velocity function, separately. By steady-state analysis of the new models, an equivalent optimal velocity function was defined. To determine the linear stability conditions of the new models, we introduce equivalent expressions of the safe vehicular gap, and then apply small-amplitude perturbation analysis and long-wave expansion techniques to obtain the new models' linear stability conditions. Additionally, the first-order approximate solutions of the new models were derived in the stable region, by transforming the models into typical Burgers' partial differential equations with the reductive perturbation method. The FVDM-based numerical simulations indicate that variable vehicular gap policies with proper parameters directly contribute to the improvement of traffic flow stability and the avoidance of unstable traffic phenomena.
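
    A minimal sketch of a full-velocity-difference-type acceleration with a constant-time-headway desired gap may help fix ideas. The tanh optimal velocity function and all parameter values below are illustrative assumptions, not those of the cited models:

```python
import numpy as np

# Sketch of a full-velocity-difference (FVD) car-following acceleration with
# a constant-time-headway (CTH) desired gap. Parameters are illustrative.
def fvd_acceleration(gap, v, dv, kappa=0.4, lam=0.5, v_max=30.0,
                     s0=2.0, T=1.5):
    # CTH policy: desired gap grows linearly with the follower's speed.
    s_star = s0 + T * v
    # Optimal velocity function saturating at v_max (illustrative tanh form).
    v_opt = 0.5 * v_max * (np.tanh(gap - s_star) + np.tanh(s_star))
    # Relaxation toward the optimal velocity plus velocity-difference term.
    return kappa * (v_opt - v) + lam * dv

a = fvd_acceleration(gap=60.0, v=20.0, dv=0.0)
print(a > 0)  # -> True: a large gap makes the model accelerate
```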

  13. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. 
This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
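
    The contrast between linear and RBF-kernel prediction can be sketched with a small kernel ridge regression on synthetic genotypes. The data, kernel bandwidth, and regularization below are assumptions for illustration only, not the study's GBLUP or kernel models:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "genotypes" (80 animals x 50 markers, coded 0/1/2) and phenotypes
# driven additively by the first five markers plus noise.
X = rng.choice([0.0, 1.0, 2.0], size=(80, 50))
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.5, 80)

def rbf_kernel(A, B, gamma=0.01):
    # Gaussian (RBF) kernel on pairwise squared Euclidean distances.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Kernel ridge regression: alpha = (K + lambda*I)^-1 y, predictions K @ alpha.
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + 1.0 * np.eye(len(y)), y)
pred = K @ alpha
corr = np.corrcoef(pred, y)[0, 1]
print(corr > 0.5)
```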

  14. Estimating the impact of mineral aerosols on crop yields in food insecure regions using statistical crop models

    NASA Astrophysics Data System (ADS)

    Hoffman, A.; Forest, C. E.; Kemanian, A.

    2016-12-01

    A significant number of food-insecure nations exist in regions of the world where dust plays a large role in the climate system. While the impacts of common climate variables (e.g. temperature, precipitation, ozone, and carbon dioxide) on crop yields are relatively well understood, the impact of mineral aerosols on yields has not yet been thoroughly investigated. This research aims to develop the data and tools to progress our understanding of mineral aerosol impacts on crop yields. Suspended dust affects crop yields by altering the amount and type of radiation reaching the plant and by modifying local temperature and precipitation, while dust events (i.e. dust storms) affect crop yields by depleting the soil of nutrients or by defoliation via particle abrasion. The impact of dust on yields is modeled statistically because we are uncertain which impacts will dominate the response on the national and regional scales considered in this study. Multiple linear regression is used in a number of large-scale statistical crop modeling studies to estimate yield responses to various climate variables. In alignment with previous work, we develop linear crop models, but build upon this simple method of regression with machine-learning techniques (e.g. random forests) to identify important statistical predictors and isolate how dust affects yields on the scales of interest. To perform this analysis, we develop a crop-climate dataset for maize, soybean, groundnut, sorghum, rice, and wheat for the regions of West Africa, East Africa, South Africa, and the Sahel. Random forest regression models consistently model historic crop yields better than the linear models. In several instances, the random forest models accurately capture the temperature and precipitation threshold behavior in crops. Additionally, improving agricultural technology has caused a well-documented positive trend that dominates time series of global and regional yields.
This trend is often removed before regression with traditional crop models, but likely at the cost of removing climate information. Our random forest models consistently discover the positive trend without removing any additional data. The application of random forests as a statistical crop model provides insight into understanding the impact of dust on yields in marginal food producing regions.
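
    Keeping the technology trend as a regressor, rather than detrending first, can be sketched with a small multiple linear regression on synthetic yield data (the data-generating values and coefficients below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1980, 2015)
temp = rng.normal(25, 1.5, years.size)
# Synthetic yields: a positive technology trend plus a negative
# temperature response plus noise (all coefficients illustrative).
yields = (0.05 * (years - 1980) - 0.3 * (temp - 25)
          + rng.normal(0, 0.1, years.size))

# Multiple linear regression with the trend kept as a predictor:
# design matrix [intercept, trend, temperature anomaly].
X = np.column_stack([np.ones_like(years, dtype=float),
                     years - 1980,
                     temp - 25])
coef, *_ = np.linalg.lstsq(X, yields, rcond=None)
print(coef[1] > 0 and coef[2] < 0)  # -> True: trend and climate separated
```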

  15. Primordial black holes in linear and non-linear regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allahyari, Alireza; Abolhasani, Ali Akbar; Firouzjaee, Javad T., E-mail: allahyari@physics.sharif.edu, E-mail: j.taghizadeh.f@ipm.ir

    We revisit the formation of primordial black holes (PBHs) in the radiation-dominated era for both linear and non-linear regimes, elaborating on the concept of an apparent horizon. Contrary to the expectation from vacuum models, we argue that in a cosmological setting a density fluctuation with a high density does not always collapse to a black hole. To this end, we first elaborate on the perturbation theory for spherically symmetric spacetimes in the linear regime. Thereby, we introduce two gauges. This allows us to introduce a well-defined gauge-invariant quantity for the expansion of null geodesics. Using this quantity, we argue that PBHs do not form in the linear regime irrespective of the density of the background. Finally, we consider the formation of PBHs in non-linear regimes, adopting the spherical collapse picture. In this picture, over-densities are modeled by closed FRW models in the radiation-dominated era. The difference of our approach is that we start by finding an exact solution for a closed radiation-dominated universe. This yields exact results for the turn-around time and radius. It is important that we take the initial conditions from the linear perturbation theory. Additionally, instead of using the uniform Hubble gauge condition, both density and velocity perturbations are admitted in this approach. Thereby, the matching condition will impose an important constraint on the initial velocity perturbations: δ^h_0 = −δ_0/2. This can be extended to higher orders. Using this constraint, we find that the apparent horizon of a PBH forms when δ > 3 at the turn-around time. The corrections also appear from the third order. Moreover, a PBH forms when its apparent horizon is outside the sound horizon at the re-entry time. Applying this condition, we infer that the threshold value of the density perturbations at horizon re-entry should be larger than δ_th > 0.7.

  16. Precision-feeding dairy heifers a high rumen-undegradable protein diet with different proportions of dietary fiber and forage-to-concentrate ratios.

    PubMed

    Koch, L E; Gomez, N A; Bowyer, A; Lascano, G J

    2017-12-01

    The addition of dietary fiber can alter nutrient and N utilization in precision-fed dairy heifers and may further benefit from higher inclusion levels of RUP. The objective of this experiment was to determine the effects of feeding a high-RUP diet when dietary fiber content was manipulated within differing forage-to-concentrate ratios (F:C) on nutrient utilization of precision-fed dairy heifers. Six rumen-cannulated Holstein heifers (555.4 ± 31.4 kg BW; 17.4 ± 0.1 mo) were randomly assigned to 2 levels of forage, high forage (HF; 60% forage) or low forage (LF; 45% forage), and to a fiber proportion sequence (low fiber: 100% oat hay and silage [OA], 0% wheat straw [WS]; medium fiber: 83.4% OA, 16.6% WS; and high fiber: 66.7% OA, 33.3% WS) administered according to a split-plot 3 × 3 Latin square design (21-d periods). Similar levels of N intake (1.70 g N/kg BW) and RUP (55% of CP) were provided. Data were analyzed as a split-plot, 3 × 3 Latin square design using a mixed model with fixed effects of period and treatment. A repeated measures model was used with data that had multiple measurements over time. No differences were observed for DM, OM, NDF, or ADF apparent digestibility coefficients (dC) between HF- and LF-fed heifers. Heifers receiving LF diets had greater starch dC compared to HF heifers. Increasing the fiber level through WS addition resulted in a linear reduction of OM dC. There was a linear interaction for DM dC with a concurrent linear interaction in NDF dC. Nitrogen intake, dC, and retention did not differ; however, urine and total N excretion increased linearly with added fiber. Predicted microbial CP flow (MP) linearly decreased with WS inclusion mainly in LF heifers, as indicated by a significant interaction between F:C and WS. Rumen pH linearly increased with WS addition, although no F:C effect was detected. Ruminal ammonia concentration had an opposite linear effect with respect to MP as WS increased. 
Diets with the higher proportion of fiber benefited the most from a high RUP supply, complementing the substantial reduction in predicted MP caused by the incremental dietary fiber concentration. These results suggest that RUP supplementation is a practical method for reestablishing optimal ruminal N balance in the event of increased dietary fiber through forage inclusion in precision-fed dairy heifer diets.

  17. Analysis of Binary Adherence Data in the Setting of Polypharmacy: A Comparison of Different Approaches

    PubMed Central

    Esserman, Denise A.; Moore, Charity G.; Roth, Mary T.

    2009-01-01

    Older community dwelling adults often take multiple medications for numerous chronic diseases. Non-adherence to these medications can have a large public health impact. Therefore, the measurement and modeling of medication adherence in the setting of polypharmacy is an important area of research. We apply a variety of different modeling techniques (standard linear regression; weighted linear regression; adjusted linear regression; naïve logistic regression; beta-binomial (BB) regression; generalized estimating equations (GEE)) to binary medication adherence data from a study in a North Carolina based population of older adults, where each medication an individual was taking was classified as adherent or non-adherent. In addition, through simulation we compare these different methods based on Type I error rates, bias, power, empirical 95% coverage, and goodness of fit. We find that estimation and inference using GEE is robust to a wide variety of scenarios and we recommend using this in the setting of polypharmacy when adherence is dichotomously measured for multiple medications per person. PMID:20414358
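
    The core issue, within-person correlation of adherence outcomes inflating uncertainty beyond what a naive binomial analysis reports, can be sketched by simulation. The cluster-level aggregation below captures the idea behind GEE with an exchangeable working correlation, though it is not a full GEE fit, and all simulation parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_people, n_meds = 200, 6
# Person-level adherence propensities: overdispersion across people
# (a beta-binomial data-generating process, parameters illustrative).
p_i = rng.beta(4, 2, n_people)
adherent = rng.binomial(1, p_i[:, None], (n_people, n_meds))

# Naive SE treats all person-medication outcomes as independent Bernoulli.
flat = adherent.ravel()
p_hat = flat.mean()
se_naive = np.sqrt(p_hat * (1 - p_hat) / flat.size)

# Cluster-robust SE: aggregate within person first, so within-person
# correlation is respected (the intuition behind GEE/sandwich variance).
person_means = adherent.mean(axis=1)
se_robust = person_means.std(ddof=1) / np.sqrt(n_people)

print(se_robust > se_naive)  # clustering inflates the true uncertainty
```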

  18. Impact of a large density gradient on linear and nonlinear edge-localized mode simulations

    DOE PAGES

    Xi, P. W.; Xu, X. Q.; Xia, T. Y.; ...

    2013-09-27

    Here, the impact of a large density gradient on edge-localized modes (ELMs) is studied linearly and nonlinearly by employing both two-fluid and gyro-fluid simulations. In two-fluid simulations, the ion diamagnetic stabilization on high-n modes disappears when the large density gradient is taken into account. But gyro-fluid simulations show that the finite Larmor radius (FLR) effect can effectively stabilize high-n modes, so the ion diamagnetic effect alone is not sufficient to represent the FLR stabilizing effect. We further demonstrate that additional gyroviscous terms must be kept in the two-fluid model to recover the linear results from the gyro-fluid model. Nonlinear simulations show that the density variation significantly weakens the E × B shearing at the top of the pedestal and thus leads to more energy loss during ELMs. The turbulence spectrum after an ELM crash is measured and follows the relation P(k_z) ∝ k_z^{-3.3}.
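
    Recovering such a spectral power-law exponent by a straight-line fit in log-log space can be sketched on synthetic, noise-free data (the spectrum below is generated, not the simulation's measurement):

```python
import numpy as np

# Synthetic power-law spectrum P(k) = k^-3.3; a straight-line fit of
# log P against log k recovers the exponent as the slope.
k = np.logspace(0, 2, 50)
P = k ** -3.3
slope, _ = np.polyfit(np.log(k), np.log(P), 1)
print(round(slope, 1))  # -> -3.3
```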

  19. Analytical prediction of digital signal crosstalk of FCC

    NASA Technical Reports Server (NTRS)

    Belleisle, A. P.

    1972-01-01

    The results are presented of a study whose aim was the development of accurate means of analyzing and predicting signal crosstalk in multi-wire digital data cables. A complete analytical model is developed for n + 1 wire systems of uniform transmission lines with arbitrary linear boundary conditions. In addition, a minimum set of parameter measurements required for the application of the model is presented. Comparisons between crosstalk predicted by this model and actual measured crosstalk are shown for a six-conductor ribbon cable.

  20. Modulation of additive and interactive effects in lexical decision by trial history.

    PubMed

    Masson, Michael E J; Kliegl, Reinhold

    2013-05-01

    Additive and interactive effects of word frequency, stimulus quality, and semantic priming have been used to test theoretical claims about the cognitive architecture of word-reading processes. Additive effects among these factors have been taken as evidence for discrete-stage models of word reading. We present evidence from linear mixed-model analyses applied to 2 lexical decision experiments indicating that apparent additive effects can be the product of aggregating over- and underadditive interaction effects that are modulated by recent trial history, particularly the lexical status and stimulus quality of the previous trial's target. Even a simple practice effect expressed as improved response speed across trials was powerfully modulated by the nature of the previous target item. These results suggest that additivity and interaction between factors may reflect trial-to-trial variation in stimulus representations and decision processes rather than fundamental differences in processing architecture.

  1. Self-organizing biochemical cycle in dynamic feedback with soil structure

    NASA Astrophysics Data System (ADS)

    Vasilyeva, Nadezda; Vladimirov, Artem; Smirnov, Alexander; Matveev, Sergey; Tyrtyshnikov, Evgeniy; Yudina, Anna; Milanovskiy, Evgeniy; Shein, Evgeniy

    2016-04-01

    In the present study we perform bifurcation analysis of a physically based mathematical model of self-organized structures in soil (Vasilyeva et al., 2015). The state variables in this model include microbial biomass, two organic matter types, oxygen, carbon dioxide, water content and capillary pore size. According to our previous experimental studies, organic matter affinity to water is an important property affecting soil structure. Therefore, organic matter wettability was taken as the principal distinction between the organic matter types in this model. The model considers generally known biological feedbacks with soil physical properties, formulated as a system of parabolic-type non-linear partial differential equations with elements of discrete modeling for water and pore formation. The model shows complex behavior, involving the emergence of irregular temporal and spatial auto-oscillations from initially homogeneous distributions. The energy of the external impact on the system was defined by a constant oxygen level on the boundary. Non-linear, as opposed to linear, oxygen diffusion makes it possible to model the formation of anaerobic micro-zones (an organic matter conservation mechanism). For the current study we also introduced population competition between three types of microorganisms differing in mobility/feeding (diffusive, moving and fungal growth). The strongly non-linear system was solved and parameterized by a time-optimized algorithm combining explicit and implicit methods (a matrix form of the Thomas algorithm), with the time step chosen according to accuracy control. The integral flux of the CO2 state variable was used as a macroscopic parameter to describe the system as a whole, and validation was carried out on temperature and moisture dependence series for soil heterotrophic respiration data.
Thus, soil heterotrophic respiration can be naturally modeled as an integral result of complex dynamics on the microscale, arising from biological processes formulated as a sum of products of state variables, with no need to introduce any saturation functions, such as Michaelis-Menten type kinetics, inside the model. The analyzed dynamic soil model is being further developed to describe soil structure formation and its effect on organic matter decomposition at the macro-scale, to predict changes under external perturbations. To link micro- and macro-scales we additionally model the soil particle aggregation process. The results from the local biochemical soil organic matter cycle serve as inputs to the aggregation process, while the output aggregate size distributions define physical properties in the soil profile; these in turn serve as dynamic parameters in the local biochemical cycles. The additional formulation is a system of non-linear ordinary differential equations, including Smoluchowski-type equations for aggregation and reaction kinetics equations for coagulation/adsorption/adhesion processes. Vasilyeva N.A., Ingtem J.G., Silaev D.A. Nonlinear dynamical model of microbial growth in soil medium. Computational Mathematics and Modeling, vol. 49, p. 31-44, 2015 (in Russian). English version expected in vol. 27, issue 2, 2016.
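
    A single explicit Euler step of a discrete Smoluchowski coagulation system sketches the type of aggregation equation mentioned above. The constant collision kernel and starting state are illustrative simplifications, not the study's formulation:

```python
import numpy as np

# One explicit Euler step of the discrete Smoluchowski coagulation
# equations with a constant kernel K = 1 (illustrative simplification).
def smoluchowski_step(n, dt=0.01):
    """n[i] = number density of aggregates of size i+1."""
    m = len(n)
    dn = np.zeros(m)
    for i in range(m):
        # Gain: collisions of sizes (j+1) + (i-j) forming size i+1.
        gain = 0.5 * sum(n[j] * n[i - 1 - j] for j in range(i))
        # Loss: aggregates of size i+1 colliding with anything.
        loss = n[i] * n.sum()
        dn[i] = gain - loss
    return n + dt * dn

# Start from monomers only; one step converts some monomers to dimers.
n = smoluchowski_step(np.array([1.0, 0.0, 0.0]))
print(np.round(n, 3).tolist())  # -> [0.99, 0.005, 0.0]
```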

  2. Two algorithms for neural-network design and training with application to channel equalization.

    PubMed

    Sweatman, C Z; Mulgrew, B; Gibson, G J

    1998-01-01

    We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.
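
    The error-correction idea behind the perceptron learning slab algorithm can be sketched with a plain perceptron separating two disjoint point sets. The data and the update rule below are a textbook illustration under assumed separable clusters, not the PLSA itself:

```python
import numpy as np

# Perceptron error-correction on two well-separated synthetic clusters:
# iteratively correct misclassified points until the sets are separated.
rng = np.random.default_rng(3)
A = rng.normal([2, 2], 0.5, (20, 2))    # class +1
B = rng.normal([-2, -2], 0.5, (20, 2))  # class -1
X = np.vstack([A, B])
y = np.array([1] * 20 + [-1] * 20)

w = np.zeros(2)
b = 0.0
for _ in range(100):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:  # misclassified (or on the boundary)
            w += yi * xi            # error-correction update
            b += yi
            errors += 1
    if errors == 0:                 # converged: all points separated
        break

print(np.all(y * (X @ w + b) > 0))  # -> True
```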

  3. Additive-Multiplicative Approximation of Genotype-Environment Interaction

    PubMed Central

    Gimelfarb, A.

    1994-01-01

    A model of genotype-environment interaction in quantitative traits is considered. The model represents an expansion of the traditional additive (first-degree polynomial) approximation of genotypic and environmental effects to a second-degree polynomial incorporating a multiplicative term besides the additive terms. An experimental evaluation of the model is suggested and applied to a trait in Drosophila melanogaster. The environmental variance of a genotype in the model is shown to be a function of the genotypic value: it is a convex parabola. The broad-sense heritability in a population depends not only on the genotypic and environmental variances, but also on the position of the genotypic mean in the population relative to the minimum of the parabola. It is demonstrated, using the model, that G×E interaction may cause a substantial non-linearity in offspring-parent regression and a reversed response to directional selection. It is also shown that directional selection may be accompanied by an increase in the heritability. PMID:7896113
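
    The additive-multiplicative approximation and its convex-parabola environmental variance can be sketched directly. Writing the phenotype as P = μ + g + e + k·g·e (an assumed notation) gives Var(P | g) = (1 + k·g)² Var(e), a convex parabola in g minimized at g = −1/k; the parameter values below are illustrative:

```python
import numpy as np

# Additive-multiplicative approximation: P = mu + g + e + k*g*e, so
# Var(P | g) = (1 + k*g)**2 * Var(e) -- a convex parabola in g.
# k and Var(e) are illustrative values.
k, var_e = 0.5, 1.0
g = np.linspace(-3, 3, 7)            # genotypic values -3, -2, ..., 3
env_var = (1 + k * g) ** 2 * var_e

print(g[np.argmin(env_var)])         # -> -2.0, the minimum at g = -1/k
```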

  4. On the Effectiveness of Security Countermeasures for Critical Infrastructures.

    PubMed

    Hausken, Kjell; He, Fei

    2016-04-01

    A game-theoretic model is developed where an infrastructure of N targets is protected against terrorism threats. An original threat score is determined by the terrorist's threat against each target and the government's inherent protection level and original protection. The final threat score is impacted by the government's additional protection. We investigate and verify the effectiveness of countermeasures using empirical data and two methods. The first is to estimate the model's parameter values to minimize the sum of the squared differences between the government's additional resource investment predicted by the model and the empirical data. The second is to develop a multivariate regression model where the final threat score varies approximately linearly relative to the original threat score, sectors, and threat scenarios, and depends nonlinearly on the additional resource investment. The model and method are offered as tools, and as a way of thinking, to determine optimal resource investments across vulnerable targets subject to terrorism threats. © 2014 Society for Risk Analysis.
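
    The first calibration method, choosing parameter values to minimize the sum of squared differences between predicted and observed additional investment, has a closed form for a one-parameter linear model. The threat scores and observed investments below are synthetic placeholders, not the study's empirical data:

```python
import numpy as np

# Least-squares calibration of a one-parameter model: predicted additional
# investment = c * threat score. Data are synthetic placeholders.
threat = np.array([2.0, 4.0, 6.0, 8.0])
observed = np.array([1.1, 1.9, 3.2, 3.8])

# Closed-form minimizer of sum((observed - c*threat)^2).
c = (threat @ observed) / (threat @ threat)
residual = np.sum((observed - c * threat) ** 2)
print(round(c, 3))  # -> 0.495
```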

  5. Exhibit D modular design attitude control system study

    NASA Technical Reports Server (NTRS)

    Chichester, F.

    1984-01-01

    A dynamically equivalent four-body approximation of the NASTRAN finite element model supplied for the hybrid deployable truss was investigated to support the digital computer simulation of the ten-body model of the flexible space platform that incorporates the four-body truss model. Sensitivity coefficients of the state variables of the linearized model of the three-axis rotational dynamics of the prototype flexible spacecraft were generated with respect to the model's parameters. Software changes required to accommodate the addition of another rigid body to the five-body model of the rotational dynamics of the prototype flexible spacecraft were evaluated.

  6. Unraveling additive from nonadditive effects using genomic relationship matrices.

    PubMed

    Muñoz, Patricio R; Resende, Marcio F R; Gezan, Salvador A; Resende, Marcos Deon Vilela; de Los Campos, Gustavo; Kirst, Matias; Huber, Dudley; Peter, Gary F

    2014-12-01

    The application of quantitative genetics in plant and animal breeding has largely focused on additive models, which may also capture dominance and epistatic effects. Partitioning genetic variance into its additive and nonadditive components using pedigree-based models (pedigree-based best linear unbiased prediction, P-BLUP) is difficult with most commonly available family structures. However, the availability of dense panels of molecular markers makes possible the use of additive- and dominance-realized genomic relationships for the estimation of variance components and the prediction of genetic values (G-BLUP). We evaluated height data from a multifamily population of the tree species Pinus taeda with a systematic series of models accounting for additive, dominance, and first-order epistatic interactions (additive by additive, dominance by dominance, and additive by dominance), using either pedigree- or marker-based information. We show that, compared with the pedigree, use of realized genomic relationships in marker-based models yields a substantially more precise separation of additive and nonadditive components of genetic variance. We conclude that the marker-based relationship matrices in a model including additive and nonadditive effects performed better, improving breeding value prediction. Moreover, our results suggest that, for tree height in this population, the additive and nonadditive components of genetic variance are similar in magnitude. This novel result improves our current understanding of the genetic control and architecture of a quantitative trait and should be considered when developing breeding strategies. Copyright © 2014 by the Genetics Society of America.
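    The additive realized genomic relationship matrix used in such G-BLUP models is commonly built with VanRaden's first method, G = ZZ′ / (2·Σp(1−p)), where Z centres 0/1/2 genotype codes by twice the allele frequency. A minimal sketch, with a made-up genotype matrix (the paper's actual marker data and any dominance matrices are not reproduced here):

    ```python
    # Hedged sketch of a marker-based additive relationship matrix
    # (VanRaden method 1). The 0/1/2 genotype matrix below is invented
    # purely for illustration.
    import numpy as np

    def vanraden_G(M):
        """Additive genomic relationship matrix from an
        (individuals x markers) matrix of 0/1/2 allele counts."""
        p = M.mean(axis=0) / 2.0            # allele frequency per marker
        Z = M - 2.0 * p                     # centred genotypes
        denom = 2.0 * np.sum(p * (1.0 - p))
        return Z @ Z.T / denom

    M = np.array([[0, 1, 2, 1],
                  [2, 1, 0, 1],
                  [1, 2, 1, 0]], dtype=float)
    G = vanraden_G(M)

    assert G.shape == (3, 3)
    assert np.allclose(G, G.T)              # relationship matrices are symmetric
    ```

    The matrix G then replaces the pedigree-derived numerator relationship matrix in the mixed-model equations, which is what gives the sharper separation of variance components reported above.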

  7. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.
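    For a linear forward model y = Ax + n with additive white Gaussian noise, the two-step ML estimate described above reduces to a cross-correlation A^H y (the back-projection-like step) followed by the system inversion (A^H A)^{-1} A^H y. A sketch on a small synthetic complex system (not SAR data):

    ```python
    # Two-step ML estimation for a linear model y = A x + n:
    # step 1: cross-correlate the data with the model (A^H y);
    # step 2: invert the full system (A^H A)^{-1} A^H y.
    # The complex matrix A is synthetic, standing in for the SAR forward model.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(8, 3)) + 1j * rng.normal(size=(8, 3))
    x_true = np.array([1.0 + 0.5j, -2.0, 0.25j])
    y = A @ x_true                              # noiseless data for clarity

    step1 = A.conj().T @ y                      # cross-correlation step
    x_ml = np.linalg.solve(A.conj().T @ A, step1)   # system inversion step

    assert np.allclose(x_ml, x_true)
    ```

    Stopping after step 1 corresponds to back-projection with the sidelobe structure still present; step 2 is what deconvolves the spatially variant impulse responses.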

  8. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE PAGES

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    2016-12-01

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.

  9. The quest for solvable multistate Landau-Zener models

    DOE PAGES

    Sinitsyn, Nikolai A.; Chernyak, Vladimir Y.

    2017-05-24

    Recently, integrability conditions (ICs) in multistate Landau-Zener (MLZ) theory were proposed. They describe common properties of all known solved systems with linearly time-dependent Hamiltonians. Here we show that ICs enable an efficient computer-assisted search for new solvable MLZ models that span a complexity range from several interacting states to mesoscopic systems with many-body dynamics and combinatorially large phase space. This diversity suggests that nontrivial solvable MLZ models are numerous. Additionally, we refine the formulation of the ICs and extend the class of solvable systems to models with points of multiple diabatic level crossing.

  10. The origin of radio pulsar polarization

    NASA Astrophysics Data System (ADS)

    Dyks, J.

    2017-12-01

    Polarization of radio pulsar profiles involves a number of poorly understood, intriguing phenomena, such as the existence of comparable amounts of orthogonal polarization modes (OPMs), strong distortions of polarization angle (PA) curves into shapes inconsistent with the rotating vector model (RVM), and the strong circular polarization V which can be maximum (instead of zero) at the OPM jumps. It is shown that the comparable OPMs and large V result from a coherent addition of phase-delayed waves in natural propagation modes, which are produced by a linearly polarized emitted signal. The coherent mode summation implies opposite polarization properties to those known from the incoherent case, in particular, the OPM jumps occur at peaks of V, whereas V changes sign at a maximum linear polarization fraction L/I. These features are indispensable to interpret various observed polarization effects. It is shown that statistical properties of emission and propagation can be efficiently parametrized in a simple model of coherent mode addition, which is successfully applied to complex polarization phenomena, such as the stepwise PA curve of PSR B1913+16 and the strong PA distortions within core components of pulsars B1933+16 and B1237+25. The inclusion of coherent mode addition opens the possibility for a number of new polarization effects, such as inversion of relative modal strength, twin minima in L/I coincident with peaks in V, 45° PA jumps in weakly polarized emission, and loop-shaped core PA distortions. The empirical treatment of the coherency of mode addition makes it possible to advance the understanding of pulsar polarization beyond the RVM model.
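    The coherent mode-addition picture above can be sketched with the textbook Stokes parameters of a field E = (Ax, Ay·e^{iδ}): for equal-amplitude orthogonal modes, circular polarization V peaks at a quarter-wave delay exactly where linear polarization L vanishes. The amplitudes and delay below are illustrative, not fitted pulsar values.

    ```python
    # Stokes parameters for the coherent sum of two orthogonal, phase-delayed
    # polarization modes E = (Ax, Ay * exp(i*delta)). Amplitudes and delay
    # are illustrative assumptions.
    import math

    def stokes(Ax, Ay, delta):
        I = Ax**2 + Ay**2
        Q = Ax**2 - Ay**2
        U = 2 * Ax * Ay * math.cos(delta)
        V = 2 * Ax * Ay * math.sin(delta)
        return I, Q, U, V

    # Equal-amplitude modes with a quarter-wave delay: purely circular
    # polarization (|V| = I, L = sqrt(Q^2 + U^2) = 0) ...
    I, Q, U, V = stokes(1.0, 1.0, math.pi / 2)
    assert abs(V - I) < 1e-12 and math.hypot(Q, U) < 1e-12

    # ... while zero delay gives purely linear polarization (V = 0).
    I, Q, U, V = stokes(1.0, 1.0, 0.0)
    assert V == 0.0 and abs(math.hypot(Q, U) - I) < 1e-12
    ```

    This is the sense in which coherent summation inverts the incoherent intuition: V is largest precisely where the two modes contribute equally, i.e. at the OPM transition.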

  11. Random regression analyses using B-spline functions to model growth of Nellore cattle.

    PubMed

    Boligon, A A; Mercadante, M E Z; Lôbo, R B; Baldi, F; Albuquerque, L G

    2012-02-01

    The objective of this study was to estimate (co)variance components using random regression on B-spline functions to weight records obtained from birth to adulthood. A total of 82 064 weight records of 8145 females obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil) which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of age (cubic regression) were considered as random covariate. The random effects were modeled using B-spline functions considering linear, quadratic and cubic polynomials for each individual segment. Residual variances were grouped in five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used for the estimation of maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses with nine weight traits and to a random regression model that used orthogonal Legendre polynomials. A model fitting quadratic B-splines, with four knots or three segments for direct additive genetic effect and animal permanent environmental effect and two knots for maternal additive genetic effect and maternal permanent environmental effect, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight, such as at young ages, should be performed taking into account an increase in mature cow weight. 
Particularly, this is important in most of Nellore beef cattle production systems, where the cow herd is maintained on range conditions. There is limited modification of the growth curve of Nellore cattle with respect to the aim of selecting them for rapid growth at young ages while maintaining constant adult weight.
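    The quadratic B-spline covariates used in the random regression above can be sketched with the Cox-de Boor recursion on a clamped knot vector; the knot placement below (three segments) is illustrative, not the Nellore analysis.

    ```python
    # Quadratic B-spline basis functions via the Cox-de Boor recursion,
    # on an illustrative clamped knot vector with three segments.

    def bspline_basis(i, p, t, x):
        """Value of the i-th degree-p B-spline basis function on knots t at x."""
        if p == 0:
            return 1.0 if t[i] <= x < t[i + 1] else 0.0
        left = right = 0.0
        if t[i + p] != t[i]:
            left = (x - t[i]) / (t[i + p] - t[i]) * bspline_basis(i, p - 1, t, x)
        if t[i + p + 1] != t[i + 1]:
            right = ((t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1])
                     * bspline_basis(i + 1, p - 1, t, x))
        return left + right

    degree = 2
    knots = [0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0]   # clamped, 3 segments
    n_basis = len(knots) - degree - 1                   # 5 basis functions

    # Inside the domain the basis functions sum to one (partition of unity),
    # which is what makes them a well-behaved local regression basis.
    for x in [0.0, 0.5, 1.5, 2.9]:
        total = sum(bspline_basis(i, degree, knots, x) for i in range(n_basis))
        assert abs(total - 1.0) < 1e-12
    ```

    Each basis function is nonzero over only a few adjacent segments, which is why B-spline random regression avoids the end-of-trajectory artifacts that high-order Legendre polynomials can show.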

  12. A Fully Associative, Non-Linear Kinematic, Unified Viscoplastic Model for Titanium Based Matrices

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Saleeb, A. F.; Castelli, M. G.

    1994-01-01

    Specific forms for both the Gibbs and complementary dissipation potentials are chosen such that a complete (i.e., fully associative) potential based multiaxial unified viscoplastic model is obtained. This model possesses one tensorial internal state variable that is associated with dislocation substructure, with an evolutionary law that has nonlinear kinematic hardening and both thermal and strain induced recovery mechanisms. A unique aspect of the present model is the inclusion of non-linear hardening through the use of a compliance operator, derived from the Gibbs potential, in the evolution law for the back stress. This non-linear tensorial operator is significant in that it allows both the flow and evolutionary laws to be fully associative (and therefore easily integrated) and greatly influences the multiaxial response under non-proportional loading paths. In addition to this nonlinear compliance operator, a new consistent, potential preserving, internal strain unloading criterion has been introduced to prevent abnormalities in the predicted stress-strain curves, which are present with nonlinear hardening formulations, during unloading and reversed loading of the external variables. Specification of an experimental program for the complete determination of the material functions and parameters for characterizing a metallic matrix, e.g., TIMETAL 21S, is given. The experiments utilized are tensile, creep, and step creep tests. Finally, a comparison of this model and a commonly used Bodner-Partom model is made on the basis of predictive accuracy and numerical efficiency.

  13. Modeling the Geographic Consequence and Pattern of Dengue Fever Transmission in Thailand.

    PubMed

    Bekoe, Collins; Pansombut, Tatdow; Riyapan, Pakwan; Kakchapati, Sampurna; Phon-On, Aniruth

    2017-05-04

    Dengue fever is an infectious disease that remains a public health problem in Thailand. This study considers in detail the geographic, seasonal and temporal patterns of dengue fever transmission among the 76 provinces of Thailand from 2003 to 2015. A cross-sectional study. The data for the study came from the Department of Disease Control under the Bureau of Epidemiology, Thailand. The effects of quarter and location on the transmission of dengue were modeled using an alternative additive log-linear model. The model fitted well, as illustrated by the residual plots. The model also showed that dengue fever is high in the second quarter of every year, from May to August. There was evidence of an increasing annual trend in dengue from 2003 to 2015. There were differences in the distribution of dengue fever within and between provinces. The areas of high risk were the central and southern regions of Thailand. The log-linear model provided a simple means of modeling dengue fever transmission. The results are important for understanding the geographic distribution of dengue fever patterns.
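    An additive log-linear model of this kind has effects that are additive on the log scale and multiplicative on the count scale: log(μ) = intercept + quarter effect + province effect. A minimal sketch with invented effect values (not the study's estimates):

    ```python
    # Sketch of an additive log-linear model for expected case counts:
    # log(mu) = intercept + quarter effect + province effect.
    # All effect values are illustrative assumptions.
    import math

    intercept = 3.0
    quarter_effect = {1: 0.0, 2: 0.9, 3: 0.4, 4: 0.1}     # peak in quarter 2
    province_effect = {"central": 0.5, "south": 0.6, "north": 0.0}

    def expected_cases(quarter, province):
        return math.exp(intercept + quarter_effect[quarter]
                        + province_effect[province])

    # Additivity on the log scale ...
    assert math.isclose(
        math.log(expected_cases(2, "south")),
        intercept + quarter_effect[2] + province_effect["south"],
    )
    # ... means multiplicativity on the count scale: the quarter-2 ratio is
    # identical in every province under this model.
    r_central = expected_cases(2, "central") / expected_cases(1, "central")
    r_south = expected_cases(2, "south") / expected_cases(1, "south")
    assert math.isclose(r_central, r_south)
    ```

    In practice the effects would be fitted by Poisson regression; the sketch shows only the model structure that makes seasonal and geographic effects separable.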

  14. A Three-Dimensional Linearized Unsteady Euler Analysis for Turbomachinery Blade Rows

    NASA Technical Reports Server (NTRS)

    Montgomery, Matthew D.; Verdon, Joseph M.

    1996-01-01

    A three-dimensional, linearized, Euler analysis is being developed to provide an efficient unsteady aerodynamic analysis that can be used to predict the aeroelastic and aeroacoustic response characteristics of axial-flow turbomachinery blading. The field equations and boundary conditions needed to describe nonlinear and linearized inviscid unsteady flows through a blade row operating within a cylindrical annular duct are presented. In addition, a numerical model for linearized inviscid unsteady flow, which is based upon an existing nonlinear, implicit, wave-split, finite volume analysis, is described. These aerodynamic and numerical models have been implemented into an unsteady flow code, called LINFLUX. A preliminary version of the LINFLUX code is applied herein to selected, benchmark three-dimensional, subsonic, unsteady flows, to illustrate its current capabilities and to uncover existing problems and deficiencies. The numerical results indicate that good progress has been made toward developing a reliable and useful three-dimensional prediction capability. However, some problems, associated with the implementation of an unsteady displacement field and numerical errors near solid boundaries, still exist. Also, accurate far-field conditions must be incorporated into the LINFLUX analysis, so that this analysis can be applied to unsteady flows driven by external aerodynamic excitations.

  15. Modelling leaf photosynthetic and transpiration temperature-dependent responses in Vitis vinifera cv. Semillon grapevines growing in hot, irrigated vineyard conditions

    PubMed Central

    Greer, Dennis H.

    2012-01-01

    Background and aims Grapevines growing in Australia are often exposed to very high temperatures and the question of how the gas exchange processes adjust to these conditions is not well understood. The aim was to develop a model of photosynthesis and transpiration in relation to temperature to quantify the impact of the growing conditions on vine performance. Methodology Leaf gas exchange was measured along the grapevine shoots in accordance with their growth and development over several growing seasons. Using a general linear statistical modelling approach, photosynthesis and transpiration were modelled against leaf temperature separated into bands and the model parameters and coefficients applied to independent datasets to validate the model. Principal results Photosynthesis, transpiration and stomatal conductance varied along the shoot, with early emerging leaves having the highest rates, but these declined as later emerging leaves increased their gas exchange capacities in accordance with development. The general linear modelling approach applied to these data revealed that photosynthesis at each temperature was additively dependent on stomatal conductance, internal CO2 concentration and photon flux density. The temperature-dependent coefficients for these parameters applied to other datasets gave a predicted rate of photosynthesis that was linearly related to the measured rates, with a 1 : 1 slope. Temperature-dependent transpiration was multiplicatively related to stomatal conductance and the leaf to air vapour pressure deficit and applying the coefficients also showed a highly linear relationship, with a 1 : 1 slope between measured and modelled rates, when applied to independent datasets. Conclusions The models developed for the grapevines were relatively simple but accounted for much of the seasonal variation in photosynthesis and transpiration. 
The goodness of fit in each case demonstrated that explicitly selecting leaf temperature as a model parameter, rather than including temperature intrinsically as is usually done in more complex models, was warranted. PMID:22567220
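    The general-linear-model step described above (photosynthesis regressed additively on stomatal conductance, internal CO2 and photon flux density within a temperature band) can be sketched by ordinary least squares on synthetic data; the coefficients and data ranges below are assumptions, not the paper's fitted values.

    ```python
    # Hedged sketch of an additive general linear model for photosynthesis:
    # A = b0 + b1*gs + b2*Ci + b3*PPFD, fitted by ordinary least squares.
    # Coefficients and data are synthetic, chosen only for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    gs = rng.uniform(0.1, 0.6, n)        # stomatal conductance
    ci = rng.uniform(200, 300, n)        # internal CO2 concentration
    ppfd = rng.uniform(500, 1500, n)     # photon flux density
    true_beta = np.array([2.0, 15.0, 0.02, 0.004])   # intercept + 3 slopes

    X = np.column_stack([np.ones(n), gs, ci, ppfd])
    A = X @ true_beta                    # noiseless synthetic photosynthesis

    beta_hat, *_ = np.linalg.lstsq(X, A, rcond=None)
    assert np.allclose(beta_hat, true_beta)
    ```

    Fitting such a model separately within each leaf-temperature band, then applying the band-specific coefficients to independent data, mirrors the validation approach reported above.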

  16. Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.

    PubMed

    Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie

    2016-12-07

    A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets.
Whereas the linear encoding model poorly reflects selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data. Copyright © 2016 Yildiz et al.

  17. Linearized simulation of flow over wind farms and complex terrains.

    PubMed

    Segalini, Antonio

    2017-04-13

    The flow over complex terrains and wind farms is estimated here by numerically solving the linearized Navier-Stokes equations. The equations are linearized around the unperturbed incoming wind profile, here assumed logarithmic. The Boussinesq approximation is used to model the Reynolds stress with a prescribed turbulent eddy viscosity profile. Without requiring the boundary-layer approximation, two new linear equations are obtained for the vertical velocity and the wall-normal vorticity, with a reduction in the computational cost by a factor of 8 when compared with a primitive-variables formulation. The presence of terrain elevation is introduced as a vertical coordinate shift, while forestry or wind turbines are included as body forces, without any assumption about the wake structure for the turbines. The model is first validated against some available experiments and simulations, and then a simulation of a wind farm over a Gaussian hill is performed. The speed-up effect of the hill is clearly beneficial in terms of the available momentum upstream of the crest, while downstream of it the opposite can be said as the turbines face a decreased wind speed. Also, the presence of the hill introduces an additional spanwise velocity component that may also affect the turbines' operations. The linear superposition of the flow over the hill and the flow over the farm alone provided a first estimation of the wind speed along the farm, with discrepancies of the same order of magnitude for the spanwise velocity. Finally, the possibility of using a parabolic set of equations to obtain the turbulent kinetic energy after the linearized model is investigated with promising results. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).

  18. Linearized simulation of flow over wind farms and complex terrains

    NASA Astrophysics Data System (ADS)

    Segalini, Antonio

    2017-03-01

    The flow over complex terrains and wind farms is estimated here by numerically solving the linearized Navier-Stokes equations. The equations are linearized around the unperturbed incoming wind profile, here assumed logarithmic. The Boussinesq approximation is used to model the Reynolds stress with a prescribed turbulent eddy viscosity profile. Without requiring the boundary-layer approximation, two new linear equations are obtained for the vertical velocity and the wall-normal vorticity, with a reduction in the computational cost by a factor of 8 when compared with a primitive-variables formulation. The presence of terrain elevation is introduced as a vertical coordinate shift, while forestry or wind turbines are included as body forces, without any assumption about the wake structure for the turbines. The model is first validated against some available experiments and simulations, and then a simulation of a wind farm over a Gaussian hill is performed. The speed-up effect of the hill is clearly beneficial in terms of the available momentum upstream of the crest, while downstream of it the opposite can be said as the turbines face a decreased wind speed. Also, the presence of the hill introduces an additional spanwise velocity component that may also affect the turbines' operations. The linear superposition of the flow over the hill and the flow over the farm alone provided a first estimation of the wind speed along the farm, with discrepancies of the same order of magnitude for the spanwise velocity. Finally, the possibility of using a parabolic set of equations to obtain the turbulent kinetic energy after the linearized model is investigated with promising results. This article is part of the themed issue 'Wind energy in complex terrains'.

  19. High-resolution vertical profiles of groundwater electrical conductivity (EC) and chloride from direct-push EC logs

    NASA Astrophysics Data System (ADS)

    Bourke, Sarah A.; Hermann, Kristian J.; Hendry, M. Jim

    2017-11-01

    Elevated groundwater salinity associated with produced water, leaching from landfills or secondary salinity can degrade arable soils and potable water resources. Direct-push electrical conductivity (EC) profiling enables rapid, relatively inexpensive, high-resolution in-situ measurements of subsurface salinity, without requiring core collection or installation of groundwater wells. However, because the direct-push tool measures the bulk EC of both solid and liquid phases (ECa), incorporation of ECa data into regional or historical groundwater data sets requires the prediction of pore water EC (ECw) or chloride (Cl-) concentrations from measured ECa. Statistical linear regression and physically based models for predicting ECw and Cl- from ECa profiles were tested on a brine plume in central Saskatchewan, Canada. A linear relationship between ECa/ECw and porosity was more accurate for predicting ECw and Cl- concentrations than a power-law relationship (Archie's Law). Despite clay contents of up to 96%, the addition of terms to account for electrical conductance in the solid phase did not improve model predictions. In the absence of porosity data, statistical linear regression models adequately predicted ECw and Cl- concentrations from direct-push ECa profiles (ECw = 5.48 ECa + 0.78, R² = 0.87; Cl- = 1,978 ECa - 1,398, R² = 0.73). These statistical models can be used to predict ECw in the absence of lithologic data and will be particularly useful for initial site assessments. The more accurate linear physically based model can be used to predict ECw and Cl- as porosity data become available and the site-specific ECw-Cl- relationship is determined.
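    The statistical regression models quoted above can be applied directly; the coefficients come from the abstract (ECw = 5.48·ECa + 0.78; Cl− = 1978·ECa − 1398), while the input ECa value below is illustrative.

    ```python
    # The abstract's fitted regression equations, applied as-is. Only the
    # example ECa reading is a made-up input.

    def predict_ecw(eca):
        """Pore-water EC predicted from bulk (direct-push) EC; R^2 = 0.87."""
        return 5.48 * eca + 0.78

    def predict_chloride(eca):
        """Chloride concentration predicted from bulk EC; R^2 = 0.73."""
        return 1978.0 * eca - 1398.0

    eca = 2.0                      # hypothetical bulk EC reading
    assert predict_ecw(eca) == 5.48 * 2.0 + 0.78
    assert predict_chloride(eca) == 1978.0 * 2.0 - 1398.0
    ```

    As the abstract notes, these purely statistical predictions are a fallback for when porosity data (needed by the more accurate physically based model) are unavailable.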

  20. When three traits make a line: evolution of phenotypic plasticity and genetic assimilation through linear reaction norms in stochastic environments.

    PubMed

    Ergon, T; Ergon, R

    2017-03-01

    Genetic assimilation emerges from selection on phenotypic plasticity. Yet, commonly used quantitative genetics models of linear reaction norms considering intercept and slope as traits do not mimic the full process of genetic assimilation. We argue that intercept-slope reaction norm models are insufficient representations of genetic effects on linear reaction norms and that considering reaction norm intercept as a trait is unfortunate because the definition of this trait relates to a specific environmental value (zero) and confounds genetic effects on reaction norm elevation with genetic effects on environmental perception. Instead, we suggest a model with three traits representing genetic effects that, respectively, (i) are independent of the environment, (ii) alter the sensitivity of the phenotype to the environment and (iii) determine how the organism perceives the environment. The model predicts that, given sufficient additive genetic variation in environmental perception, the environmental value at which reaction norms tend to cross will respond rapidly to selection after an abrupt environmental change, and eventually becomes equal to the new mean environment. This readjustment of the zone of canalization becomes completed without changes in genetic correlations, genetic drift or imposing any fitness costs of maintaining plasticity. The asymptotic evolutionary outcome of this three-trait linear reaction norm generally entails a lower degree of phenotypic plasticity than the two-trait model, and maximum expected fitness does not occur at the mean trait values in the population. © 2016 The Authors. Journal of Evolutionary Biology published by John Wiley & Sons Ltd on behalf of European Society for Evolutionary Biology.

  1. Geometry-based ensembles: toward a structural characterization of the classification boundary.

    PubMed

    Pujol, Oriol; Masip, David

    2009-06-01

    This paper introduces a novel binary discriminative learning technique based on the approximation of the nonlinear decision boundary by a piecewise linear smooth additive model. The decision border is geometrically defined by means of the characterizing boundary points-points that belong to the optimal boundary under a certain notion of robustness. Based on these points, a set of locally robust linear classifiers is defined and assembled by means of a Tikhonov regularized optimization procedure in an additive model to create a final lambda-smooth decision rule. As a result, a very simple and robust classifier with a strong geometrical meaning and nonlinear behavior is obtained. The simplicity of the method allows its extension to cope with some of today's machine learning challenges, such as online learning, large-scale learning or parallelization, with linear computational complexity. We validate our approach on the UCI database, comparing with several state-of-the-art classification techniques. Finally, we apply our technique in online and large-scale scenarios and in six real-life computer vision and pattern recognition problems: gender recognition based on face images, intravascular ultrasound tissue classification, speed traffic sign detection, Chagas' disease myocardial damage severity detection, old musical scores clef classification, and action recognition using 3D accelerometer data from a wearable device. The results are promising and this paper opens a line of research that deserves further attention.

  2. Numerical Technology for Large-Scale Computational Electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharpe, R; Champagne, N; White, D

    The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special purpose pre-conditioners were investigated. Special-purpose pre-conditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.
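    The "iterative method plus pre-conditioner" combination mentioned above can be sketched in its simplest instance: diagonally pre-conditioned (Jacobi) iteration on a small complex-valued, diagonally dominant system. This is only a toy stand-in; the actual LLNL solvers and pre-conditioners are far more sophisticated.

    ```python
    # Toy sketch of a pre-conditioned iterative solve: Jacobi iteration
    # x <- x + D^{-1} (b - A x) on a synthetic complex, diagonally
    # dominant matrix (dominance guarantees convergence here).
    import numpy as np

    A = np.array([[4.0 + 1j, 1.0, 0.5],
                  [1.0, 5.0 - 1j, 1.0],
                  [0.5, 1.0, 6.0]], dtype=complex)
    b = np.array([1.0, 2.0 + 1j, -1.0], dtype=complex)

    D_inv = 1.0 / np.diag(A)               # the diagonal pre-conditioner
    x = np.zeros(3, dtype=complex)
    for _ in range(200):
        x = x + D_inv * (b - A @ x)        # pre-conditioned residual update

    assert np.allclose(A @ x, b)
    ```

    For the indefinite, unstructured systems described in the abstract, a Krylov method (e.g. GMRES) with a structure-aware pre-conditioner would replace this fixed-point iteration, but the division of labor (iterate on a pre-conditioned residual) is the same.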

  3. Monotonic entropy growth for a nonlinear model of random exchanges.

    PubMed

    Apenko, S M

    2013-02-01

    We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific "coarse graining" of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.

  4. PMICALC: an R code-based software for estimating post-mortem interval (PMI) compatible with Windows, Mac and Linux operating systems.

    PubMed

    Muñoz-Barús, José I; Rodríguez-Calvo, María Sol; Suárez-Peñaranda, José M; Vieira, Duarte N; Cadarso-Suárez, Carmen; Febrero-Bande, Manuel

    2010-01-30

    In legal medicine the correct determination of the time of death is of utmost importance. Recent advances in estimating post-mortem interval (PMI) have made use of vitreous humour chemistry in conjunction with Linear Regression, but the results are questionable. In this paper we present PMICALC, an R code-based freeware package which estimates PMI in cadavers of recent death by measuring the concentrations of potassium ([K+]), hypoxanthine ([Hx]) and urea ([U]) in the vitreous humor using two different regression models: Additive Models (AM) and Support Vector Machine (SVM), which offer more flexibility than the previously used Linear Regression. The results from both models are better than those published to date and can give numerical expression of PMI with confidence intervals and graphic support within 20 min. The program also takes into account the cause of death. 2009 Elsevier Ireland Ltd. All rights reserved.

  5. Linear mixing model applied to coarse resolution satellite data

    NASA Technical Reports Server (NTRS)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well known NDVI images is presented. The results show the great potential of applying unmixing techniques to coarse resolution data for global studies.
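The constrained least squares unmixing step can be illustrated in a few lines. The three-band "endmember" reflectances below are invented for the sketch and are not the AVHRR values from the study; the sum-to-one constraint is enforced with a heavily weighted extra equation:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember reflectances: 3 bands x 3 components
# (vegetation, soil, shade); values are illustrative only.
E = np.array([[0.05, 0.25, 0.02],
              [0.45, 0.30, 0.03],
              [0.10, 0.20, 0.01]])
true_f = np.array([0.6, 0.3, 0.1])
pixel = E @ true_f

# Constrained least squares: nonnegative fractions summing to one,
# via an appended, heavily weighted sum-to-one row.
w = 100.0
E_aug = np.vstack([E, w * np.ones(3)])
p_aug = np.append(pixel, w)
f, _ = nnls(E_aug, p_aug)
print(f.round(3))  # recovers approximately [0.6, 0.3, 0.1]
```

Applied per pixel, the fraction vectors form the "fraction images" the abstract compares against NDVI.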

  6. Monotonic entropy growth for a nonlinear model of random exchanges

    NASA Astrophysics Data System (ADS)

    Apenko, S. M.

    2013-02-01

    We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific “coarse graining” of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.

  7. Mixture models in diagnostic meta-analyses--clustering summary receiver operating characteristic curves accounted for heterogeneity and correlation.

    PubMed

    Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario

    2015-01-01

    Bivariate linear and generalized linear random-effects models are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. A bivariate linear random-effects model and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but mainly two different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities are quite different, such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to model between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Infrared weak corrections to strongly interacting gauge boson scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciafaloni, Paolo; Urbano, Alfredo

    2010-04-15

    We evaluate the impact of electroweak corrections of infrared origin on strongly interacting longitudinal gauge boson scattering, calculating all-order resummed expressions at the double log level. As a working example, we consider the standard model with a heavy Higgs. At energies typical of forthcoming experiments (LHC, International Linear Collider, Compact Linear Collider), the corrections are in the 10%-40% range, with the relative sign depending on the initial state considered and on whether or not additional gauge boson emission is included. We conclude that the effect of radiative electroweak corrections should be included in the analysis of longitudinal gauge boson scattering.

  9. Estimating the strength of bone using linear response

    NASA Astrophysics Data System (ADS)

    Gunaratne, Gemunu H.

    2002-12-01

    Accurate diagnostic tools are required for effective management of osteoporosis; one method to identify additional diagnostics is to search for observable consequences of bone loss. An analysis of a model system is used to show that weakening of a bone is accompanied by a reduction of the fraction of the bone that participates in load transmission. On the basis of this observation, it is argued that the ratio Γ of linear responses of a network to dc and high-frequency ac driving can be used as a surrogate for their strength. Protocols needed to obtain Γ for bone samples are discussed.

  10. An Ada Linear-Algebra Software Package Modeled After HAL/S

    NASA Technical Reports Server (NTRS)

    Klumpp, Allan R.; Lawson, Charles L.

    1990-01-01

    New avionics software written more easily. Software package extends Ada programming language to include linear-algebra capabilities similar to those of HAL/S programming language. Designed for such avionics applications as Space Station flight software. In addition to built-in functions of HAL/S, package incorporates quaternion functions used in Space Shuttle and Galileo projects and routines from LINPACK solving systems of equations involving general square matrices. Contains two generic programs: one for floating-point computations and one for integer computations. Written on IBM/AT personal computer running under PC DOS, v.3.1.

  11. High Speed Civil Transport Aircraft Simulation: Reference-H Cycle 1, MATLAB Implementation

    NASA Technical Reports Server (NTRS)

    Sotack, Robert A.; Chowdhry, Rajiv S.; Buttrill, Carey S.

    1999-01-01

    The mathematical model and associated code to simulate a high speed civil transport aircraft - the Boeing Reference H configuration - are described. The simulation was constructed in support of advanced control law research. In addition to providing time histories of the dynamic response, the code includes the capabilities for calculating trim solutions and for generating linear models. The simulation relies on the nonlinear, six-degree-of-freedom equations which govern the motion of a rigid aircraft in atmospheric flight. The 1962 Standard Atmosphere Tables are used along with a turbulence model to simulate the Earth atmosphere. The aircraft model has three parts - an aerodynamic model, an engine model, and a mass model. These models use the data from the Boeing Reference H cycle 1 simulation data base. Models for the actuator dynamics, landing gear, and flight control system are not included in this aircraft model. Dynamic responses generated by the nonlinear simulation are presented and compared with results generated from alternate simulations at Boeing Commercial Aircraft Company and NASA Langley Research Center. Also, dynamic responses generated using linear models are presented and compared with dynamic responses generated using the nonlinear simulation.

  12. A dielectric model of self-assembled monolayer interfaces by capacitive spectroscopy.

    PubMed

    Góes, Márcio S; Rahman, Habibur; Ryall, Joshua; Davis, Jason J; Bueno, Paulo R

    2012-06-26

    The presence of self-assembled monolayers at an electrode introduces capacitance and resistance contributions that can profoundly affect subsequently observed electronic characteristics. Despite the impact of this on any voltammetry, these contributions are not directly resolvable with any clarity by standard electrochemical means. A capacitive analysis of such interfaces (by capacitance spectroscopy), introduced here, enables a clean mapping of these features and additionally presents a means of studying layer polarizability and Cole-Cole relaxation effects. The resolved resistive term contributes directly to an intrinsic monolayer uncompensated resistance that has a linear dependence on the layer thickness. The dielectric model proposed is fully aligned with the classic Helmholtz plate capacitor model and additionally explains the inherently associated resistive features of molecular films.

  13. Threshold models for genome-enabled prediction of ordinal categorical traits in plant breeding.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo; Eskridge, Kent; Crossa, José

    2014-12-23

    Categorical scores for disease susceptibility or resistance often are recorded in plant breeding. The aim of this study was to introduce genomic models for analyzing ordinal characters and to assess the predictive ability of genomic predictions for ordered categorical phenotypes using a threshold model counterpart of the Genomic Best Linear Unbiased Predictor (i.e., TGBLUP). The threshold model was used to relate a hypothetical underlying scale to the outward categorical response. We present an empirical application where a total of nine models, five without interaction and four with genomic × environment interaction (G×E) and genomic additive × additive × environment interaction (G×G×E), were used. We assessed the proposed models using data consisting of 278 maize lines genotyped with 46,347 single-nucleotide polymorphisms and evaluated for disease resistance [with ordinal scores from 1 (no disease) to 5 (complete infection)] in three environments (Colombia, Zimbabwe, and Mexico). Models with G×E captured a sizeable proportion of the total variability, which indicates the importance of introducing interaction to improve prediction accuracy. Relative to models based on main effects only, the models that included G×E achieved 9-14% gains in prediction accuracy; adding additive × additive interactions did not increase prediction accuracy consistently across locations. Copyright © 2015 Montesinos-López et al.
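A minimal sketch of the threshold-model link used by TGBLUP-style ordinal models: an underlying Gaussian liability is cut by ordered thresholds into the observed categories. The threshold values below are hypothetical, not the fitted parameters from the maize data:

```python
import numpy as np
from scipy.stats import norm

# Four illustrative thresholds cut the latent scale into 5 ordered
# disease-score categories (1 = no disease ... 5 = complete infection).
tau = np.array([-1.5, -0.5, 0.5, 1.5])

def category_probs(eta):
    # P(y = k) = Phi(tau_k - eta) - Phi(tau_{k-1} - eta), with tau_0 = -inf
    bounds = np.concatenate(([-np.inf], tau, [np.inf]))
    return np.diff(norm.cdf(bounds - eta))

p = category_probs(eta=0.3)
print(p.round(3))  # five probabilities summing to one
```

A larger genomic value eta shifts probability mass toward the higher (more diseased) categories, which is how the linear predictor maps to the ordinal phenotype.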

  14. Trajectories of Family Management Practices and Early Adolescent Behavioral Outcomes

    ERIC Educational Resources Information Center

    Wang, Ming-Te; Dishion, Thomas J.; Stormshak, Elizabeth A.; Willett, John B.

    2011-01-01

    Stage-environment fit theory was used to examine the reciprocal lagged relations between family management practices and early adolescent problem behavior during the middle school years. In addition, the potential moderating roles of family structure and of gender were explored. Hierarchical linear modeling was used to describe patterns of growth…

  15. Testing the Intervention Effect in Single-Case Experiments: A Monte Carlo Simulation Study

    ERIC Educational Resources Information Center

    Heyvaert, Mieke; Moeyaert, Mariola; Verkempynck, Paul; Van den Noortgate, Wim; Vervloet, Marlies; Ugille, Maaike; Onghena, Patrick

    2017-01-01

    This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test "p" values (RTcombiP). Four factors were manipulated: mean intervention effect,…

  16. Nonparametric regression applied to quantitative structure-activity relationships

    PubMed

    Constans; Hirst

    2000-03-01

    Several nonparametric regressors have been applied to modeling quantitative structure-activity relationship (QSAR) data. The simplest regressor, the Nadaraya-Watson, was assessed in a genuine multivariate setting. Other regressors, the local linear and the shifted Nadaraya-Watson, were implemented within additive models--a computationally more expedient approach, better suited for low-density designs. Performances were benchmarked against the nonlinear method of smoothing splines. A linear reference point was provided by multilinear regression (MLR). Variable selection was explored using systematic combinations of different variables and combinations of principal components. For the data set examined, 47 inhibitors of dopamine beta-hydroxylase, the additive nonparametric regressors have greater predictive accuracy (as measured by the mean absolute error of the predictions or the Pearson correlation in cross-validation trials) than MLR. The use of principal components did not improve the performance of the nonparametric regressors over use of the original descriptors, since the original descriptors are not strongly correlated. It remains to be seen if the nonparametric regressors can be successfully coupled with better variable selection and dimensionality reduction in the context of high-dimensional QSARs.
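The simplest regressor named above, the Nadaraya-Watson estimator, can be written in a few lines of NumPy. This is a generic univariate sketch with synthetic data, not the authors' QSAR implementation:

```python
import numpy as np

# Nadaraya-Watson kernel regressor with a Gaussian kernel: the prediction
# is a kernel-weighted average of the training responses.
def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.3):
    d = x_query[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(42)
x = np.sort(rng.uniform(0, np.pi, 200))
y = np.sin(x) + 0.1 * rng.standard_normal(200)

xq = np.array([0.5, 1.5, 2.5])
yq = nadaraya_watson(x, y, xq)
print(np.abs(yq - np.sin(xq)).max())  # small: close to the true curve
```

The bandwidth plays the role of the smoothing parameter that the local linear and shifted variants refine near boundaries and in sparse regions.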

  17. Uncertainty analysis and robust trajectory linearization control of a flexible air-breathing hypersonic vehicle

    NASA Astrophysics Data System (ADS)

    Pu, Zhiqiang; Tan, Xiangmin; Fan, Guoliang; Yi, Jianqiang

    2014-08-01

    Flexible air-breathing hypersonic vehicles feature significant uncertainties which pose huge challenges to robust controller designs. In this paper, four major categories of uncertainties are analyzed, that is, uncertainties associated with flexible effects, aerodynamic parameter variations, external environmental disturbances, and control-oriented modeling errors. A uniform nonlinear uncertainty model is explored for the first three uncertainties which lumps all uncertainties together and consequently is beneficial for controller synthesis. The fourth uncertainty is additionally considered in stability analysis. Based on these analyses, the starting point of the control design is to decompose the vehicle dynamics into five functional subsystems. Then a robust trajectory linearization control (TLC) scheme consisting of five robust subsystem controllers is proposed. In each subsystem controller, TLC is combined with the extended state observer (ESO) technique for uncertainty compensation. The stability of the overall closed-loop system with the four aforementioned uncertainties and additional singular perturbations is analyzed. Particularly, the stability of nonlinear ESO is also discussed from a Liénard system perspective. At last, simulations demonstrate the great control performance and the uncertainty rejection ability of the robust scheme.

  18. Oligopolies with contingent workforce and unemployment insurance systems

    NASA Astrophysics Data System (ADS)

    Matsumoto, Akio; Merlone, Ugo; Szidarovszky, Ferenc

    2015-10-01

    In the recent literature the introduction of modified cost functions has added reality into the classical oligopoly analysis. Furthermore, the market evolution requires much more flexibility to firms, and in several countries contingent workforce plays an important role in the production choices by the firms. Therefore, an analysis of dynamic adjustment costs is in order to understand oligopoly dynamics. In this paper, dynamic single-product oligopolies without product differentiation are first examined with the additional consideration of production adjustment costs. Linear inverse demand and cost functions are considered and it is assumed that the firms adjust their outputs partially toward best response. The set of the steady states is characterized by a system of linear inequalities and there are usually infinitely many steady states. The asymptotic behavior of the output trajectories is examined by using computer simulation. The numerical results indicate that the resulting dynamics is richer than in the case of the classical Cournot model. This model and results are then compared to oligopolies with unemployment insurance systems when the additional cost is considered if firms do not use their maximum capacities.
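The partial adjustment of outputs toward best response can be simulated directly. The demand and cost parameters below are arbitrary, and the paper's production adjustment costs and contingent-workforce features are omitted, so this is only the classical baseline:

```python
import numpy as np

# n-firm Cournot oligopoly with linear inverse demand p = a - b*Q and
# constant marginal costs; firms move a fraction lam toward best response.
a, b = 100.0, 1.0
c = np.array([10.0, 10.0, 10.0])   # identical marginal costs (illustrative)
q = np.zeros(len(c))
lam = 0.4                          # partial adjustment speed

for _ in range(500):
    Q_others = q.sum() - q
    best_response = np.maximum(0.0, (a - c - b * Q_others) / (2 * b))
    q = q + lam * (best_response - q)

# Symmetric Cournot equilibrium: q* = (a - c) / (b * (n + 1)) = 22.5
print(q.round(3))
```

With adjustment costs added, the fixed point is no longer unique, which is how the paper arrives at a continuum of steady states characterized by linear inequalities.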

  19. Breakfast intake among adults with type 2 diabetes: is bigger better?

    PubMed Central

    Jarvandi, Soghra; Schootman, Mario; Racette, Susan B.

    2015-01-01

    Objective To assess the association between breakfast energy and total daily energy intake among individuals with type 2 diabetes. Design Cross-sectional study. Daily energy intake was computed from a 24-h dietary recall. Multiple regression models were used to estimate the association between daily energy intake (dependent variable) and quartiles of energy intake at breakfast (independent variable) expressed as either absolute or relative (% of total daily energy intake) terms. Orthogonal polynomial contrasts were used to test for linear and quadratic trends. Models were controlled for sex, age, race/ethnicity, body mass index, physical activity and smoking. In addition, we used separate multiple regression models to test the effect of quartiles of absolute and relative breakfast energy on intake at lunch, dinner, and snacks. Setting The 1999–2004 National Health and Nutrition Examination Survey (NHANES). Subjects Participants aged ≥ 30 years with self-reported history of diabetes (N = 1,146). Results Daily energy intake increased as absolute breakfast energy intake increased (linear trend, P < 0.0001; quadratic trend, P = 0.02), but decreased as relative breakfast energy intake increased (linear trend, P < 0.0001). In addition, while higher quartiles of absolute breakfast intake had no associations with energy intake at subsequent meals, higher quartiles of relative breakfast intake were associated with lower energy intake during all subsequent meals and snacks (P < 0.05). Conclusions Consuming a breakfast that provided less energy or comprised a greater proportion of daily energy intake was associated with lower total daily energy intake in adults with type 2 diabetes. PMID:25529061

  20. Random regression models using different functions to model milk flow in dairy cows.

    PubMed

    Laureano, M M M; Bignardi, A B; El Faro, L; Cardoso, V L; Tonhati, H; Albuquerque, L G

    2014-09-12

    We analyzed 75,555 test-day milk flow records from 2175 primiparous Holstein cows that calved between 1997 and 2005. Milk flow was obtained by dividing the mean milk yield (kg) of the 3 daily milkings by the total milking time (min) and was expressed as kg/min. Milk flow was grouped into 43 weekly classes. The analyses were performed using a single-trait random regression model that included direct additive genetic, permanent environmental, and residual random effects. In addition, the contemporary group and linear and quadratic effects of cow age at calving were included as fixed effects. Fourth-order orthogonal Legendre polynomial of days in milk was used to model the mean trend in milk flow. The additive genetic and permanent environmental covariance functions were estimated using random regression Legendre polynomials and B-spline functions of days in milk. The model using a third-order Legendre polynomial for additive genetic effects and a sixth-order polynomial for permanent environmental effects, which contained 7 residual classes, proved to be the most adequate to describe variations in milk flow, and was also the most parsimonious. The heritability in milk flow estimated by the most parsimonious model was of moderate to high magnitude.
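Building the orthogonal Legendre covariates for such a random regression model can be sketched as follows; the lactation range and polynomial order are illustrative choices, not taken from the study:

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre design matrix for random regression: days in milk (DIM) are
# mapped to [-1, 1], then evaluated in the orthogonal Legendre basis.
def legendre_design(dim, dim_min=1, dim_max=301, order=3):
    t = 2.0 * (dim - dim_min) / (dim_max - dim_min) - 1.0
    # legvander returns columns P0(t) .. P_order(t)
    return legendre.legvander(t, order)

X = legendre_design(np.array([1, 75, 151, 226, 301]))
print(X.shape)   # (5, 4): intercept, linear, quadratic, cubic terms
print(X[:, 0])   # P0 column is all ones
```

Each animal's additive genetic effect is then a random coefficient vector on these columns, giving a smooth genetic trajectory over the lactation.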

  1. Use of probabilistic weights to enhance linear regression myoelectric control

    NASA Astrophysics Data System (ADS)

    Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.

    2015-12-01

    Objective. Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Approach. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts’ law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Main results. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) by preventing extraneous movement at additional DOFs. Similar results were seen in experiments with two transradial amputees. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Significance. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
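The probability-weighting idea can be sketched for a single DOF: class-conditional Gaussians (with the equal-covariance assumption mentioned in the abstract) gate a linear regression output. All features, class means, and regression weights below are synthetic:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical 2-D EMG feature means for three movement classes of one DOF.
means = {"rest": np.zeros(2),
         "flex": np.array([2.0, 0.0]),
         "extend": np.array([0.0, 2.0])}
cov = np.eye(2)  # shared covariance (equal-covariance assumption)

def weighted_output(feat, w_reg):
    # posterior-style class weights from the Gaussian likelihoods
    pdfs = {k: multivariate_normal.pdf(feat, m, cov) for k, m in means.items()}
    p_move = (pdfs["flex"] + pdfs["extend"]) / sum(pdfs.values())
    # regression output scaled by the probability of intended movement
    return p_move * (w_reg @ feat)

w_reg = np.array([0.5, -0.5])
rest_out = weighted_output(np.array([0.1, 0.05]), w_reg)
move_out = weighted_output(np.array([2.1, 0.1]), w_reg)
print(abs(rest_out), abs(move_out))  # near zero at rest, larger when moving
```

Suppressing the regression output when P(movement) is low is what prevents the extraneous motion at unintended DOFs reported in the results.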

  2. Health effects model for nuclear power plant accident consequence analysis. Part I. Introduction, integration, and summary. Part II. Scientific basis for health effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, J.S.; Moeller, D.W.; Cooper, D.W.

    1985-07-01

    Analysis of the radiological health effects of nuclear power plant accidents requires models for predicting early health effects, cancers and benign thyroid nodules, and genetic effects. Since the publication of the Reactor Safety Study, additional information on radiological health effects has become available. This report summarizes the efforts of a program designed to provide revised health effects models for nuclear power plant accident consequence modeling. The new models for early effects address four causes of mortality and nine categories of morbidity. The models for early effects are based upon two parameter Weibull functions. They permit evaluation of the influence of dose protraction and address the issue of variation in radiosensitivity among the population. The piecewise-linear dose-response models used in the Reactor Safety Study to predict cancers and thyroid nodules have been replaced by linear and linear-quadratic models. The new models reflect the most recently reported results of the follow-up of the survivors of the bombings of Hiroshima and Nagasaki and permit analysis of both morbidity and mortality. The new models for genetic effects allow prediction of genetic risks in each of the first five generations after an accident and include information on the relative severity of various classes of genetic effects. The uncertainty in modeling radiological health risks is addressed by providing central, upper, and lower estimates of risks. An approach is outlined for summarizing the health consequences of nuclear power plant accidents. 298 refs., 9 figs., 49 tabs.
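A two-parameter Weibull dose-response function of the kind used for the early-effects models can be sketched as follows. The D50-style parameterization, and the D50 and shape values themselves, are placeholders for illustration, not the report's fitted parameters:

```python
import numpy as np

# Two-parameter Weibull dose-response: risk(D) = 1 - 2 ** (-(D / D50) ** V),
# so risk(D50) = 0.5 by construction; V controls the steepness.
def weibull_risk(dose, d50, shape):
    return 1.0 - 2.0 ** (-(np.asarray(dose, float) / d50) ** shape)

d50, shape = 4.5, 6.0  # hypothetical parameters (Gy), illustrative only
doses = np.array([2.0, 4.5, 7.0])
print(weibull_risk(doses, d50, shape).round(3))
```

Dose protraction enters such models by modifying the effective D50, which is why the Weibull form is convenient for accident consequence codes.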

  3. Assessment of the non-Gaussianity and non-linearity levels of simulated sEMG signals on stationary segments.

    PubMed

    Messaoudi, Noureddine; Bekka, Raïs El'hadi; Ravier, Philippe; Harba, Rachid

    2017-02-01

    The purpose of this paper was to evaluate the effects of the longitudinal single differential (LSD), the longitudinal double differential (LDD) and the normal double differential (NDD) spatial filters, the electrode shape, and the inter-electrode distance (IED) on non-Gaussianity and non-linearity levels of simulated surface EMG (sEMG) signals when the maximum voluntary contraction (MVC) varied from 10% to 100% by a step of 10%. The effects of recruitment range thresholds (RR), the firing rate (FR) strategy and the peak firing rate (PFR) of motor units were also considered. A cylindrical multilayer model of the volume conductor and a model of motor unit (MU) recruitment and firing rate were used to simulate sEMG signals in a pool of 120 MUs for 5 s. Firstly, the stationarity of sEMG signals was tested by the runs, the reverse arrangements (RA) and the modified reverse arrangements (MRA) tests. Then the non-Gaussianity was characterised with bicoherence and kurtosis, and the non-linearity level was evaluated with a linearity test. The kurtosis analysis showed that the sEMG signals detected by the LSD filter were the most Gaussian and those detected by the NDD filter were the least Gaussian. In addition, the sEMG signals detected by the LSD filter were the most linear. For a given filter, the sEMG signals detected by using rectangular electrodes were more Gaussian and more linear than those detected with circular electrodes. Moreover, the sEMG signals are less non-Gaussian and more linear with the reverse onion-skin firing rate strategy than those with the onion-skin strategy. The levels of sEMG signal Gaussianity and linearity increased with the increase of the IED, RR and PFR. Copyright © 2016 Elsevier Ltd. All rights reserved.
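Kurtosis as a non-Gaussianity index can be illustrated with synthetic signals, a Laplacian signal standing in for a super-Gaussian sEMG recording. This is a generic sketch, not the volume-conductor simulation used in the paper:

```python
import numpy as np
from scipy.stats import kurtosis

# Excess kurtosis (Fisher definition) is ~0 for a Gaussian signal and
# positive for a peakier, super-Gaussian one.
rng = np.random.default_rng(7)
n = 200_000
gaussian_sig = rng.standard_normal(n)
laplace_sig = rng.laplace(size=n)   # super-Gaussian surrogate signal

k_g = kurtosis(gaussian_sig)        # near 0
k_l = kurtosis(laplace_sig)         # near 3 for a Laplace distribution
print(round(k_g, 2), round(k_l, 2))
```

In the paper's terms, a filter or electrode configuration that raises the kurtosis of the detected signal is making it less Gaussian.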

  4. Image interpolation via regularized local linear regression.

    PubMed

    Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang

    2011-12-01

    The linear regression model is a very attractive tool to design effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, it is shown that interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, we replace the OLS error norm with the moving least squares (MLS) error norm, which leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l(2)-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness preserving constraint. The optimal model parameters can be obtained with a closed-form solution by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with the state-of-the-art interpolation algorithms, especially in image edge structure preservation. © 2011 IEEE
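The core of the estimator, local linear regression with an l2 (ridge) complexity penalty, can be sketched without the MLS norm and manifold terms. The neighborhood data below are synthetic, so this is a simplified illustration of RLLR's closed-form solve, not the full algorithm:

```python
import numpy as np

# Fit y ~ [1, dx, dy] on a pixel's neighbors with an l2 penalty; the
# intercept is the model's value at the center offset (0, 0).
def rllr_predict(offsets, values, lam=1e-2):
    X = np.column_stack([np.ones(len(offsets)), offsets])
    beta = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ values)
    return beta[0]

# 8-neighborhood of a pixel lying on the plane z = 3 + 2x - y, plus noise.
offs = np.array([(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0)], float)
rng = np.random.default_rng(3)
vals = 3 + 2 * offs[:, 0] - offs[:, 1] + 0.01 * rng.standard_normal(8)
pred = rllr_predict(offs, vals)
print(pred)  # close to the true center value 3
```

The ridge term keeps the 3x3 normal-equation solve well conditioned even when the neighborhood is nearly flat, which is the stability property the abstract highlights.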

  5. Designing a mixture experiment when the components are subject to a nonlinear multiple-component constraint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Greg F.; Cooley, Scott K.; Vienna, John D.

    This article presents a case study of developing an experimental design for a constrained mixture experiment when the experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this article. The case study involves a 15-component nuclear waste glass example in which SO3 is one of the components. SO3 has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture (PQM) model expressed in the relative proportions of the 14 other components. The PQM model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This article discusses the waste glass example and how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study.

  6. Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Chang, K. C.

    2005-05-01

    Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only the approximate methods such as stochastic sampling could be used to provide a solution given any time constraint. There are several simulation methods currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks); the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency); the Markov blanket scoring method; and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then we propose an improved importance sampling algorithm called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution for a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from the previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
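The flavor of importance sampling in a linear-Gaussian setting can be shown on a one-variable toy model with an analytic posterior. This is not the LGIS algorithm itself, which learns its importance function adaptively; here the prior serves as a fixed proposal:

```python
import numpy as np

# Toy model: x ~ N(0, 1), observation y = x + noise with noise std sigma.
# The linear-Gaussian structure gives an exact posterior to compare against.
rng = np.random.default_rng(0)
y_obs, sigma = 1.0, 0.5

# Proposal = prior N(0, 1); importance weights = likelihood N(y | x, sigma^2).
xs = rng.standard_normal(100_000)
w = np.exp(-0.5 * ((y_obs - xs) / sigma) ** 2)
post_mean = np.sum(w * xs) / np.sum(w)

# Exact posterior mean: y * prior_var / (prior_var + sigma^2) = 0.8
print(round(post_mean, 3))
```

A proposal closer to the true posterior (what an adaptive importance function aims for) would concentrate the weights and reduce the estimator's variance.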

  7. Modeling Dengue vector population using remotely sensed data and machine learning.

    PubMed

    Scavuzzo, Juan M; Trucco, Francisco; Espinosa, Manuel; Tauro, Carolina B; Abril, Marcelo; Scavuzzo, Carlos M; Frery, Alejandro C

    2018-05-16

    Mosquitoes are vectors of many human diseases. In particular, Aedes ægypti (Linnaeus) is the main vector for Chikungunya, Dengue, and Zika viruses in Latin America and it represents a global threat. Public health policies that aim at combating this vector require dependable and timely information, which is usually expensive to obtain with field campaigns. For this reason, several efforts have been made to use remote sensing due to its reduced cost. The present work includes the temporal modeling of the oviposition activity (measured weekly on 50 ovitraps in a north Argentinean city) of Aedes ægypti (Linnaeus), based on time series of data extracted from operational earth observation satellite images. We use NDVI, NDWI, LST night, LST day and TRMM-GPM rain from 2012 to 2016 as predictive variables. In contrast to previous works which use linear models, we employ Machine Learning techniques using completely accessible open source toolkits. These models have the advantages of being non-parametric and capable of describing nonlinear relationships between variables. Specifically, in addition to two linear approaches, we assess a support vector machine, an artificial neural network, a K-nearest neighbors regressor and a decision tree regressor. Considerations are made on parameter tuning and the validation and training approach. The results are compared to linear models used in previous works with similar data sets for generating temporal predictive models. These new tools perform better than linear approaches; in particular, K-nearest neighbor regression (KNNR) performs best. These results provide better alternatives to be implemented operationally on the Argentine geospatial risk system that has been running since 2012. Copyright © 2018 Elsevier B.V. All rights reserved.
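The k-nearest-neighbors regression that performed best in this comparison is simple enough to sketch from scratch: predict by averaging the targets of the k closest training points. The toy one-dimensional data below is a hypothetical stand-in for the satellite-derived predictors, not the study's data.

```python
def knn_regress(train_x, train_y, query, k=3):
    """Predict by averaging the targets of the k nearest training points
    (squared Euclidean distance)."""
    nearest = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_x, train_y)
    )[:k]
    return sum(y for _, y in nearest) / k

# Toy data: a nonlinear response to a single NDVI-like predictor.
train_x = [(0.1,), (0.2,), (0.3,), (0.4,), (0.5,), (0.6,)]
train_y = [x[0] ** 2 for x in train_x]
pred = knn_regress(train_x, train_y, (0.35,), k=2)
```

Being non-parametric, this estimator follows whatever local shape the data has, which is the property the abstract credits for its advantage over linear models.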

  8. SU-E-T-186: Cloud-Based Quality Assurance Application for Linear Accelerator Commissioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, J

    2015-06-15

    Purpose: To identify anomalies and safety issues during data collection and modeling for treatment planning systems. Methods: A cloud-based quality assurance system (AQUIRE - Automated QUalIty REassurance) has been developed to allow the uploading and analysis of beam data acquired during the treatment planning system commissioning process. In addition to comparing and aggregating measured data, tools have also been developed to extract dose from the treatment planning system for end-to-end testing. A gamma index is performed on the data to give a dose difference and distance-to-agreement for validation that a beam model is generating plans consistent with the beam data collection. Results: Over 20 linear accelerators have been commissioned using this platform, and a variety of errors and potential safety issues have been caught through the validation process. For example, the gamma index of 2% dose, 2mm DTA is quite sufficient to see curves not corrected for effective point of measurement. Also, data imported into the database is analyzed against an aggregate of similar linear accelerators to show data points that are outliers. The resulting curves in the database exhibit a very small standard deviation and imply that a preconfigured beam model based on aggregated linear accelerators will be sufficient in most cases. Conclusion: With the use of this new platform for beam data commissioning, errors in beam data collection and treatment planning system modeling are greatly reduced. With the reduction in errors during acquisition, the resulting beam models are quite similar, suggesting that a common beam model may be possible in the future. Development is ongoing to create routine quality assurance tools to compare back to the beam data acquired during commissioning. I am a medical physicist for Alzyen Medical Physics, and perform commissioning services.
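The gamma index combines a dose-difference criterion with a distance-to-agreement criterion; a point passes when the minimum combined metric over the reference curve is at most 1. Below is a minimal 1D sketch of that definition, not the AQUIRE implementation; the curves and tolerances are illustrative, with doses normalized so that 2% is an absolute 0.02.

```python
import math

def gamma_index_1d(positions, ref, meas, dose_tol=0.02, dist_tol=2.0):
    """Per-point 1D gamma index: for each measured point, take the minimum
    over the reference curve of the combined dose-difference (doses
    normalized to 1.0) and distance-to-agreement (mm) metric.
    A point passes the 2%/2mm criterion when its gamma is <= 1."""
    return [
        min(
            math.sqrt(((dm - dr) / dose_tol) ** 2 + ((xm - xr) / dist_tol) ** 2)
            for xr, dr in zip(positions, ref)
        )
        for xm, dm in zip(positions, meas)
    ]

# Toy depth-dose samples: measured differs from reference by 1% at one point.
gammas = gamma_index_1d([0.0, 2.0, 4.0], [1.0, 0.8, 0.5], [1.0, 0.81, 0.5])
```

A production implementation would interpolate the reference curve between sample points rather than evaluating only at measured positions.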

  9. A DGTD method for the numerical modeling of the interaction of light with nanometer scale metallic structures taking into account non-local dispersion effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmitt, Nikolai; Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder; Scheid, Claire

    2016-07-01

    The interaction of light with metallic nanostructures is increasingly attracting interest because of numerous potential applications. Sub-wavelength metallic structures, when illuminated with a frequency close to the plasma frequency of the metal, present resonances that cause extreme local field enhancements. Exploiting the latter in applications of interest requires a detailed knowledge about the occurring fields, which cannot be obtained analytically. For this reason, numerical tools are an absolute necessity. The insight they provide is very often the only way to get a deep enough understanding of the very rich physics at play. For the numerical modeling of light-structure interaction on the nanoscale, the choice of an appropriate material model is a crucial point. Approaches that are adopted in a first instance are based on local (i.e. with no interaction between electrons) dispersive models, e.g. Drude or Drude–Lorentz models. From the mathematical point of view, when a time-domain modeling is considered, these models lead to an additional system of ordinary differential equations coupled to Maxwell's equations. However, recent experiments have shown that the repulsive interaction between electrons inside the metal makes the response of metals intrinsically non-local and that this effect cannot generally be overlooked. Technological achievements have enabled the consideration of metallic structures in a regime where such non-localities have a significant influence on the structures' optical response. This leads to an additional, in general non-linear, system of partial differential equations which is, when coupled to Maxwell's equations, significantly more difficult to treat. Nevertheless, dealing with a linearized non-local dispersion model already opens the route to numerous practical applications of plasmonics.
In this work, we present a Discontinuous Galerkin Time-Domain (DGTD) method able to solve the system of Maxwell's equations coupled to a linearized non-local dispersion model relevant to plasmonics. While the method is presented in the general 3D case, numerical results are given for 2D simulation settings.

  10. LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL

    NASA Technical Reports Server (NTRS)

    Duke, E. L.

    1994-01-01

    The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case is input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. 
The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of interest, or a full non-linear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8 bit bytes. Both an interactive and batch version are included. LINEAR was developed in 1988.
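The linearization step that LINEAR performs can be illustrated numerically: perturb each state and control about the analysis point and build the state matrix A and control matrix B by central differences. This sketch is generic, not the FORTRAN program's algorithm, and the torque-driven pendulum is a hypothetical stand-in for an aircraft model.

```python
import math

def linearize(f, x0, u0, eps=1e-6):
    """Numerically linearize x_dot = f(x, u) about (x0, u0) using central
    differences, returning the state matrix A and control matrix B."""
    def column(vec0, other, is_state, j):
        plus, minus = list(vec0), list(vec0)
        plus[j] += eps
        minus[j] -= eps
        if is_state:
            fp, fm = f(plus, other), f(minus, other)
        else:
            fp, fm = f(other, plus), f(other, minus)
        return [(p - q) / (2.0 * eps) for p, q in zip(fp, fm)]

    a_cols = [column(x0, u0, True, j) for j in range(len(x0))]
    b_cols = [column(u0, x0, False, j) for j in range(len(u0))]
    A = [[a_cols[j][i] for j in range(len(x0))] for i in range(len(x0))]
    B = [[b_cols[j][i] for j in range(len(u0))] for i in range(len(x0))]
    return A, B

# Hypothetical stand-in dynamics: torque-driven pendulum, x = [angle, rate].
f = lambda x, u: [x[1], -9.81 * math.sin(x[0]) + u[0]]
A, B = linearize(f, [0.0, 0.0], [0.0])
```

At the hanging equilibrium this recovers the familiar small-angle matrices, A ≈ [[0, 1], [-g, 0]] and B ≈ [[0], [1]].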

  11. Coupled modelling of groundwater flow-heat transport for assessing river-aquifer interactions

    NASA Astrophysics Data System (ADS)

    Engeler, I.; Hendricks Franssen, H. J.; Müller, R.; Stauffer, F.

    2010-05-01

    A three-dimensional finite element model for coupled variably saturated groundwater flow and heat transport was developed for the aquifer below the city of Zurich. The piezometric heads in the aquifer are strongly influenced by the river Limmat. In the model region, the river Limmat loses water to the aquifer. The river-aquifer interaction was modelled with the standard linear leakage concept. Coupling was implemented by considering the temperature dependence of the hydraulic conductivity and of the leakage coefficient (via water viscosity) and density-dependent transport. Calibration was performed for isothermal conditions by inverse modelling using the pilot point method. Independent model testing was carried out with the help of the available dense monitoring network for piezometric heads and groundwater temperature. The model was tested by residuals analysis with the help of measurements for both groundwater temperature and head. The comparison of model results and measurements showed high accuracy for temperature except for the southern part of the model area, where important geological heterogeneity is expected, which could not be reproduced by the model. The comparison of simulated and measured head showed that especially in the vicinity of the river Limmat model results were improved by a temperature-dependent leakage coefficient. Residuals were reduced by up to 30% compared to isothermal leakage coefficients. This holds particularly for regions where the river stage is considerably above the groundwater level. Furthermore, additional analysis confirmed prior findings that seepage rates during flood events cannot be reproduced with the implemented linear leakage concept. Infiltration during flood events is larger than expected, which can potentially be attributed to additional infiltration areas.
It is concluded that the temperature-dependent leakage concept significantly improves the model results for this study area, and we expect that this is also the case for other areas.
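The temperature dependence of a leakage coefficient via water viscosity can be sketched as follows. The Vogel-type viscosity fit below is a common empirical approximation for water (not necessarily the relation used in this study), and the assumption is that leakage scales inversely with dynamic viscosity.

```python
def viscosity_water(t_celsius):
    """Dynamic viscosity of water [Pa*s] from an empirical Vogel-type fit,
    valid roughly over 0-100 C. Coefficients are a common textbook fit."""
    t_kelvin = t_celsius + 273.15
    return 2.414e-5 * 10.0 ** (247.8 / (t_kelvin - 140.0))

def leakage_scale(t_celsius, t_ref=10.0):
    """Factor by which a leakage coefficient calibrated at t_ref changes at
    temperature t_celsius, assuming leakage is proportional to 1/viscosity."""
    return viscosity_water(t_ref) / viscosity_water(t_celsius)
```

Warming river water from 10 C to 20 C lowers viscosity by roughly 30%, so under this assumption the effective leakage coefficient rises by a similar factor, which is consistent in direction with the head improvements reported above.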

  12. A finite nonlinear hyper-viscoelastic model for soft biological tissues.

    PubMed

    Panda, Satish Kumar; Buist, Martin Lindsay

    2018-03-01

    Soft tissues exhibit highly nonlinear rate- and time-dependent stress-strain behaviour. Strain and strain rate dependencies are often modelled using a hyperelastic model and a discrete (standard linear solid) or continuous spectrum (quasi-linear) viscoelastic model, respectively. However, these models are unable to properly capture the material characteristics because hyperelastic models are unsuited for time-dependent events, whereas the common viscoelastic models are insufficient for the nonlinear and finite-strain viscoelastic tissue responses. Convolution-integral-based models can demonstrate a finite viscoelastic response; however, their derivations are not consistent with the laws of thermodynamics. The aim of this work was to develop a three-dimensional finite hyper-viscoelastic model for soft tissues using a thermodynamically consistent approach. In addition, a nonlinear function, dependent on strain and strain rate, was adopted to capture the nonlinear variation of viscosity during a loading process. To demonstrate the efficacy and versatility of this approach, the model was used to recreate the experimental results performed on different types of soft tissues. In all the cases, the simulation results matched the experimental data well (R² ≥ 0.99). Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Environmental factors and flow paths related to Escherichia coli concentrations at two beaches on Lake St. Clair, Michigan, 2002–2005

    USGS Publications Warehouse

    Holtschlag, David J.; Shively, Dawn; Whitman, Richard L.; Haack, Sheridan K.; Fogarty, Lisa R.

    2008-01-01

    Regression analyses and hydrodynamic modeling were used to identify environmental factors and flow paths associated with Escherichia coli (E. coli) concentrations at Memorial and Metropolitan Beaches on Lake St. Clair in Macomb County, Mich. Lake St. Clair is part of the binational waterway between the United States and Canada that connects Lake Huron with Lake Erie in the Great Lakes Basin. Linear regression, regression-tree, and logistic regression models were developed from E. coli concentration and ancillary environmental data. Linear regression models on log10 E. coli concentrations indicated that rainfall prior to sampling, water temperature, and turbidity were positively associated with bacteria concentrations at both beaches. Flow from Clinton River, changes in water levels, wind conditions, and log10 E. coli concentrations 2 days before or after the target bacteria concentrations were statistically significant at one or both beaches. In addition, various interaction terms were significant at Memorial Beach. Linear regression models for both beaches explained only about 30 percent of the variability in log10 E. coli concentrations. Regression-tree models were developed from data from both Memorial and Metropolitan Beaches but were found to have limited predictive capability in this study. The results indicate that too few observations were available to develop reliable regression-tree models. Linear logistic models were developed to estimate the probability of E. coli concentrations exceeding 300 most probable number (MPN) per 100 milliliters (mL). Rainfall amounts before bacteria sampling were positively associated with exceedance probabilities at both beaches. Flow of Clinton River, turbidity, and log10 E. coli concentrations measured before or after the target E. coli measurements were related to exceedances at one or both beaches. The linear logistic models were effective in estimating bacteria exceedances at both beaches. 
A receiver operating characteristic (ROC) analysis was used to determine cut points for maximizing the true positive rate prediction while minimizing the false positive rate. A two-dimensional hydrodynamic model was developed to simulate horizontal current patterns on Lake St. Clair in response to wind, flow, and water-level conditions at model boundaries. Simulated velocity fields were used to track hypothetical massless particles backward in time from the beaches along flow paths toward source areas. Reverse particle tracking for idealized steady-state conditions shows changes in expected flow paths and traveltimes with wind speeds and directions from 24 sectors. The results indicate that three to four sets of contiguous wind sectors have similar effects on flow paths in the vicinity of the beaches. In addition, reverse particle tracking was used for transient conditions to identify expected flow paths for 10 E. coli sampling events in 2004. These results demonstrate the ability to track hypothetical particles from the beaches, backward in time, to likely source areas. This ability, coupled with a greater frequency of bacteria sampling, may provide insight into changes in bacteria concentrations between source and sink areas.
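The ROC cut-point selection described above, maximizing the true positive rate while minimizing the false positive rate, is commonly done by maximizing Youden's J statistic (TPR minus FPR) over candidate thresholds. A minimal sketch on toy scores follows; the data are illustrative and this is not necessarily the exact criterion used in the study.

```python
def youden_cutpoint(scores, labels):
    """Choose the threshold maximizing TPR - FPR (Youden's J statistic)
    for binary labels, classifying score >= threshold as positive."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tpr = sum(1 for s, l in zip(scores, labels) if l and s >= t) / pos
        fpr = sum(1 for s, l in zip(scores, labels) if not l and s >= t) / neg
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t, best_j

# Toy exceedance probabilities from a logistic model, and observed outcomes.
scores = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
labels = [0, 0, 0, 1, 1, 1]
t, j = youden_cutpoint(scores, labels)
```

On this perfectly separable toy data the chosen cut point isolates all exceedances with no false positives (J = 1); real beach data would trade the two rates off.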

  14. Low temperature Voigt effect in the terbium gallium garnet crystal.

    PubMed

    Akbar, Ali; Khalid, Muhammad Waleed; Anwar, Muhammad Sabieh

    2017-11-27

    Magnetic linear birefringence and dichroism are investigated for the paramagnetic terbium gallium garnet (TGG) single crystal in the temperature range 8-100 K. The reciprocal nature of the linear birefringence is confirmed. Furthermore, a theoretical model is validated that describes the intermixing of linear and circular birefringence. The ellipticity and rotation of the polarization ellipse are investigated in the light of these measurements. These otherwise minuscule magnetically induced effects are amplified at cryogenic temperatures and are determined by a phase-sensitive technique based on the Fourier decomposition of detected signal intensities. The corresponding measurements also allow us to determine the Curie-Weiss constant, corroborating the presence of a magnetically frustrated spin system. Additionally, we show how the Voigt geometry enables determining the direction of a magnetic field.

  15. Architectures for wrist-worn energy harvesting

    NASA Astrophysics Data System (ADS)

    Rantz, R.; Halim, M. A.; Xue, T.; Zhang, Q.; Gu, L.; Yang, K.; Roundy, S.

    2018-04-01

    This paper reports the simulation-based analysis of six dynamical structures with respect to their wrist-worn vibration energy harvesting capability. This work approaches the problem of maximizing energy harvesting potential at the wrist by considering multiple mechanical substructures; rotational and linear motion-based architectures are examined. Mathematical models are developed and experimentally corroborated. An optimization routine is applied to the proposed architectures to maximize average power output and allow for comparison. The addition of a linear spring element to the structures has the potential to improve power output; for example, in the case of rotational structures, a 211% improvement in power output was estimated under real walking excitation. The analysis concludes that a sprung rotational harvester architecture outperforms a sprung linear architecture by 66% when real walking data is used as input to the simulations.
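The benefit of adding a spring shows up even in a minimal linear harvester model: a base-excited spring-mass system with mechanical and electrical damping, where harvested power is taken as the power dissipated in the electrical damper. The parameters and excitation below are illustrative, not those of the paper's wrist-worn architectures.

```python
import math

def avg_harvested_power(m, k, c_elec, c_mech, amp, freq, t_end=20.0, dt=1e-4):
    """Average power dissipated in the electrical damper of a base-excited
    linear spring-mass harvester, integrated with semi-implicit Euler.
    x is the proof-mass displacement relative to the moving base."""
    x = v = 0.0
    energy = 0.0
    t = 0.0
    for _ in range(int(t_end / dt)):
        a_base = amp * math.sin(2.0 * math.pi * freq * t)
        a = (-k * x - (c_elec + c_mech) * v) / m - a_base
        v += a * dt
        x += v * dt
        energy += c_elec * v * v * dt  # power into the electrical damper
        t += dt
    return energy / t_end

# Illustrative parameters only: 10 g proof mass, ~15.9 Hz natural frequency.
f_n = math.sqrt(100.0 / 0.01) / (2.0 * math.pi)
p_res = avg_harvested_power(0.01, 100.0, 0.02, 0.02, 1.0, f_n)
p_off = avg_harvested_power(0.01, 100.0, 0.02, 0.02, 1.0, 3.0 * f_n)
```

Driving at the natural frequency yields far more power than driving well above it, which is why tuning the spring to the dominant excitation band matters for the sprung architectures compared in the paper.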

  16. Fiber-optic epoxy composite cure sensor. I. Dependence of refractive index of an autocatalytic reaction epoxy system at 850 nm on temperature and extent of cure

    NASA Astrophysics Data System (ADS)

    Lam, Kai-Yuen; Afromowitz, Martin A.

    1995-09-01

    We discuss the behavior of the refractive index of a typical epoxy-aromatic diamine system. Near 850 nm the index of refraction is found to be largely controlled by the density of the epoxy. Models are derived to describe its dependence on temperature and extent of cure. Within the range of temperatures studied, the refractive index decreases linearly with increasing temperature. In addition, as the epoxy is cured, the refractive index increases linearly with conversion to the gel point. From then on, shrinkage in the volume of the epoxy is restricted by local viscosity. Therefore the linear relationship between the refractive index and the extent of cure does not hold beyond the gel point.

  17. Particle rejuvenation of Rao-Blackwellized sequential Monte Carlo smoothers for conditionally linear and Gaussian models

    NASA Astrophysics Data System (ADS)

    Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric

    2017-12-01

    This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.
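The building block that Rao-Blackwellization exploits is the closed-form Kalman recursion: conditional on a sampled regime sequence, the Gaussian state is integrated out exactly instead of being sampled. A minimal scalar predict/update step (illustrative parameters, not the paper's commodity-market model):

```python
def kalman_step(m, P, y, a, q, h, r):
    """One predict/update step of a scalar Kalman filter for
        x_t = a * x_{t-1} + N(0, q),    y_t = h * x_t + N(0, r).
    In Rao-Blackwellized SMC, a step like this runs per particle,
    conditional on that particle's sampled regime."""
    # Predict
    m_pred = a * m
    P_pred = a * a * P + q
    # Update with observation y
    S = h * h * P_pred + r          # innovation variance
    K = P_pred * h / S              # Kalman gain
    m_new = m_pred + K * (y - h * m_pred)
    P_new = (1.0 - K * h) * P_pred
    return m_new, P_new

# Filter three toy observations starting from a diffuse prior N(0, 1).
m, P = 0.0, 1.0
for y in [1.2, 0.9, 1.1]:
    m, P = kalman_step(m, P, y, a=1.0, q=0.1, h=1.0, r=0.5)
```

Each observation pulls the posterior mean toward the data and shrinks the posterior variance, which is exactly the variance reduction Rao-Blackwellized smoothers inherit over plain particle approximations.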

  18. Quantitative sensing of corroded steel rebar embedded in cement mortar specimens using ultrasonic testing

    NASA Astrophysics Data System (ADS)

    Owusu Twumasi, Jones; Le, Viet; Tang, Qixiang; Yu, Tzuyang

    2016-04-01

    Corrosion of steel reinforcing bars (rebars) is the primary cause of the deterioration of reinforced concrete structures. Traditional corrosion monitoring methods such as half-cell potential and linear polarization resistance can only detect the presence of corrosion but cannot quantify it. This study presents an experimental investigation of quantifying the degree of corrosion of steel rebar inside cement mortar specimens using ultrasonic testing (UT). A UT device with two 54 kHz transducers was used to measure the ultrasonic pulse velocity (UPV) of cement mortar, uncorroded and corroded reinforced cement mortar specimens, utilizing the direct transmission method. The results obtained from the study show that UPV decreases linearly with increasing degree of corrosion and corrosion-induced cracks (surface cracks). With respect to quantifying the degree of corrosion, a model was developed by simultaneously fitting UPV and surface crack width measurements to a two-parameter linear model. The proposed model can be used for predicting the degree of corrosion of steel rebar embedded in cement mortar, under conditions similar to those used in this study, up to 3.03%. Furthermore, the modeling approach can be applied to corroded reinforced concrete specimens with additional modification. The findings from this study show that UT has the potential of quantifying the degree of corrosion inside reinforced cement mortar specimens.
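A two-parameter linear model of the kind described is a plain least-squares line (intercept and slope). The sketch below fits one to made-up velocity/corrosion numbers, which are illustrative only and not the study's measurements.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x (two parameters),
    as one might regress UPV against degree of corrosion."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical data: pulse velocity (m/s) dropping linearly with corrosion (%).
corrosion = [0.0, 1.0, 2.0, 3.0]
upv = [4000.0, 3950.0, 3900.0, 3850.0]
a, b = fit_line(corrosion, upv)
```

Inverting the fitted line (corrosion = (a - UPV) / -b) then predicts degree of corrosion from a measured velocity, the direction in which the study's model is used.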

  19. Construction and analysis of a modular model of caspase activation in apoptosis

    PubMed Central

    Harrington, Heather A; Ho, Kenneth L; Ghosh, Samik; Tung, KC

    2008-01-01

    Background A key physiological mechanism employed by multicellular organisms is apoptosis, or programmed cell death. Apoptosis is triggered by the activation of caspases in response to both extracellular (extrinsic) and intracellular (intrinsic) signals. The extrinsic and intrinsic pathways are characterized by the formation of the death-inducing signaling complex (DISC) and the apoptosome, respectively; both the DISC and the apoptosome are oligomers with complex formation dynamics. Additionally, the extrinsic and intrinsic pathways are coupled through the mitochondrial apoptosis-induced channel via the Bcl-2 family of proteins. Results A model of caspase activation is constructed and analyzed. The apoptosis signaling network is simplified through modularization methodologies and equilibrium abstractions for three functional modules. The mathematical model is composed of a system of ordinary differential equations which is numerically solved. Multiple linear regression analysis investigates the role of each module, and reduced models are constructed to identify key contributions of the extrinsic and intrinsic pathways in triggering apoptosis for different cell lines. Conclusion Through linear regression techniques, we identified the feedbacks, dissociation of complexes, and negative regulators as the key components in apoptosis. The analysis and reduced models for our model formulation reveal that the chosen cell lines predominantly exhibit strong extrinsic caspase behavior, typical of type I cells. Furthermore, under the simplified model framework, the selected cell lines exhibit different modes by which caspase activation may occur. Finally, the proposed modularized model of apoptosis may generalize behavior for additional cells and tissues, specifically identifying and predicting components responsible for the transition from type I to type II cell behavior. PMID:19077196

  20. Anomalous intrinsic viscosity of octadecylamine-functionalised carbon nanotubes in suspension.

    PubMed

    Donovan, K J; Scott, K

    2013-06-28

    Single-walled carbon nanotubes, SWCNTs, are used as a model cylinder of nanoscopic dimensions for testing rheological theories of how the addition of cylindrical particles affects the viscosity of a suspension of such particles. Using the rate of growth of the accompanying induced linear dichroism following application of an applied electric field, the dynamics of carbon nanotube alignment is studied in suspensions of octadecylamine-functionalised single-walled carbon nanotubes, ODA-SWCNTs, in 1,2-dichloroethane. From such measurements the viscosity of the suspension is measured as the concentration of the suspension is varied. While working within the dilute limit, the viscosity is found to increase linearly with concentration and the intrinsic viscosity of the suspension is found to be 8000. This anomalously high intrinsic viscosity is compared with the predictions of various models for a rigid cylinder and found to be incompatible with any of the current models. Some suggestions are made as to the way this ODA-SWCNT result may eventually be accommodated within other models.

  1. Simulating run-up on steep slopes with operational Boussinesq models; capabilities, spurious effects and instabilities

    NASA Astrophysics Data System (ADS)

    Løvholt, F.; Lynett, P.; Pedersen, G.

    2013-06-01

    Tsunamis induced by rock slides plunging into fjords constitute a severe threat to local coastal communities. The rock slide impact may give rise to highly non-linear waves in the near field, and because the wave lengths are relatively short, frequency dispersion comes into play. Fjord systems are rugged with steep slopes, and modeling non-linear dispersive waves in this environment with simultaneous run-up is demanding. We have run an operational Boussinesq-type TVD (total variation diminishing) model using different run-up formulations. Two different tests are considered, inundation on steep slopes and propagation in a trapezoidal channel. In addition, a set of Lagrangian models serves as reference models. Demanding test cases were applied, with solitary wave amplitudes ranging from 0.1 to 0.5 and slopes ranging from 10 to 50°. Different run-up formulations yielded clearly different accuracy and stability, and only some provided accuracy similar to the reference models. The test cases revealed that the model was prone to instabilities for large non-linearity and fine resolution. Some of the instabilities were linked with false breaking during the first positive inundation, which was not observed for the reference models. None of the models were able to handle the bore forming during drawdown, however. The instabilities are linked to short-crested undulations on the grid scale, and appear on fine resolution during inundation. As a consequence, convergence was not always obtained. There is reason to believe that this instability may be a general problem for Boussinesq models in fjords.

  2. A general science-based framework for dynamical spatio-temporal models

    USGS Publications Warehouse

    Wikle, C.K.; Hooten, M.B.

    2010-01-01

    Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially-explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been related to the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often overparameterized. Hierarchical models have proven invaluable in their ability to deal with this issue to some extent by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science-based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e.g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been on the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case.
We then develop a general nonlinear spatio-temporal framework that we call general quadratic nonlinearity and demonstrate that it accommodates many different classes of science-based parameterizations as special cases. The model is presented in a hierarchical Bayesian framework and is illustrated with examples from ecology and oceanography. © 2010 Sociedad de Estadística e Investigación Operativa.
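In its simplest form, the linear, first-order Markovian dynamical model discussed above is a vector autoregression, x_t = A x_{t-1} + w_t, where each entry of x_t is the process at one spatial location. The sketch below simulates a stable two-site example; the propagator matrix is hypothetical, chosen only to illustrate local persistence plus neighbour coupling.

```python
import random

def simulate_linear_dstm(A, x0, steps, noise_sd=0.1, seed=0):
    """Simulate the first-order Markovian linear model x_t = A x_{t-1} + w_t
    with independent Gaussian innovations w_t."""
    rng = random.Random(seed)
    x = list(x0)
    traj = [x]
    for _ in range(steps):
        x = [sum(aij * xj for aij, xj in zip(row, x)) + rng.gauss(0.0, noise_sd)
             for row in A]
        traj.append(x)
    return traj

# Hypothetical two-site propagator: diagonal persistence, off-diagonal coupling.
# Eigenvalues are 0.7 and 0.3, so the process is stable and forgets x0.
A = [[0.5, 0.2],
     [0.2, 0.5]]
traj = simulate_linear_dstm(A, [5.0, -5.0], steps=200)
```

The curse of dimensionality enters through A: with n locations it has n² free parameters, which is what the hierarchical, science-based parameterizations described above are designed to constrain.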

  3. Correlation between the change in the kinetics of the ribosomal RNA rrnB P2 promoter and the transition from lag to exponential phase with Pseudomonas fluorescens.

    PubMed

    McKellar, Robin C

    2008-01-15

    Developing accurate mathematical models to describe the pre-exponential lag phase in food-borne pathogens presents a considerable challenge to food microbiologists. While the growth rate is influenced by current environmental conditions, the lag phase is additionally affected by the history of the inoculum. A deeper understanding of physiological changes taking place during the lag phase would improve the accuracy of models, and in earlier studies a strain of Pseudomonas fluorescens containing the Tn7-luxCDABE gene cassette regulated by the rRNA promoter rrnB P2 was used to measure the influence of starvation, growth temperature and sub-lethal heating on promoter expression and subsequent growth. The present study expands the models developed earlier to include a model which describes the change from exponential to linear increase in promoter expression with time when the exponential phase of growth commences. A two-phase linear model with Poisson weighting was used to estimate the lag (LPDLin) and the rate (RLin) for this linear increase in bioluminescence. The Spearman rank correlation coefficient (r=0.830) between the LPDLin and the growth lag phase (LPDOD) was extremely significant (P
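A two-phase linear model of this kind (flat baseline up to a breakpoint, then a linear rise) can be fit by grid-searching the breakpoint and solving least squares for the remaining two parameters. The sketch uses ordinary unweighted least squares rather than the Poisson weighting used in the study, and the toy bioluminescence trace is illustrative.

```python
def fit_two_phase(ts, ys):
    """Fit y(t) = c for t <= lag and y(t) = c + r*(t - lag) for t > lag,
    grid-searching the breakpoint over observed times and solving ordinary
    least squares for the baseline c and rate r at each candidate."""
    best = None
    for i in range(1, len(ts) - 1):
        lag = ts[i]
        zs = [max(t - lag, 0.0) for t in ts]  # hinge regressor
        n = len(ts)
        mz, my = sum(zs) / n, sum(ys) / n
        szz = sum((z - mz) ** 2 for z in zs)
        szy = sum((z - mz) * (y - my) for z, y in zip(zs, ys))
        r = szy / szz
        c = my - r * mz
        sse = sum((c + r * z - y) ** 2 for z, y in zip(zs, ys))
        if best is None or sse < best[0]:
            best = (sse, lag, c, r)
    return best[1], best[2], best[3]  # lag, baseline, rate

# Toy trace: flat at 1.0 until t = 3, then rising at rate 2.
ts = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [1.0, 1.0, 1.0, 1.0, 3.0, 5.0, 7.0, 9.0]
lag, c, r = fit_two_phase(ts, ys)
```

Here `lag` and `r` play the roles of the LPDLin and RLin estimates described in the abstract.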

  4. Normal forms for reduced stochastic climate models

    PubMed Central

    Majda, Andrew J.; Franzke, Christian; Crommelin, Daan

    2009-01-01

    The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Here techniques from applied mathematics are utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. The use of a few Empirical Orthogonal Functions (EOFs) (also known as Principal Component Analysis, Karhunen–Loève and Proper Orthogonal Decomposition) depending on observational data to span the low-frequency subspace requires the assessment of dyad interactions besides the more familiar triads in the interaction between the low- and high-frequency subspaces of the dynamics. It is shown below that the dyad and multiplicative triad interactions combine with the climatological linear operator interactions to simultaneously produce both strong nonlinear dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. For a single low-frequency variable the dyad interactions and climatological linear operator alone produce a normal form with CAM noise from advection of the large scales by the small scales and simultaneously strong cubic damping. These normal forms should prove useful for developing systematic strategies for the estimation of stochastic models from climate data. As an illustrative example the one-dimensional normal form is applied below to low-frequency patterns such as the North Atlantic Oscillation (NAO) in a climate model. The results here also illustrate the shortcomings of a recent linear scalar CAM noise model proposed elsewhere for low-frequency variability. PMID:19228943
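
    A scalar normal form of the kind described, with linear plus cubic damping and CAM-type noise, can be simulated with a minimal Euler-Maruyama scheme; the parameter values below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    lam, c = 0.5, 1.0          # linear and cubic damping coefficients (illustrative)
    sig_a, sig_m = 0.3, 0.2    # additive and multiplicative noise amplitudes
    dt, n_steps = 1e-3, 200_000

    dW = rng.normal(0.0, np.sqrt(dt), n_steps)  # Brownian increments
    x = 0.0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        # dx = (-lam*x - c*x**3) dt + (sig_a + sig_m*x) dW   (CAM noise)
        x += (-lam * x - c * x**3) * dt + (sig_a + sig_m * x) * dW[i]
        samples[i] = x
    ```

    The cubic term keeps the multiplicative noise from destabilizing the trajectory, mirroring the balance between strong cubic damping and CAM noise described in the abstract.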

  5. On the linear programming bound for linear Lee codes.

    PubMed

    Astola, Helena; Tabus, Ioan

    2016-01-01

    Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to fast execution, which allows the bounds to be computed efficiently for large parameter values of the linear codes.

  6. Additive effects of word frequency and stimulus quality: the influence of trial history and data transformations.

    PubMed

    Balota, David A; Aschenbrenner, Andrew J; Yap, Melvin J

    2013-09-01

    A counterintuitive and theoretically important pattern of results in the visual word recognition literature is that both word frequency and stimulus quality produce large but additive effects in lexical decision performance. The additive nature of these effects has recently been called into question by Masson and Kliegl (in press), who used linear mixed effects modeling to provide evidence that the additive effects were actually being driven by previous trial history. Because Masson and Kliegl also included semantic priming as a factor in their study and recent evidence has shown that semantic priming can moderate the additivity of word frequency and stimulus quality (Scaltritti, Balota, & Peressotti, 2012), we reanalyzed data from 3 published studies to determine if previous trial history moderated the additive pattern when semantic priming was not also manipulated. The results indicated that previous trial history did not influence the joint influence of word frequency and stimulus quality. More important, and independent of Masson and Kliegl's conclusions, we also show how a common transformation used in linear mixed effects analyses to normalize the residuals can systematically alter the way in which two variables combine to influence performance. Specifically, using transformed, rather than raw reaction times, consistently produces more underadditive patterns. PsycINFO Database Record (c) 2013 APA, all rights reserved.
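
    The transformation point can be seen with hypothetical cell means: a frequency effect of 50 ms at both stimulus-quality levels is exactly additive in raw reaction times, but becomes underadditive after a log transform, because equal millisecond differences are proportionally smaller at slower baselines.

    ```python
    import math

    # Hypothetical cell means (ms); the 50 ms frequency effect is the same at
    # both quality levels, i.e. perfectly additive on the raw scale.
    clear_hi, clear_lo = 600.0, 650.0         # clear stimuli: high-/low-frequency words
    degraded_hi, degraded_lo = 700.0, 750.0   # degraded stimuli

    raw_interaction = (degraded_lo - degraded_hi) - (clear_lo - clear_hi)
    log_interaction = (math.log(degraded_lo) - math.log(degraded_hi)) \
                      - (math.log(clear_lo) - math.log(clear_hi))
    # raw_interaction is 0 (additive); log_interaction is negative (underadditive)
    ```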

  7. A flexible count data regression model for risk analysis.

    PubMed

    Guikema, Seth D; Coffelt, Jeremy P

    2008-02-01

    In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information in the form of explanatory variables that can be used to help estimate the likelihood of different numbers of events in the future through the use of an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLM) are limited in their ability to handle the types of variance structures often encountered in using count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM can provide fits to data as good as those of the commonly used existing models for overdispersed data sets while outperforming these commonly used models for underdispersed data sets.
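
    The COM-Poisson distribution underlying the proposed GLM has probability mass proportional to λ^y / (y!)^ν, recovering the Poisson at ν = 1, underdispersion for ν > 1, and overdispersion for ν < 1. A truncated-sum sketch of that behavior (not the authors' reformulation):

    ```python
    import math

    def com_poisson_pmf(lam, nu, ymax=60):
        # Truncated COM-Poisson pmf: weights lam**y / (y!)**nu, computed in
        # log space for numerical stability, then normalized.
        logw = [y * math.log(lam) - nu * math.lgamma(y + 1) for y in range(ymax)]
        top = max(logw)
        w = [math.exp(v - top) for v in logw]
        z = sum(w)
        return [v / z for v in w]

    def mean_var(pmf):
        m = sum(y * p for y, p in enumerate(pmf))
        v = sum((y - m) ** 2 * p for y, p in enumerate(pmf))
        return m, v
    ```

    With λ = 4, ν = 1 reproduces Poisson mean and variance of 4; ν = 2 gives variance below the mean (underdispersion) and ν = 0.5 variance above it (overdispersion).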

  8. Linear dynamic range enhancement in a CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor)

    2008-01-01

    A CMOS imager with increased linear dynamic range but without degradation in noise, responsivity, linearity, fixed-pattern noise, or photometric calibration comprises a linear calibrated dual gain pixel in which the gain is reduced after a pre-defined threshold level by switching in an additional capacitance. The pixel may include a novel on-pixel latch circuit that is used to switch in the additional capacitance.

  9. A polynomial based model for cell fate prediction in human diseases.

    PubMed

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e., correlation-based and apoptosis-pathway-based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, within both of the two considered gene selection methods, the prediction accuracies of polynomials of different degrees show little difference. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than others. When comparing the linear polynomials based on the two gene selection methods, it shows that although the accuracy of the linear polynomial that uses correlation analysis outcomes is a little higher (86.62%), the one based on genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is a preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.
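
    The degree-selection step can be sketched with 10-fold cross-validation on synthetic data; the study used selected gene-expression features, whereas here a single synthetic predictor with a linear ground truth stands in.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    x = rng.uniform(-1.0, 1.0, 120)
    y = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, x.size)  # hypothetical linear "fate score"

    def cv_mse(x, y, degree, k=10):
        # k-fold cross-validated mean squared error of a polynomial fit.
        idx = np.arange(x.size)
        folds = np.array_split(idx, k)
        errs = []
        for fold in folds:
            train = np.setdiff1d(idx, fold)
            coef = np.polyfit(x[train], y[train], degree)
            pred = np.polyval(coef, x[fold])
            errs.append(np.mean((pred - y[fold]) ** 2))
        return float(np.mean(errs))

    scores = {d: cv_mse(x, y, d) for d in (1, 2, 3)}
    ```

    On linear ground truth the degrees score similarly, echoing the abstract's finding that accuracies differ little while the linear model remains the simplest, most stable choice.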

  10. Quantification and parametrization of non-linearity effects by higher-order sensitivity terms in scattered light differential optical absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Puķīte, Jānis; Wagner, Thomas

    2016-05-01

    We address the application of differential optical absorption spectroscopy (DOAS) of scattered light observations in the presence of strong absorbers (in particular ozone), for which the absorption optical depth is a non-linear function of the trace gas concentration. This is the case because the Beer-Lambert law generally does not hold for scattered light measurements due to many light paths contributing to the measurement. While a linear approximation can be made in many cases, for scenarios with strong absorptions non-linear effects cannot always be neglected. This is especially the case for observation geometries for which the light contributing to the measurement crosses the atmosphere along spatially well-separated paths differing strongly in length and location, as in limb geometry. In these cases, often full retrieval algorithms are applied to address the non-linearities, requiring iterative forward modelling of absorption spectra involving time-consuming wavelength-by-wavelength radiative transfer modelling. In this study, we propose to describe the non-linear effects by additional sensitivity parameters that can be used, e.g., to build up a lookup table. Together with widely used box air mass factors (effective light paths) describing the linear response to the increase in the trace gas amount, the higher-order sensitivity parameters eliminate the need for repeating the radiative transfer modelling when modifying the absorption scenario even in the presence of a strong absorption background. While the higher-order absorption structures can be described as separate fit parameters in the spectral analysis (so-called DOAS fit), in practice their quantitative evaluation requires good measurement quality (typically better than that available from current measurements). 
Therefore, we introduce an iterative retrieval algorithm correcting for the higher-order absorption structures not yet considered in the DOAS fit as well as the absorption dependence on temperature and scattering processes.

  11. Precise Point Positioning Using Triple GNSS Constellations in Various Modes

    PubMed Central

    Afifi, Akram; El-Rabbany, Ahmed

    2016-01-01

    This paper introduces a new dual-frequency precise point positioning (PPP) model, which combines the observations from three different global navigation satellite system (GNSS) constellations, namely GPS, Galileo, and BeiDou. Combining measurements from different GNSS systems introduces additional biases, including inter-system bias and hardware delays, which require rigorous modelling. Our model is based on the un-differenced and between-satellite single-difference (BSSD) linear combinations. BSSD linear combination cancels out some receiver-related biases, including receiver clock error and non-zero initial phase bias of the receiver oscillator. Forming the BSSD linear combination requires a reference satellite, which can be selected from any of the GPS, Galileo, and BeiDou systems. In this paper, three BSSD scenarios are tested; each considers a reference satellite from a different GNSS constellation. Natural Resources Canada’s GPSPace PPP software is modified to enable a combined GPS, Galileo, and BeiDou PPP solution and to handle the newly introduced biases. A total of four data sets collected at four different IGS stations are processed to verify the developed PPP model. Precise satellite orbit and clock products from the International GNSS Service Multi-GNSS Experiment (IGS-MGEX) network are used to correct the GPS, Galileo, and BeiDou measurements in the post-processing PPP mode. A real-time PPP solution is also obtained, which is referred to as RT-PPP in the sequel, through the use of the IGS real-time service (RTS) for satellite orbit and clock corrections. However, only GPS and Galileo observations are used for the RT-PPP solution, as the RTS-IGS satellite products are not presently available for the BeiDou system. All post-processed and real-time PPP solutions are compared with the traditional un-differenced GPS-only counterparts. 
It is shown that combining the GPS, Galileo, and BeiDou observations in the post-processing mode improves the PPP convergence time by 25% compared with the GPS-only counterpart, regardless of the linear combination used. The use of the BSSD linear combination improves the precision of the estimated positioning parameters by about 25% in comparison with the GPS-only PPP solution. Additionally, the solution convergence time is reduced to 10 minutes for the BSSD model, which represents about a 50% reduction in comparison with the GPS-only PPP solution. The GNSS RT-PPP solution, on the other hand, shows a similar convergence time and precision to the GPS-only counterpart. PMID:27240376
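
    The key property of the BSSD combination, cancellation of receiver-common biases such as the receiver clock error, can be illustrated with a hypothetical one-epoch example; the satellite IDs and ranges below are made up.

    ```python
    # One epoch of hypothetical pseudoranges (metres). The receiver clock bias
    # is common to every observation, so differencing between satellites
    # removes it (up to floating-point rounding).
    geometric_range = {"G01": 21_500_000.0, "G07": 23_100_000.0, "E11": 24_000_000.0}
    receiver_clock_bias_m = 147.3  # identical for all satellites

    pseudorange = {sat: r + receiver_clock_bias_m for sat, r in geometric_range.items()}

    ref = "G01"  # reference satellite, as in the abstract's BSSD scenarios
    bssd = {sat: pseudorange[sat] - pseudorange[ref]
            for sat in pseudorange if sat != ref}
    ```

    Each BSSD observable equals the difference of geometric ranges alone; the choice of reference satellite (here a GPS one) fixes which constellation anchors the differencing, as in the three scenarios tested in the paper.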

  12. Precise Point Positioning Using Triple GNSS Constellations in Various Modes.

    PubMed

    Afifi, Akram; El-Rabbany, Ahmed

    2016-05-28

    This paper introduces a new dual-frequency precise point positioning (PPP) model, which combines the observations from three different global navigation satellite system (GNSS) constellations, namely GPS, Galileo, and BeiDou. Combining measurements from different GNSS systems introduces additional biases, including inter-system bias and hardware delays, which require rigorous modelling. Our model is based on the un-differenced and between-satellite single-difference (BSSD) linear combinations. BSSD linear combination cancels out some receiver-related biases, including receiver clock error and non-zero initial phase bias of the receiver oscillator. Forming the BSSD linear combination requires a reference satellite, which can be selected from any of the GPS, Galileo, and BeiDou systems. In this paper, three BSSD scenarios are tested; each considers a reference satellite from a different GNSS constellation. Natural Resources Canada's GPSPace PPP software is modified to enable a combined GPS, Galileo, and BeiDou PPP solution and to handle the newly introduced biases. A total of four data sets collected at four different IGS stations are processed to verify the developed PPP model. Precise satellite orbit and clock products from the International GNSS Service Multi-GNSS Experiment (IGS-MGEX) network are used to correct the GPS, Galileo, and BeiDou measurements in the post-processing PPP mode. A real-time PPP solution is also obtained, which is referred to as RT-PPP in the sequel, through the use of the IGS real-time service (RTS) for satellite orbit and clock corrections. However, only GPS and Galileo observations are used for the RT-PPP solution, as the RTS-IGS satellite products are not presently available for the BeiDou system. All post-processed and real-time PPP solutions are compared with the traditional un-differenced GPS-only counterparts. 
It is shown that combining the GPS, Galileo, and BeiDou observations in the post-processing mode improves the PPP convergence time by 25% compared with the GPS-only counterpart, regardless of the linear combination used. The use of the BSSD linear combination improves the precision of the estimated positioning parameters by about 25% in comparison with the GPS-only PPP solution. Additionally, the solution convergence time is reduced to 10 minutes for the BSSD model, which represents about a 50% reduction in comparison with the GPS-only PPP solution. The GNSS RT-PPP solution, on the other hand, shows a similar convergence time and precision to the GPS-only counterpart.

  13. Short-term effects of meteorological factors on hand, foot and mouth disease among children in Shenzhen, China: Non-linearity, threshold and interaction.

    PubMed

    Zhang, Zhen; Xie, Xu; Chen, Xiliang; Li, Yuan; Lu, Yan; Mei, Shujiang; Liao, Yuxue; Lin, Hualiang

    2016-01-01

    Various meteorological factors have been associated with hand, foot and mouth disease (HFMD) among children; however, few studies have examined the non-linearity of and interactions among the meteorological factors. A generalized additive model with a log link allowing Poisson auto-regression and over-dispersion was applied to investigate the short-term effects of daily meteorological factors on childhood HFMD with adjustment for potential confounding factors. We found positive effects of mean temperature and wind speed; the excess relative risk (ERR) was 2.75% (95% CI: 1.98%, 3.53%) for a one-degree increase in daily mean temperature on lag day 6, and 3.93% (95% CI: 2.16% to 5.73%) for a 1 m/s increase in wind speed on lag day 3. We found a non-linear effect of relative humidity, with a low threshold at 45% and a high threshold at 85%, between which there was a positive effect; the ERR was 1.06% (95% CI: 0.85% to 1.27%) for a 1 percent increase in relative humidity on lag day 5. No significant effect was observed for rainfall or sunshine duration. For the interactive effects, we found a weak additive interaction between mean temperature and relative humidity, and slightly antagonistic interactions between mean temperature and wind speed, and between relative humidity and wind speed in the additive models, but the interactions were not statistically significant. This study suggests that mean temperature, relative humidity and wind speed might be risk factors for childhood HFMD in Shenzhen, and the interaction analysis indicates that these meteorological factors might have played their roles individually. Copyright © 2015 Elsevier B.V. All rights reserved.
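
    The reported humidity effect can be encoded as a simple threshold (piecewise-linear) term of the kind a generalized additive model might estimate. The thresholds (45%, 85%) and slope (1.06% per percentage point) come from the abstract; the piecewise-linear functional form itself is an illustrative simplification.

    ```python
    def humidity_log_rr(h, low=45.0, high=85.0, slope=0.0106):
        # Log relative risk of HFMD vs relative humidity h (%): flat below the
        # low threshold, rising ~1.06% per percentage point between the two
        # thresholds, flat again above the high threshold (sketch only).
        return slope * (min(max(h, low), high) - low)
    ```

    The clamped term reproduces the described behavior: no effect below 45%, a positive effect between the thresholds, and a plateau above 85%.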

  14. Numerical Test of the Additivity Principle in Anomalous Transport

    NASA Astrophysics Data System (ADS)

    Tamaki, Shuji

    2017-10-01

    The additivity principle (AP) is one of the remarkable predictions that systematically generates all information on current fluctuations once the value of average current in the linear response regime is input. However, conditions to justify the AP are still ambiguous. We hence consider three tractable models, and discuss possible conditions. The models include the harmonic chain (HC), momentum exchange (ME) model, and momentum flip (MF) model, which respectively show ballistic, anomalous, and diffusive transport. We compare the heat current cumulants predicted by the AP with exact numerical data obtained for these models. The HC does not show the AP, whereas the MF model satisfies it, as expected, since the AP was originally proposed for diffusive systems. Surprisingly, the ME model also shows the AP. The ME model is known to show the anomalous transport similar to that shown in nonlinear systems such as the Fermi-Pasta-Ulam model. Our finding indicates that general nonlinear systems may satisfy the AP. Possible conditions for satisfying the AP are discussed.

  15. IMPACT: Investigating the impact of Models of Practice for Allied health Care in subacuTe settings. A protocol for a quasi-experimental mixed methods study of cost effectiveness and outcomes for patients exposed to different models of allied health care.

    PubMed

    Coker, Freya; Williams, Cylie M; Taylor, Nicholas F; Caspers, Kirsten; McAlinden, Fiona; Wilton, Anita; Shields, Nora; Haines, Terry P

    2018-05-10

    This protocol considers three allied health staffing models across public health subacute hospitals. This quasi-experimental mixed-methods study, including qualitative process evaluation, aims to evaluate the impact of additional allied health services in subacute care, in rehabilitation and geriatric evaluation management settings, on patient, health service and societal outcomes. This health services research will analyse outcomes of patients exposed to different allied health models of care at three health services. Each health service will have a control ward (routine care) and an intervention ward (additional allied health). This project has two parts. Part 1: a whole of site data extraction for included wards. Outcome measures will include: length of stay, rate of readmissions, discharge destinations, community referrals, patient feedback and staff perspectives. Part 2: Functional Independence Measure scores will be collected every 2-3 days for the duration of 60 patient admissions. Data from part 1 will be analysed by linear regression analysis for continuous outcomes using patient-level data and logistic regression analysis for binary outcomes. Qualitative data will be analysed using a deductive thematic approach. For part 2, a linear mixed model analysis will be conducted using therapy service delivery and days since admission to subacute care as fixed factors in the model and individual participant as a random factor. Graphical analysis will be used to examine the growth curve of the model and transformations. The days since admission factor will be used to examine non-linear growth trajectories to determine if they lead to better model fit. Findings will be disseminated through local reports and to the Department of Health and Human Services Victoria. Results will be presented at conferences and submitted to peer-reviewed journals. 
The Monash Health Human Research Ethics committee approved this multisite research (HREC/17/MonH/144 and HREC/17/MonH/547). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  16. Batch-mode Reinforcement Learning for improved hydro-environmental systems management

    NASA Astrophysics Data System (ADS)

    Castelletti, A.; Galelli, S.; Restelli, M.; Soncini-Sessa, R.

    2010-12-01

    Despite the great progress made in the last decades, the optimal management of hydro-environmental systems still remains a very active and challenging research area. The combination of multiple, often conflicting interests, high non-linearities of the physical processes and the management objectives, strong uncertainties in the inputs, and a high-dimensional state makes the problem challenging and intriguing. Stochastic Dynamic Programming (SDP) is one of the most suitable methods for designing (Pareto) optimal management policies preserving the original problem complexity. However, it suffers from a dual curse, which, de facto, prevents its practical application to even reasonably complex water systems. (i) Computational requirements grow exponentially with state and control dimension (Bellman's curse of dimensionality), so that SDP cannot be used with water systems where the state vector includes more than a few (2-3) units. (ii) An explicit model of each system's component is required (curse of modelling) to anticipate the effects of the system transitions, i.e. any information included in the SDP framework can only be either a state variable described by a dynamic model or a stochastic disturbance, independent in time, with the associated pdf. Any exogenous information that could effectively improve the system operation cannot be explicitly considered in taking the management decision, unless a dynamic model is identified for each additional piece of information, thus adding to the problem complexity through the curse of dimensionality (additional state variables). To mitigate this dual curse, the combined use of batch-mode Reinforcement Learning (bRL) and Dynamic Model Reduction (DMR) techniques is explored in this study. bRL overcomes the curse of modelling by replacing explicit modelling with an external simulator and/or historical observations. 
The curse of dimensionality is averted using a functional approximation of the SDP value function based on proper non-linear regressors. DMR reduces the complexity and the associated computational requirements of non-linear distributed process based models, making them suitable for being included into optimization schemes. Results from real world applications of the approach are also presented, including reservoir operation with both quality and quantity targets.
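
    A toy sketch of the batch-mode idea: value estimates are fitted from a fixed batch of observed transitions rather than from an explicit system model. The reservoir-like chain below (states, target level, rewards) is entirely hypothetical, and a lookup table stands in for the non-linear regressors mentioned above.

    ```python
    import numpy as np

    n_states, n_actions, gamma = 5, 2, 0.9
    target = 2  # hypothetical desired storage level

    def step(s, a):
        # Toy deterministic "reservoir": action 1 raises, action 0 lowers the level.
        s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
        return s2, -abs(s2 - target)  # penalty for deviating from the target

    # Fixed batch of observed transitions (s, a, s2, r); the learner below
    # uses only this batch, never the simulator itself.
    batch = [(s, a, *step(s, a)) for s in range(n_states) for a in range(n_actions)]

    Q = np.zeros((n_states, n_actions))
    for _ in range(100):
        Q_new = np.zeros_like(Q)
        for s, a, s2, r in batch:
            Q_new[s, a] = r + gamma * Q[s2].max()  # fitted Q-iteration backup
        Q = Q_new

    policy = Q.argmax(axis=1)  # release decision per storage level
    ```

    The learned policy drives the level toward the target from either side, using only the recorded transitions.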

  17. The subtle danger of symmetry restrictions in time series regressions, with application to fertility models.

    PubMed

    Haynes, S E

    1983-10-01

    It is widely known that linear restrictions involve bias. What is not known is that some linear restrictions are especially dangerous for hypothesis testing. For some, the expected value of the restricted coefficient does not lie between (among) the true unconstrained coefficients, which implies that the estimate is not a simple average of these coefficients. In this paper, the danger is examined regarding the additive linear restriction almost universally imposed in statistical research--the restriction of symmetry. Symmetry implies that the response of the dependent variable to a unit decrease in an explanatory variable is identical, but of opposite sign, to the response to a unit increase. The 1st section of the paper demonstrates theoretically that a coefficient restricted by symmetry (unlike coefficients embodying other additive restrictions) is not a simple average of the unconstrained coefficients because the relevant interacted variables are inversely correlated by definition. The next section shows that, under the restriction of symmetry, fertility in Finland from 1885-1925 appears to respond in a prolonged manner to infant mortality (significant and positive with a lag of 4-6 years), suggesting a response to expected deaths. However, unconstrained estimates indicate that this finding is spurious. When the restriction is relaxed, the dominant response is rapid (significant and positive with a lag of 1-2 years) and stronger for declines in mortality, supporting an asymmetric response to actual deaths. For 2 reasons, the danger of the symmetry restriction may be especially pervasive. 1st, unlike most other linear constraints, symmetry is passively imposed merely by ignoring the possibility of asymmetry. 
2nd, models in a wide range of fields--including macroeconomics (e.g., demand for money, consumption, and investment models, and the Phillips curve), international economics (e.g., intervention models of central banks), and labor economics (e.g., sticky wage models)--predict asymmetry. The conclusion of the study is that, to avoid spurious hypothesis testing, empirical research should systematically test for asymmetry, especially when predicted by theory.
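
    The paper's remedy, estimating separate responses to increases and decreases, can be sketched with ordinary least squares on synthetic data in which declines truly matter twice as much as rises; the symmetric regression blends the two effects into one misleading average.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    dx = rng.normal(0.0, 1.0, 400)  # changes in the explanatory variable
    up, down = np.maximum(dx, 0.0), np.minimum(dx, 0.0)
    # Hypothetical asymmetric truth: declines matter twice as much as rises.
    y = 1.0 * up + 2.0 * down + rng.normal(0.0, 0.1, dx.size)

    X_sym = np.column_stack([dx, np.ones_like(dx)])        # symmetry imposed
    X_asym = np.column_stack([up, down, np.ones_like(dx)])  # asymmetry allowed
    b_sym, *_ = np.linalg.lstsq(X_sym, y, rcond=None)
    b_asym, *_ = np.linalg.lstsq(X_asym, y, rcond=None)
    # b_sym[0] blends the two true effects; b_asym recovers them separately.
    ```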

  18. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    PubMed

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models. 
Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers and may be viable alternative modeling techniques for EE prediction for hip- or thigh-worn accelerometers.

  19. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear model was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
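
    The bias-corrected, transformed-linear approach can be sketched as a log-log rating-curve fit with a smearing-type back-transformation correction (Duan's estimator here, which may differ from the specific bias correction evaluated in the report); all data below are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    q = rng.lognormal(3.0, 0.5, 2000)  # synthetic discharge
    log_c = 0.1 + 1.5 * np.log(q) + rng.normal(0.0, 0.5, q.size)
    c = np.exp(log_c)                  # synthetic sediment concentration

    # Transformed-linear rating curve: log c = a + b log q
    b, a = np.polyfit(np.log(q), np.log(c), 1)
    resid = np.log(c) - (a + b * np.log(q))

    naive = np.exp(a + b * np.log(q))           # biased back-transform (median, not mean)
    corrected = naive * np.mean(np.exp(resid))  # smearing-type bias correction
    ```

    The naive back-transform systematically underestimates the mean concentration; the smearing factor restores it, which is the role the bias correction plays in the rating-curve load estimates above.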

  20. Summary of the SeaRISE Project's Experiments on Modeled Ice-Sheet Contributions to Future Sea Level: Linearities and Non-linearities

    NASA Astrophysics Data System (ADS)

    Bindschadler, Robert

    2013-04-01

    The SeaRISE (Sea-level Response to Ice Sheet Evolution) project achieved ice-sheet model ensemble responses to a variety of prescribed changes to surface mass balance, basal sliding and ocean boundary melting. Greenland ice sheet models are more sensitive than Antarctic ice sheet models to likely atmospheric changes in surface mass balance, while Antarctic models are most sensitive to basal melting of its ice shelves. An experiment approximating the IPCC's RCP8.5 scenario produces first century contributions to sea level of 22.3 and 7.3 cm from Greenland and Antarctica, respectively, with a range among models of 62 and 17 cm, respectively. By 200 years, these projections increase to 53.2 and 23.4 cm, respectively, with ranges of 79 and 57 cm. The considerable range among models was not only in the magnitude of ice lost, but also in the spatial pattern of response to identical forcing. Despite this variation, the response of any single model to a large range in the forcing intensity was remarkably linear in most cases. Additionally, the results of sensitivity experiments to single types of forcing (i.e., only one of the surface mass balance, or basal sliding, or ocean boundary melting) could be summed to accurately predict any model's result for an experiment when multiple forcings were applied simultaneously. This suggests a limited amount of feedback through the ice sheet's internal dynamics between these types of forcing over the time scale of a few centuries (SeaRISE experiments lasted 500 years).

  1. On testing an unspecified function through a linear mixed effects model with multiple variance components

    PubMed Central

    Wang, Yuanjia; Chen, Huaihou

    2012-01-01

    Summary We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801
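
    The computational point, drawing the null distribution directly from a spectral decomposition instead of refitting the model for every bootstrap replicate, can be sketched schematically. The eigenvalues and degrees of freedom below are hypothetical placeholders, not quantities from the paper:

```python
import random

# Schematic stand-in: after a spectral decomposition, a variance-component test
# statistic reduces under the null to a ratio of weighted chi-square variables.
# Drawing it directly is far cheaper than refitting for each bootstrap sample.
random.seed(0)
lam = [3.0, 1.5, 0.7, 0.2]   # hypothetical eigenvalues from the decomposition
df_resid = 30                # hypothetical residual degrees of freedom

def null_draw():
    num = sum(l * random.gauss(0.0, 1.0) ** 2 for l in lam)
    den = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df_resid))
    return num / den

draws = sorted(null_draw() for _ in range(20000))
crit_95 = draws[int(0.95 * len(draws)) - 1]   # approximate 5% critical value
```

    Each draw costs a handful of normal variates, so even genome-wide critical values become cheap relative to refitting-based bootstrap.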

  3. On estimating probability of presence from use-availability or presence-background data.

    PubMed

    Phillips, Steven J; Elith, Jane

    2013-06-01

    A fundamental ecological modeling task is to estimate the probability that a species is present in (or uses) a site, conditional on environmental variables. For many species, available data consist of "presence" data (locations where the species [or evidence of it] has been observed), together with "background" data, a random sample of available environmental conditions. Recently published papers disagree on whether probability of presence is identifiable from such presence-background data alone. This paper aims to resolve the disagreement, demonstrating that additional information is required. We defined seven simulated species representing various simple shapes of response to environmental variables (constant, linear, convex, unimodal, S-shaped) and ran five logistic model-fitting methods using 1000 presence samples and 10 000 background samples; the simulations were repeated 100 times. The experiment revealed a stark contrast between two groups of methods: those based on a strong assumption that species' true probability of presence exactly matches a given parametric form had highly variable predictions and much larger RMS error than methods that take population prevalence (the fraction of sites in which the species is present) as an additional parameter. For six species, the former group grossly under- or overestimated probability of presence. The cause was not model structure or choice of link function, because all methods were logistic with linear and, where necessary, quadratic terms. Rather, the experiment demonstrates that an estimate of prevalence is not just helpful, but is necessary (except in special cases) for identifying probability of presence. We therefore advise against use of methods that rely on the strong assumption, due to Lele and Keim (recently advocated by Royle et al.) and Lancaster and Imbens. The methods are fragile, and their strong assumption is unlikely to be true in practice. 
We emphasize, however, that we are not arguing against standard statistical methods such as logistic regression, generalized linear models, and so forth, none of which requires the strong assumption. If probability of presence is required for a given application, there is no panacea for lack of data. Presence-background data must be augmented with an additional datum, e.g., species' prevalence, to reliably estimate absolute (rather than relative) probability of presence.
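
    The identifiability argument can be made concrete with a tiny simulation (all numbers hypothetical): two species whose probabilities of presence differ only by a constant factor induce exactly the same distribution of environmental values at presence locations, so presence-background data alone cannot distinguish them without a prevalence estimate.

```python
import random

random.seed(2)
xs = [random.random() for _ in range(20000)]   # background sample of an env. variable
p1 = lambda x: 0.8 * x                          # hypothetical response curve
p2 = lambda x: 0.4 * x                          # half the prevalence, same shape

def mean_env_at_presence(p):
    # Mean of the environmental variable among presence locations: background
    # points weighted by probability of presence. Any constant factor in p cancels.
    w = [p(x) for x in xs]
    return sum(x * wi for x, wi in zip(xs, w)) / sum(w)

m1 = mean_env_at_presence(p1)
m2 = mean_env_at_presence(p2)   # identical to m1 by construction
```

    Because the scaling constant cancels in the presence-point distribution, only relative probability of presence is identified; absolute probability needs the extra datum the abstract calls for.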

  4. Integration of Attributes from Non-Linear Characterization of Cardiovascular Time-Series for Prediction of Defibrillation Outcomes

    PubMed Central

    Shandilya, Sharad; Kurz, Michael C.; Ward, Kevin R.; Najarian, Kayvan

    2016-01-01

    Objective The timing of defibrillation is mostly at arbitrary intervals during cardio-pulmonary resuscitation (CPR), rather than during intervals when the out-of-hospital cardiac arrest (OOH-CA) patient is physiologically primed for successful countershock. Interruptions to CPR may negatively impact defibrillation success. Multiple defibrillations can be associated with decreased post-resuscitation myocardial function. We hypothesize that a more complete picture of the cardiovascular system can be gained through non-linear dynamics and integration of multiple physiologic measures from biomedical signals. Materials and Methods Retrospective analysis of 153 anonymized OOH-CA patients who received at least one defibrillation for ventricular fibrillation (VF) was undertaken. A machine learning model, termed Multiple Domain Integrative (MDI) model, was developed to predict defibrillation success. We explore the rationale for non-linear dynamics and statistically validate heuristics involved in feature extraction for model development. Performance of MDI is then compared to the amplitude spectrum area (AMSA) technique. Results 358 defibrillations were evaluated (218 unsuccessful and 140 successful). Non-linear properties (Lyapunov exponent > 0) of the ECG signals indicate a chaotic nature and validate the use of novel non-linear dynamic methods for feature extraction. Classification using MDI yielded ROC-AUC of 83.2% and accuracy of 78.8%, for the model built with ECG data only. Utilizing 10-fold cross-validation, at 80% specificity level, MDI (74% sensitivity) outperformed AMSA (53.6% sensitivity). At 90% specificity level, MDI had 68.4% sensitivity while AMSA had 43.3% sensitivity. Integrating available end-tidal carbon dioxide features into MDI, for the available 48 defibrillations, boosted ROC-AUC to 93.8% and accuracy to 83.3% at 80% sensitivity. 
Conclusion At clinically relevant sensitivity thresholds, the MDI provides improved performance as compared to AMSA, yielding fewer unsuccessful defibrillations. Addition of partial end-tidal carbon dioxide (PetCO2) signal improves accuracy and sensitivity of the MDI prediction model. PMID:26741805

  5. User's manual for LINEAR, a FORTRAN program to derive linear aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Patterson, Brian P.; Antoniewicz, Robert F.

    1987-01-01

    This report documents a FORTRAN program that provides a powerful and flexible tool for the linearization of aircraft models. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
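
    The core operation such a tool performs, numerically extracting state-space matrices from nonlinear equations of motion, can be sketched with finite differences. The toy dynamics below are an invented stand-in, not the aircraft model or algorithm from the report:

```python
import math

# Toy nonlinear dynamics xdot = f(x, u); illustrative only, not an aircraft model.
def f(x, u):
    v, theta = x
    return [-0.1 * v + 9.8 * math.sin(theta) + u[0],
            0.02 * v - 0.5 * theta]

def linearize(f, x0, u0, eps=1e-6):
    """Forward-difference Jacobians A = df/dx and B = df/du at a trim point."""
    n, m = len(x0), len(u0)
    f0 = f(x0, u0)
    A = [[0.0] * n for _ in range(n)]
    B = [[0.0] * m for _ in range(n)]
    for j in range(n):
        xp = list(x0)
        xp[j] += eps
        fj = f(xp, u0)
        for i in range(n):
            A[i][j] = (fj[i] - f0[i]) / eps
    for j in range(m):
        up = list(u0)
        up[j] += eps
        fj = f(x0, up)
        for i in range(n):
            B[i][j] = (fj[i] - f0[i]) / eps
    return A, B

A, B = linearize(f, [0.0, 0.0], [0.0])   # state and control matrices at the origin
```

    The returned A and B play the role of the state-equation matrices the abstract describes; an observation matrix would be obtained the same way from a user-supplied output function.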

  6. Assessing Local Model Adequacy in Bayesian Hierarchical Models Using the Partitioned Deviance Information Criterion

    PubMed Central

    Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.

    2010-01-01

    Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121

  7. Joint scaling laws in functional and evolutionary categories in prokaryotic genomes

    PubMed Central

    Grilli, J.; Bassetti, B.; Maslov, S.; Cosentino Lagomarsino, M.

    2012-01-01

    We propose and study a class-expansion/innovation/loss model of genome evolution taking into account biological roles of genes and their constituent domains. In our model, numbers of genes in different functional categories are coupled to each other. For example, an increase in the number of metabolic enzymes in a genome is usually accompanied by addition of new transcription factors regulating these enzymes. Such coupling can be thought of as a proportional ‘recipe’ for genome composition of the type ‘a spoonful of sugar for each egg yolk’. The model jointly reproduces two known empirical laws: the distribution of family sizes and the non-linear scaling of the number of genes in certain functional categories (e.g. transcription factors) with genome size. In addition, it allows us to derive a novel relation between the exponents characterizing these two scaling laws, establishing a direct quantitative connection between evolutionary and functional categories. It predicts that functional categories that grow faster than linearly with genome size are characterized by flatter-than-average family size distributions. This relation is confirmed by our bioinformatics analysis of prokaryotic genomes. This proves that the joint quantitative trends of functional and evolutionary classes can be understood in terms of evolutionary growth with proportional recipes. PMID:21937509

  8. A continuous damage model based on stepwise-stress creep rupture tests

    NASA Technical Reports Server (NTRS)

    Robinson, D. N.

    1985-01-01

    A creep damage accumulation model is presented that makes use of the Kachanov damage rate concept with a provision accounting for damage that results from a variable stress history. This is accomplished through the introduction of an additional term in the Kachanov rate equation that is linear in the stress rate. Specification of the material functions and parameters in the model requires two types of tests constituting a data base: (1) standard constant-stress creep rupture tests, and (2) a sequence of two-step creep rupture tests.
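
    A schematic reading of that rate equation can be integrated numerically. Everything below is hypothetical, made-up constants, a generic Kachanov-type power law, and a simple explicit-Euler scheme, intended only to show how a term linear in the stress rate contributes damage at step changes in a two-step test:

```python
# Hedged sketch: domega/dt = (sigma/A)^r / (1 - omega)^k plus a term linear in
# dsigma/dt, which integrates to a jump c*(sigma_new - sigma_old) at each step
# change. Constants are illustrative, not the report's calibrated functions.
A_, r, k, c = 200.0, 4.0, 4.0, 1e-4

def rupture_time(stress_steps, dt=1e-3):
    """stress_steps: list of (duration, stress) segments applied in order.
    Returns time at which damage omega reaches 1, or None if history ends first."""
    omega, t = 0.0, 0.0
    prev = stress_steps[0][1]            # no jump assumed at initial loading
    for dur, sigma in stress_steps:
        omega += c * (sigma - prev)      # stress-rate term: jump at step changes
        prev = sigma
        for _ in range(int(dur / dt)):
            omega += dt * (sigma / A_) ** r / (1.0 - omega) ** k
            t += dt
            if omega >= 1.0:
                return t
    return None

t_const = rupture_time([(10.0, 100.0)])   # constant-stress creep rupture test
t_high = rupture_time([(10.0, 120.0)])    # higher stress ruptures sooner
```

    For constant stress this reduces to the classical Kachanov rupture time (A/sigma)^r / (k+1); the jump term only matters for variable histories such as the two-step tests the abstract mentions.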

  9. Investigations into the triggered lightning response of the F106B thunderstorm research aircraft

    NASA Technical Reports Server (NTRS)

    Rudolph, Terence H.; Perala, Rodney A.; Mckenna, Paul M.; Parker, Steven L.

    1985-01-01

    An investigation has been conducted into the lightning characteristics of the NASA F106B thunderstorm research aircraft. The investigation includes analysis of measured data from the aircraft in the time and frequency domains. Linear and nonlinear computer modelling has also been performed. In addition, new computer tools have been developed, including a new enhanced nonlinear air breakdown model, and a subgrid model useful for analyzing fine details of the aircraft's geometry. Comparisons of measured and calculated electromagnetic responses of the aircraft to a triggered lightning environment are presented.

  10. Use of LANDSAT images of vegetation cover to estimate effective hydraulic properties of soils

    NASA Technical Reports Server (NTRS)

    Eagleson, Peter S.; Jasinski, Michael F.

    1988-01-01

    This work focuses on the characterization of natural, spatially variable, semivegetated landscapes using a linear, stochastic, canopy-soil reflectance model. A first application of the model was the investigation of the effects of subpixel and regional variability of scenes on the shape and structure of red-infrared scattergrams. Additionally, the model was used to investigate the inverse problem, the estimation of subpixel vegetation cover, given only the scattergrams of simulated satellite scale multispectral scenes. The major aspects of that work, including recent field investigations, are summarized.
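
    The linear canopy-soil mixing idea behind such a reflectance model can be sketched in a few lines; the end-member reflectances below are invented for illustration, not values from the study:

```python
# A semivegetated pixel's reflectance as a cover-weighted linear mix of canopy
# and soil end-members, and the inverse problem of recovering subpixel cover f
# from one band. End-member values are hypothetical.
canopy = {"red": 0.05, "nir": 0.45}
soil   = {"red": 0.25, "nir": 0.30}

def mix(f):
    """Pixel reflectance for vegetation cover fraction f in [0, 1]."""
    return {b: f * canopy[b] + (1 - f) * soil[b] for b in ("red", "nir")}

def invert_cover(pixel):
    # Solve pixel_red = f*canopy_red + (1-f)*soil_red for f.
    return (pixel["red"] - soil["red"]) / (canopy["red"] - soil["red"])

f_hat = invert_cover(mix(0.6))   # recovers the cover fraction used to mix
```

    With noise and spatially variable end-members, scatter in the red-infrared plane replaces these exact lines, which is the scattergram structure the abstract examines.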

  11. The Development of Sensitivity to Grammatical Violations in American Sign Language: Native versus Nonnative Signers

    ERIC Educational Resources Information Center

    Novogrodsky, Rama; Henner, Jon; Caldwell-Harris, Catherine; Hoffmeister, Robert

    2017-01-01

    Factors influencing native and nonnative signers' syntactic judgment ability in American Sign Language (ASL) were explored for 421 deaf students aged 7;6-18;5. Predictors for syntactic knowledge were chronological age, age of entering a school for the deaf, gender, and additional learning disabilities. Mixed-effects linear modeling analysis…

  12. Identity Profiles in Lesbian, Gay, and Bisexual Youth: The Role of Family Influences

    ERIC Educational Resources Information Center

    Bregman, Hallie R.; Malik, Neena M.; Page, Matthew J. L.; Makynen, Emily; Lindahl, Kristin M.

    2013-01-01

    Sexual identity development is a central task of adolescence and young adulthood and can be especially challenging for sexual minority youth. Recent research has moved from a stage model of identity development in lesbian, gay, and bisexual (LGB) youth to examining identity in a non-linear, multidimensional manner. In addition, although families…

  13. Development of a nonlinear model for the prediction of response times of glucose affinity sensors using concanavalin A and dextran and the development of a differential osmotic glucose affinity sensor

    NASA Astrophysics Data System (ADS)

    Reis, Louis G.

    With the increasing prevalence of diabetes in the United States and worldwide, blood glucose monitoring must be accurate and reliable. Current enzymatic sensors have numerous disadvantages that make them unreliable and unfavorable among patients. Recent research in glucose affinity sensors corrects some of the problems that enzymatic sensors experience. Dextran and concanavalin A are two of the more common components used in glucose affinity sensors. When these sensors were first explored, a model was derived to predict the response time of a glucose affinity sensor using concanavalin A and dextran. However, the model assumed the system was linear and fell short of calculating times representative of the response times determined through experimental tests with the sensors. In this work, a new model that uses the Stokes-Einstein Equation to demonstrate the nonlinear behavior of the glucose affinity assay was developed to predict the response times of similar glucose affinity sensors. In addition to the device tested by the original linear model, additional devices were identified and tested with the proposed model. The nonlinear model was designed to accommodate the many different variations between systems. The proposed model was able to accurately calculate response times for sensors using the concanavalin A-dextran affinity assay with respect to the experimentally reported times by the independent research groups. Parameter studies using the nonlinear model were able to identify possible setbacks that could compromise the response of the system. Specifically, the model showed that the improper use of asymmetrical membranes could increase the response time by 20% or more as the device is miniaturized. The model also demonstrated that systems using the concanavalin A-dextran assay would experience higher response times in the hypoglycemic range. This work attempted to replicate and improve an osmotic glucose affinity sensor. 
The system was designed to negate additional effects that could cause artifacts or irregular readings such as external osmotic differences and external pressure differences. However, the experimental setup and execution faced numerous setbacks that highlighted the additional difficulty that sensors using asymmetrical ceramic membranes and the concanavalin A-dextran affinity assay may experience.
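
    For reference, the Stokes-Einstein relation the model builds on, D = k_B·T / (6·π·η·r), can be evaluated directly. The temperature, viscosity, and hydrodynamic radius below are generic placeholders (in the actual assay the effective viscosity would itself vary with binding state, which is the source of the nonlinearity), not parameters from this work:

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_D(T, eta, r):
    """Diffusion coefficient (m^2/s) of a sphere of radius r (m) in a fluid of
    viscosity eta (Pa*s) at absolute temperature T (K)."""
    return kB * T / (6.0 * math.pi * eta * r)

# Placeholder values: body temperature, water-like viscosity, ~2 nm solute.
D = stokes_einstein_D(310.0, 1.0e-3, 2.0e-9)
```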

  14. An exponential decay model for mediation.

    PubMed

    Fritz, Matthew S

    2014-10-01

    Mediation analysis is often used to investigate mechanisms of change in prevention research. Results finding mediation are strengthened when longitudinal data are used because of the need for temporal precedence. Current longitudinal mediation models have focused mainly on linear change, but many variables in prevention change nonlinearly across time. The most common solution to nonlinearity is to add a quadratic term to the linear model, but this can lead to the use of the quadratic function to explain all nonlinearity, regardless of theory and the characteristics of the variables in the model. The current study describes the problems that arise when quadratic functions are used to describe all nonlinearity and how the use of nonlinear functions, such as exponential decay, addresses many of these problems. In addition, nonlinear models provide several advantages over polynomial models including usefulness of parameters, parsimony, and generalizability. The effects of using nonlinear functions for mediation analysis are then discussed and a nonlinear growth curve model for mediation is presented. An empirical example using data from a randomized intervention study is then provided to illustrate the estimation and interpretation of the model. Implications, limitations, and future directions are also discussed.
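
    The qualitative contrast being drawn can be seen numerically with illustrative parameters: an exponential-decay trajectory y(t) = asym + (y0 − asym)·exp(−rate·t) approaches its asymptote monotonically, whereas the quadratic through points of the same trajectory has a positive leading coefficient and so must eventually curve back upward:

```python
import math

asym, y0, rate = 2.0, 10.0, 0.5   # illustrative parameters only

def decay(t):
    return asym + (y0 - asym) * math.exp(-rate * t)

# Exponential decay falls monotonically toward its asymptote...
vals = [decay(t) for t in range(20)]
monotone = all(a > b for a, b in zip(vals, vals[1:]))

# ...while the interpolating quadratic through three of its points opens upward
# (second divided difference > 0), so it turns around outside the fitted range.
t0, t1, t2 = 0.0, 4.0, 8.0
a_quad = ((decay(t2) - decay(t0)) / (t2 - t0)
          - (decay(t1) - decay(t0)) / (t1 - t0)) / (t2 - t1)
```

    This is the interpretability point in the abstract: the decay parameters (asymptote, rate) are directly meaningful, while the quadratic's implied rebound is an artifact of the functional form.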

  15. Forecasting Container Throughput at the Doraleh Port in Djibouti through Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Mohamed Ismael, Hawa; Vandyck, George Kobina

    The Doraleh Container Terminal (DCT) located in Djibouti has been noted as the most technologically advanced container terminal on the African continent. DCT's strategic location at the crossroads of the main shipping lanes connecting Asia, Africa and Europe puts it in a unique position to provide important shipping services to vessels plying that route. This paper aims to forecast container throughput through the Doraleh Container Port in Djibouti by Time Series Analysis. A selection of univariate forecasting models has been used, namely Triple Exponential Smoothing Model, Grey Model and Linear Regression Model. By utilizing the above three models and their combination, the forecast of container throughput through the Doraleh port was realized. A comparison of the different forecasting results of the three models, in addition to the combination forecast, is then undertaken, based on the commonly used evaluation criteria Mean Absolute Deviation (MAD) and Mean Absolute Percentage Error (MAPE). The study found that the Linear Regression forecasting Model was the best prediction method for forecasting the container throughput, since its forecast error was the smallest. Based on the regression model, a ten-year forecast for container throughput at DCT has been made.
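
    A trend-regression forecast of the kind the study settles on can be sketched as follows; the throughput series below is synthetic and roughly linear, not DCT's actual figures:

```python
# Least-squares fit of y = a + b*t to an annual throughput series, then a
# ten-year extrapolation. Data are synthetic placeholders.
years = list(range(2009, 2019))
tput = [300 + 25 * (y - 2009) + ((-1) ** y) * 5 for y in years]  # made-up TEU ('000s)

n = len(years)
tbar = sum(years) / n
ybar = sum(tput) / n
b = sum((t - tbar) * (y - ybar) for t, y in zip(years, tput)) / \
    sum((t - tbar) ** 2 for t in years)
a = ybar - b * tbar

forecast = {y: a + b * y for y in range(2019, 2029)}  # ten-year extrapolation
```

    MAD and MAPE over a held-out portion of the series would then be used, as in the paper, to compare this against the smoothing and grey-model alternatives.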


    PubMed

    O'Neill, William; Penn, Richard; Werner, Michael; Thomas, Justin

    2015-06-01

    Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models being described by non-homogeneous, linear, stationary, ordinary differential equations. In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. We show gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Our modeling method applies to any linear, stationary, partial differential equation and the method is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions could be made within a class. Image models can be estimated in milliseconds which translate to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible.
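
    The autoregressive partial-difference idea can be illustrated with a toy 2D model and OLS, the estimation machinery the abstract names. The model form and coefficients below are synthetic, not the paper's image models:

```python
import random

# Generate an image from a hypothetical 2D autoregressive partial-difference
# equation I[i,j] = a*I[i-1,j] + b*I[i,j-1] + e, then recover (a, b) by OLS.
random.seed(3)
R, C = 60, 60
a_true, b_true = 0.5, 0.3
img = [[0.0] * C for _ in range(R)]
for i in range(R):
    for j in range(C):
        up = img[i - 1][j] if i else 0.0
        left = img[i][j - 1] if j else 0.0
        img[i][j] = a_true * up + b_true * left + random.gauss(0.0, 1.0)

# OLS normal equations for the two regressors (up and left neighbors).
Suu = Sll = Sul = Suy = Sly = 0.0
for i in range(1, R):
    for j in range(1, C):
        u, l, y = img[i - 1][j], img[i][j - 1], img[i][j]
        Suu += u * u; Sll += l * l; Sul += u * l
        Suy += u * y; Sly += l * y
det = Suu * Sll - Sul * Sul
a_hat = (Suy * Sll - Sly * Sul) / det
b_hat = (Sly * Suu - Suy * Sul) / det
```

    In the paper's setting the estimated coefficients (and their significance) become the image features used to classify scans; here they simply recover the generating parameters.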

  18. SU-F-J-138: An Extension of PCA-Based Respiratory Deformation Modeling Via Multi-Linear Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Pitsianis, N

    Purpose: To address and lift the limited degree of freedom (DoF) of globally bilinear motion components such as those based on principal components analysis (PCA), for encoding and modeling volumetric deformation motion. Methods: We provide a systematic approach to obtaining a multi-linear decomposition (MLD) and associated motion model from deformation vector field (DVF) data. We had previously introduced MLD for capturing multi-way relationships between DVF variables, without being restricted by the bilinear component format of PCA-based models. PCA-based modeling is commonly used for encoding patient-specific deformation as per planning 4D-CT images, and aiding on-board motion estimation during radiotherapy. However, the bilinear space-time decomposition inherently limits the DoF of such models by the small number of respiratory phases. While this limit is not reached in model studies using analytical or digital phantoms with low-rank motion, it compromises modeling power in the presence of relative motion, asymmetries and hysteresis, etc., which are often observed in patient data. Specifically, a low-DoF model will spuriously couple incoherent motion components, compromising its adaptability to on-board deformation changes. By the multi-linear format of extracted motion components, MLD-based models can encode higher-DoF deformation structure. Results: We conduct mathematical and experimental comparisons between PCA- and MLD-based models. A set of temporally-sampled analytical trajectories provides a synthetic, high-rank DVF; trajectories correspond to respiratory and cardiac motion factors, including different relative frequencies and spatial variations. Additionally, a digital XCAT phantom is used to simulate a lung lesion deforming incoherently with respect to the body, which adheres to a simple respiratory trend. In both cases, coupling of incoherent motion components due to a low model DoF is clearly demonstrated. 
Conclusion: Multi-linear decomposition can enable decoupling of distinct motion factors in high-rank DVF measurements. This may improve motion model expressiveness and adaptability to on-board deformation, aiding model-based image reconstruction for target verification. NIH Grant No. R01-184173.

  19. Generalized concentration addition: a method for examining mixtures containing partial agonists.

    PubMed

    Howard, Gregory J; Webster, Thomas F

    2009-08-07

    Environmentally relevant toxic exposures often consist of simultaneous exposure to multiple agents. Methods to predict the expected outcome of such combinations are critical both to risk assessment and to an accurate judgment of whether combinations are synergistic or antagonistic. Concentration addition (CA) has commonly been used to assess the presence of synergy or antagonism in combinations of similarly acting chemicals, and to predict effects of combinations of such agents. CA has the advantage of clear graphical interpretation: Curves of constant joint effect (isoboles) must be negatively sloped straight lines if the mixture is concentration additive. However, CA cannot be directly used to assess combinations that include partial agonists, although such agents are of considerable interest. Here, we propose a natural extension of CA to a functional form that may be applied to mixtures including full agonists and partial agonists. This extended definition, for which we suggest the term "generalized concentration addition," encompasses linear isoboles with slopes of any sign. We apply this approach to the simple example of agents with dose-response relationships described by Hill functions with slope parameter n=1. The resulting isoboles are in all cases linear, with negative, zero and positive slopes. Using simple mechanistic models of ligand-receptor systems, we show that the same isobole pattern and joint effects are generated by modeled combinations of full and partial agonists. Special cases include combinations of two full agonists and a full agonist plus a competitive antagonist.
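
    For the n = 1 Hill case the abstract treats, the generalized-CA mixture effect admits a closed form: with f_i(c) = α_i·c/(K_i + c), the inverse is f_i⁻¹(E) = K_i·E/(α_i − E), and substituting into Σ c_i / f_i⁻¹(E) = 1 gives an equation linear in E. The sketch below implements that algebra with illustrative parameter values:

```python
def gca_effect(concs, alphas, Ks):
    """Generalized concentration addition for Hill curves f_i(c) = alpha_i*c/(K_i+c)
    with slope 1: solving sum_i c_i / f_i^{-1}(E) = 1 for E yields
    E = sum(alpha_i*c_i/K_i) / (1 + sum(c_i/K_i))."""
    num = sum(a * c / K for c, a, K in zip(concs, alphas, Ks))
    den = 1.0 + sum(c / K for c, K in zip(concs, Ks))
    return num / den

# Full agonist (alpha = 1) combined with a partial agonist (alpha = 0.4);
# potencies K = 1 are illustrative.
solo = gca_effect([100.0, 0.0], [1.0, 0.4], [1.0, 1.0])    # full agonist alone
mixed = gca_effect([100.0, 100.0], [1.0, 0.4], [1.0, 1.0]) # partial agonist added
```

    The comparison reproduces the qualitative behavior the framework is built to capture: adding a partial agonist to a near-maximal dose of a full agonist pulls the response down toward the partial agonist's ceiling, the positively sloped isobole case that plain CA cannot represent.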

  20. Restricted DCJ-indel model: sorting linear genomes with DCJ and indels

    PubMed Central

    2012-01-01

    Background The double-cut-and-join (DCJ) is a model that is able to efficiently sort a genome into another, generalizing the typical mutations (inversions, fusions, fissions, translocations) to which genomes are subject, but allowing the existence of circular chromosomes at the intermediate steps. In the general model many circular chromosomes can coexist in some intermediate step. However, when the compared genomes are linear, it is more plausible to use the so-called restricted DCJ model, in which a circular chromosome is reincorporated immediately after its creation. These two consecutive DCJ operations, which create and reincorporate a circular chromosome, mimic a transposition or a block-interchange. When the compared genomes have the same content, it is known that the genomic distance for the restricted DCJ model is the same as the distance for the general model. If the genomes have unequal contents, in addition to DCJ it is necessary to consider indels, which are insertions and deletions of DNA segments. Linear-time algorithms were proposed to compute the distance and to find a sorting scenario in a general, unrestricted DCJ-indel model that considers DCJ and indels. Results In the present work we consider the restricted DCJ-indel model for sorting linear genomes with unequal contents. We allow DCJ operations and indels with the following constraint: if a circular chromosome is created by a DCJ, it has to be reincorporated in the next step (no other DCJ or indel can be applied between the creation and the reincorporation of a circular chromosome). We then develop a sorting algorithm and give a tight upper bound for the restricted DCJ-indel distance. Conclusions We have given a tight upper bound for the restricted DCJ-indel distance. The question whether this bound can be reduced so that both the general and the restricted DCJ-indel distances are equal remains open. PMID:23281630

  1. SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit

    PubMed Central

    Chu, Annie; Cui, Jenny; Dinov, Ivo D.

    2011-01-01

The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons: the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis-test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), and we hope it will contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model as well as general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added, these resources are constantly improved. 
The reader is strongly encouraged to check the SOCR site for the most up-to-date information and newly added models. PMID:21546994

  2. A new method to approximate load-displacement relationships of spinal motion segments for patient-specific multi-body models of scoliotic spine.

    PubMed

    Jalalian, Athena; Tay, Francis E H; Arastehfar, Soheil; Liu, Gabriel

    2017-06-01

Load-displacement relationships of spinal motion segments are crucial factors in characterizing the stiffness of scoliotic spine models to mimic the spine's responses to loads. Although nonlinear approaches to approximating the relationships can be superior to linear ones, little attention has been paid to deriving personalized nonlinear load-displacement relationships in previous studies. A method is developed for nonlinear approximation of load-displacement relationships of spinal motion segments to assist in characterizing in vivo the stiffness of spine models. We propose approximation by tangent functions and focus on rotational displacements in the lateral direction. The tangent functions are characterized using a lateral bending test. A multi-body model was characterized for 18 patients and utilized to simulate four spine positions: right bending, left bending, neutral, and traction. The same was done using linear functions to assess the performance of the proposed tangent function in comparison with the linear function. The root-mean-square error (RMSE) of the displacements estimated by the tangent functions was 44 % smaller than that of the linear functions. This shows the ability of our tangent function to approximate the relationships over the range of infinitesimal to large displacements involved in the spine's movement to the four positions. In addition, the models based on the tangent functions yielded 67, 55, and 39 % smaller RMSEs for Ferguson angles, locations of vertebrae, and orientations of vertebrae, respectively, implying better estimates of spine responses to loads. Overall, it can be concluded that our method for approximating load-displacement relationships of spinal motion segments can offer good estimates of scoliotic spine stiffness.
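A minimal sketch of the tangent-function idea, with invented bending data and an assumed limiting-rotation scale (not the authors' parameters): for a form M = a·tan(θ/θ_ref), the single scale factor a has a closed-form least-squares estimate.

```python
import math

# Hypothetical lateral-bending data: rotation (rad) vs. applied moment (N*m).
# The tangent form M = a * tan(theta / theta_ref) stiffens sharply as the
# rotation approaches theta_ref * pi/2, unlike a straight-line fit.
theta = [0.02, 0.05, 0.10, 0.15, 0.20]
moment = [0.9, 2.4, 5.3, 9.1, 14.6]
theta_ref = 0.25  # assumed limiting-rotation scale (illustrative only)

# Closed-form least squares for the single scale parameter:
# a = sum(M_i * t_i) / sum(t_i^2), with t_i = tan(theta_i / theta_ref)
t = [math.tan(x / theta_ref) for x in theta]
a = sum(m * ti for m, ti in zip(moment, t)) / sum(ti * ti for ti in t)

def stiffness_model(x):
    """Approximated nonlinear load-displacement relationship."""
    return a * math.tan(x / theta_ref)

rmse = math.sqrt(sum((stiffness_model(x) - m) ** 2
                     for x, m in zip(theta, moment)) / len(theta))
```

In practice θ_ref would also be estimated (e.g. by a grid search), but fixing it keeps the fit linear in the remaining parameter.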

  3. Pattern Recognition Analysis of Age-Related Retinal Ganglion Cell Signatures in the Human Eye

    PubMed Central

    Yoshioka, Nayuta; Zangerl, Barbara; Nivison-Smith, Lisa; Khuu, Sieu K.; Jones, Bryan W.; Pfeiffer, Rebecca L.; Marc, Robert E.; Kalloniatis, Michael

    2017-01-01

    Purpose To characterize macular ganglion cell layer (GCL) changes with age and provide a framework to assess changes in ocular disease. This study used data clustering to analyze macular GCL patterns from optical coherence tomography (OCT) in a large cohort of subjects without ocular disease. Methods Single eyes of 201 patients evaluated at the Centre for Eye Health (Sydney, Australia) were retrospectively enrolled (age range, 20–85); 8 × 8 grid locations obtained from Spectralis OCT macular scans were analyzed with unsupervised classification into statistically separable classes sharing common GCL thickness and change with age. The resulting classes and gridwise data were fitted with linear and segmented linear regression curves. Additionally, normalized data were analyzed to determine regression as a percentage. Accuracy of each model was examined through comparison of predicted 50-year-old equivalent macular GCL thickness for the entire cohort to a true 50-year-old reference cohort. Results Pattern recognition clustered GCL thickness across the macula into five to eight spatially concentric classes. F-test demonstrated segmented linear regression to be the most appropriate model for macular GCL change. The pattern recognition–derived and normalized model revealed less difference between the predicted macular GCL thickness and the reference cohort (average ± SD 0.19 ± 0.92 and −0.30 ± 0.61 μm) than a gridwise model (average ± SD 0.62 ± 1.43 μm). Conclusions Pattern recognition successfully identified statistically separable macular areas that undergo a segmented linear reduction with age. This regression model better predicted macular GCL thickness. The various unique spatial patterns revealed by pattern recognition combined with core GCL thickness data provide a framework to analyze GCL loss in ocular disease. PMID:28632847
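The segmented-linear-regression idea used above can be caricatured with a brute-force breakpoint search; the ages, thicknesses, and candidate breakpoints below are invented, not the study's data.

```python
def ols(xs, ys):
    """Simple least-squares line fit; returns (slope, intercept, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    intercept = my - slope * mx
    sse = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    return slope, intercept, sse

def segmented_fit(ages, thickness, candidates):
    """Grid-search a breakpoint; fit one line per segment, minimize total SSE."""
    best = None
    for bp in candidates:
        left = [(a, t) for a, t in zip(ages, thickness) if a <= bp]
        right = [(a, t) for a, t in zip(ages, thickness) if a > bp]
        if len(left) < 3 or len(right) < 3:
            continue
        _, _, sse_l = ols(*zip(*left))
        _, _, sse_r = ols(*zip(*right))
        if best is None or sse_l + sse_r < best[1]:
            best = (bp, sse_l + sse_r)
    return best

# Hypothetical GCL thickness (um): flat until ~age 50, then a linear decline.
ages = list(range(20, 81, 5))
thickness = [50.0 if a <= 50 else 50.0 - 0.2 * (a - 50) for a in ages]
breakpoint_age, total_sse = segmented_fit(ages, thickness, range(30, 71, 5))
```

A production fit would enforce continuity at the breakpoint and use an F-test, as the study does, to compare against a single line.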

  4. Simplified large African carnivore density estimators from track indices.

    PubMed

    Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J

    2016-01-01

The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional linear model with intercept may not pass through zero, but may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. We carried out simple linear regression with intercept and simple linear regression through the origin, and used the confidence interval for ß in the linear model y = αx + ß, the Standard Error of Estimate, the Mean Squares Residual and the Akaike Information Criterion to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for ß, and the null hypothesis that ß = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. The Akaike Information Criterion showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. 
The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate the models and data to test for a non-linear relationship between track indices and true density at low densities.
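The model comparison can be sketched as follows, with invented density/track pairs rather than the survey data: fit both forms by least squares and compare them with AIC, where the through-origin model spends one fewer parameter.

```python
import math

def fit_with_intercept(x, y):
    """OLS line y = a + b*x; returns (a, b, sse, k) with k = parameter count."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return a, b, sse, 3  # slope + intercept + error variance

def fit_through_origin(x, y):
    """OLS line y = b*x; returns (b, sse, k)."""
    b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    sse = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))
    return b, sse, 2

def aic(n, sse, k):
    """Gaussian-likelihood AIC up to an additive constant."""
    return n * math.log(sse / n) + 2 * k

# Hypothetical carnivore density (per 100 km^2) vs. observed track density.
dens = [0.3, 0.8, 1.5, 2.2, 3.0, 4.1]
tracks = [1.1, 2.5, 4.8, 7.4, 9.6, 13.5]
a, b1, sse1, k1 = fit_with_intercept(dens, tracks)
b0, sse0, k0 = fit_through_origin(dens, tracks)
aic_int, aic_orig = aic(len(dens), sse1, k1), aic(len(dens), sse0, k0)
```

With data generated near y = 3.26x, the intercept adds nothing, so the origin model wins on AIC, mirroring the paper's conclusion.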

  5. Sliding-Mode Control Applied for Robust Control of a Highly Unstable Aircraft

    NASA Technical Reports Server (NTRS)

    Vetter, Travis Kenneth

    2002-01-01

An investigation into the application of an observer-based sliding mode controller for robust control of a highly unstable aircraft, and methods of compensating for actuator dynamics, is performed. After a brief overview of some reconfigurable controllers, sliding mode control (SMC) is selected because of its invariance properties and its lack of need for parameter identification. SMC is reviewed and issues with parasitic dynamics, which cause system instability, are addressed. Utilizing sliding manifold boundary layers, the nonlinear control is converted to a linear control, and sliding manifold design is performed in the frequency domain. An additional feedback form of model reference hedging is employed, which is similar to a prefilter and has large benefits to system performance. The effects of including actuator dynamics in the designed plant are heavily investigated. Multiple Simulink models of the full longitudinal dynamics and wing deflection modes of the forward-swept aeroelastic vehicle (FSAV) are constructed, along with linear state-space models to analyze the effects of various system parameters. The FSAV has a pole at +7 rad/sec and is non-minimum phase. The use of 'model actuators' in the feedback path, with varying designs, is heavily investigated for the resulting effects on plant robustness and tolerance to actuator failure. The use of redundant actuators is also explored, and improved robustness is shown. All models are simulated with severe failures; excellent tracking, task-dependent handling qualities, and low pilot-induced-oscillation tendency are shown.
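A toy illustration of sliding mode control with a boundary layer, using a scalar unstable plant with the +7 rad/s pole mentioned above; the gain, layer width, and time step are invented, and the real design is multivariable with observer and actuator dynamics.

```python
# Toy regulation sketch: unstable first-order plant xdot = a*x + u with
# a = 7 (the +7 rad/s pole). The sliding surface is s = x; a boundary
# layer of width phi replaces sign(s) with a saturation, so the switching
# control becomes linear near the surface and chattering is avoided.
a, k, phi, dt = 7.0, 30.0, 0.05, 1e-3

def sat(v):
    """Saturation function standing in for sign(s) inside the layer."""
    return max(-1.0, min(1.0, v))

x, history = 1.0, []
for _ in range(3000):  # 3 s of simulated time, explicit Euler
    u = -k * sat(x / phi)  # sliding-mode control with boundary layer
    x += (a * x + u) * dt
    history.append(x)

final_x = abs(history[-1])
```

Outside the layer the control is a constant push toward the surface (it must dominate the unstable drift, here k > 7·|x|); inside the layer the closed loop is a fast stable linear system.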

  6. Optogenetic stimulation of a meso-scale human cortical model

    NASA Astrophysics Data System (ADS)

    Selvaraj, Prashanth; Szeri, Andrew; Sleigh, Jamie; Kirsch, Heidi

    2015-03-01

Neurological phenomena like sleep and seizures depend not only on the activity of individual neurons, but on the dynamics of neuron populations as well. Meso-scale models of cortical activity provide a means to study neural dynamics at the level of neuron populations. Additionally, they offer a safe and economical way to test the effects and efficacy of stimulation techniques on the dynamics of the cortex. Here, we use a physiologically relevant meso-scale model of the cortex to study the hypersynchronous activity of neuron populations during epileptic seizures. The model consists of a set of stochastic, highly non-linear partial differential equations. Next, we use optogenetic stimulation to control seizures in a hyperexcited cortex, and to induce seizures in a normally functioning cortex. The high spatial and temporal resolution this method offers makes a strong case for the use of optogenetics in treating meso-scale cortical disorders such as epileptic seizures. We use bifurcation analysis to investigate the effect of optogenetic stimulation in the meso-scale model, and its efficacy in suppressing the non-linear dynamics of seizures.

  7. A sEMG model with experimentally based simulation parameters.

    PubMed

    Wheeler, Katherine A; Shimada, Hiroshima; Kumar, Dinesh K; Arjunan, Sridhar P

    2010-01-01

A differential, time-invariant, surface electromyogram (sEMG) model has been implemented. While it is based on existing EMG models, the novelty of this implementation is that it assigns more accurate distributions of variables to create realistic motor unit (MU) characteristics. Variables such as muscle fibre conduction velocity, jitter (the change in the interpulse interval between subsequent action potential firings) and motor unit size have been considered to follow normal distributions about an experimentally obtained mean. In addition, motor unit firing frequencies have been considered to have non-linear, type-based distributions that are in accordance with experimental results. Motor unit recruitment thresholds have been considered to be related to the MU type. The model has been used to simulate single-channel differential sEMG signals from voluntary, isometric contractions of the biceps brachii muscle. The model has been verified experimentally on three subjects. Comparison between simulated signals and experimental recordings shows that the Root Mean Square (RMS) increases linearly with force in both cases. The simulated signals also show values and rates of change of RMS similar to the experimental signals.

  8. Decomposition and model selection for large contingency tables.

    PubMed

    Dahinden, Corinne; Kalisch, Markus; Bühlmann, Peter

    2010-04-01

Large contingency tables summarizing categorical variables arise in many areas. One example is in biology, where large numbers of biomarkers are cross-tabulated according to their discrete expression level. Interactions of the variables are of great interest and are generally studied with log-linear models. The structure of a log-linear model can be visually represented by a graph from which the conditional independence structure can then be easily read off. However, since the number of parameters in a saturated model grows exponentially in the number of variables, this generally comes with a heavy computational burden. Even if we restrict ourselves to models of lower-order interactions or other sparse structures, we are faced with the problem of a large number of cells which play the role of sample size. This is in sharp contrast to high-dimensional regression or classification procedures because, in addition to a high-dimensional parameter, we also have to deal with the analogue of a huge sample size. Furthermore, high-dimensional tables naturally feature a large number of sampling zeros which often leads to the nonexistence of the maximum likelihood estimate. We therefore present a decomposition approach, where we first divide the problem into several lower-dimensional problems and then combine these to form a global solution. Our methodology is computationally feasible for log-linear interaction models with many categorical variables, some or all of which may have many levels. We demonstrate the proposed method on simulated data and apply it to a bio-medical problem in cancer research.
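The simplest member of the log-linear family, the two-way independence model, has closed-form fitted counts and a deviance statistic; the 2×2 table below is invented for illustration.

```python
import math

# For a two-way table, the independence model log(mu_ij) = u + u_i + u_j
# has fitted counts m_ij = row_i * col_j / total, and the deviance G^2
# compares it against the saturated model (df = (I-1)(J-1)).
table = [[25, 15], [10, 30]]  # hypothetical 2x2 biomarker cross-tabulation
rows = [sum(r) for r in table]
cols = [sum(c) for c in zip(*table)]
total = sum(rows)

fitted = [[ri * cj / total for cj in cols] for ri in rows]
g2 = 2 * sum(o * math.log(o / e)
             for row_o, row_e in zip(table, fitted)
             for o, e in zip(row_o, row_e) if o > 0)
```

Higher-order log-linear models need iterative proportional fitting or a GLM solver; the sampling-zeros problem the abstract describes shows up exactly in the `o > 0` guard, where zero cells contribute nothing to G² but can break maximum likelihood estimation.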

  9. Model test on the relationship feed energy and protein ratio to the production and quality of milk protein

    NASA Astrophysics Data System (ADS)

    Hartanto, R.; Jantra, M. A. C.; Santosa, S. A. B.; Purnomoadi, A.

    2018-01-01

The purpose of this research was to find an appropriate relationship model between the feed energy and protein ratio and the amount and quality of milk protein produced. This research was conducted at Getasan Sub-district, Semarang Regency, Central Java Province, Indonesia, using 40 samples (Holstein Friesian cattle, lactation period II-III and lactation month 3-4). Data were analyzed using linear and quadratic regressions to predict the production and quality of milk protein from the feed energy and protein ratio that describes the diet. The significance of the model was tested using analysis of variance. The coefficient of determination (R2), residual variance (RV) and root mean square prediction error (RMSPE) were reported for the developed equations as indicators of the goodness of model fit. The results showed no relationship for milk protein (kg), milk casein (%), milk casein (kg) or milk urea N (mg/dl) as functions of CP/TDN. A significant relationship was observed for milk production (L or kg) and milk protein (%) as functions of CP/TDN, in both linear and quadratic models. In addition, a quadratic change in milk production (L) (P = 0.003), milk production (kg) (P = 0.003) and milk protein concentration (%) (P = 0.026) was observed with increasing CP/TDN. It can be concluded that the quadratic equation was the better-fitting model for this research, because it has a larger R2, smaller RV and smaller RMSPE than the linear equation.
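The linear-versus-quadratic comparison can be sketched with ordinary least squares via normal equations; the CP/TDN ratios and yields below are invented, not the study's measurements.

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via normal equations (tiny systems only)."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, n))) / A[r][r]
    return coef  # coef[i] multiplies x**i

def r_squared(xs, ys, coef):
    pred = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, pred))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical CP/TDN ratios vs. milk yield (kg/day) with an interior peak,
# which a quadratic can capture but a straight line cannot.
cp_tdn = [0.18, 0.20, 0.22, 0.24, 0.26, 0.28]
milk = [14.1, 15.8, 16.9, 17.2, 16.8, 15.9]
r2_linear = r_squared(cp_tdn, milk, polyfit(cp_tdn, milk, 1))
r2_quadratic = r_squared(cp_tdn, milk, polyfit(cp_tdn, milk, 2))
```

With peaked data of this kind the quadratic R² is far higher, which is the shape of result the study reports.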

  10. Modelling the spatio-temporal modulation response of ganglion cells with difference-of-Gaussians receptive fields: relation to photoreceptor response kinetics.

    PubMed

    Donner, K; Hemilä, S

    1996-01-01

Difference-of-Gaussians (DOG) models for the receptive fields of retinal ganglion cells accurately predict linear responses to both periodic stimuli (typically moving sinusoidal gratings) and aperiodic stimuli (typically circular fields presented as square-wave pulses). While the relation of spatial organization to retinal anatomy has received considerable attention, temporal characteristics have been only loosely connected to retinal physiology. Here we integrate realistic photoreceptor response waveforms into the DOG model to clarify how well a single set of physiological parameters predicts temporal aspects of linear responses to both periodic and aperiodic stimuli. Traditional filter-cascade models provide a useful first-order approximation of the single-photon response in photoreceptors. The absolute time scale of these, plus a time for retinal transmission, here construed as a fixed delay, are obtained from flash/step data. Using these values, we find that the DOG model predicts the main features of both the amplitude and phase response of linear cat ganglion cells to sinusoidal flicker. Where the simplest model formulation fails, it serves to reveal additional mechanisms. Unexpected findings are the attenuation of low temporal frequencies even in pure center-type responses, and the phase advance of the response relative to the stimulus at low frequencies. Neither can be explained by any experimentally documented cone response waveform, but both would be explained by signal differentiation, e.g. in the retinal transmission pathway, as demonstrated at least in the turtle retina.
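Because the Fourier transform of a Gaussian is again a Gaussian, the DOG model's grating response is a difference of Gaussians in spatial frequency, which produces the familiar band-pass shape. A sketch with assumed center/surround parameters (not the paper's values), ignoring area normalization:

```python
import math

# Difference-of-Gaussians spatial frequency response: the center gives a
# broad Gaussian in frequency, the wider surround a narrow one, and their
# difference attenuates low spatial frequencies (band-pass behavior).
kc, rc = 1.0, 0.5   # hypothetical center gain and radius (deg)
ks, rs = 0.8, 2.0   # hypothetical surround gain and radius (deg)

def dog_amplitude(f):
    """Linear response amplitude to a grating of spatial frequency f (c/deg)."""
    center = kc * math.exp(-(math.pi * rc * f) ** 2)
    surround = ks * math.exp(-(math.pi * rs * f) ** 2)
    return center - surround

freqs = [0.01 * i for i in range(1, 201)]  # 0.01 to 2.0 c/deg
amps = [dog_amplitude(f) for f in freqs]
peak_freq = freqs[max(range(len(amps)), key=amps.__getitem__)]
```

The low-frequency attenuation the abstract mentions is visible immediately: the response near f = 0 is kc − ks, well below the mid-frequency peak.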

  11. A new statistical approach to climate change detection and attribution

    NASA Astrophysics Data System (ADS)

    Ribes, Aurélien; Zwiers, Francis W.; Azaïs, Jean-Marc; Naveau, Philippe

    2017-01-01

We propose here a new statistical approach to climate change detection and attribution that is based on additive decomposition and simple hypothesis testing. Most current statistical methods for detection and attribution rely on linear regression models where the observations are regressed onto expected response patterns to different external forcings. These methods do not use physical information provided by climate models regarding the expected response magnitudes to constrain the estimated responses to the forcings. Climate modelling uncertainty is difficult to take into account with regression based methods and is almost never treated explicitly. As an alternative to this approach, our statistical model is only based on the additivity assumption; the proposed method does not regress observations onto expected response patterns. We introduce estimation and testing procedures based on likelihood maximization, and show that climate modelling uncertainty can easily be accounted for. Some discussion is provided on how to practically estimate the climate modelling uncertainty based on an ensemble of opportunity. Our approach is based on the "models are statistically indistinguishable from the truth" paradigm, where the difference between any given model and the truth has the same distribution as the difference between any pair of models, but other choices might also be considered. The properties of this approach are illustrated and discussed based on synthetic data. Lastly, the method is applied to the linear trend in global mean temperature over the period 1951-2010. Consistent with the last IPCC assessment report, we find that most of the observed warming over this period (+0.65 K) is attributable to anthropogenic forcings (+0.67 ± 0.12 K, 90 % confidence range), with a very limited contribution from natural forcings (-0.01 ± 0.02 K).

  12. The Multiple Correspondence Analysis Method and Brain Functional Connectivity: Its Application to the Study of the Non-linear Relationships of Motor Cortex and Basal Ganglia.

    PubMed

    Rodriguez-Sabate, Clara; Morales, Ingrid; Sanchez, Alberto; Rodriguez, Manuel

    2017-01-01

The complexity of basal ganglia (BG) interactions is often condensed into simple models, mainly based on animal data, that present BG in closed-loop cortico-subcortical circuits of excitatory/inhibitory pathways which analyze the incoming cortical data and return the processed information to the cortex. This study was aimed at identifying functional relationships in the BG motor-loop of 24 healthy subjects who provided written, informed consent and whose BOLD-activity was recorded by MRI methods. The analysis of the functional interaction between these centers by correlation techniques and multiple linear regression showed non-linear relationships which cannot be suitably addressed with these methods. The multiple correspondence analysis (MCA), an unsupervised multivariable procedure which can identify non-linear interactions, was used to study the functional connectivity of BG when subjects were at rest. Linear methods showed the different functional interactions expected according to current BG models. MCA showed additional functional interactions which were not evident when using linear methods. Seven functional configurations of BG were identified with MCA, two involving the primary motor and somatosensory cortex, one involving the deepest BG (external-internal globus pallidum, subthalamic nucleus and substantia nigra), one with the input-output BG centers (putamen and motor thalamus), two linking the input-output centers with other BG (external pallidum and subthalamic nucleus), and one linking the external pallidum and the substantia nigra. The results provide evidence that non-linear MCA and linear methods are complementary and are best used in conjunction to more fully understand the nature of functional connectivity of brain centers.

  13. Robust outer synchronization between two nonlinear complex networks with parametric disturbances and mixed time-varying delays

    NASA Astrophysics Data System (ADS)

    Zhang, Chuan; Wang, Xingyuan; Luo, Chao; Li, Junqiu; Wang, Chunpeng

    2018-03-01

In this paper, we focus on the robust outer synchronization problem between two nonlinear complex networks with parametric disturbances and mixed time-varying delays. Firstly, a general complex network model is proposed. Besides the nonlinear couplings, the network model in this paper can possess parametric disturbances, internal time-varying delay, discrete time-varying delay and distributed time-varying delay. Then, according to the robust control strategy, linear matrix inequality and Lyapunov stability theory, several outer synchronization protocols are strictly derived. Simple linear matrix controllers are designed to drive the response network to synchronize with the drive network. Additionally, our results can be applied to complex networks without parametric disturbances. Finally, by utilizing the delayed Lorenz chaotic system as the dynamics of all nodes, simulation examples are given to demonstrate the effectiveness of our theoretical results.
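Drive-response synchronization under a simple linear feedback controller can be sketched with a single pair of identical Lorenz nodes; the paper's networks are larger, delayed, and disturbed, and the gain and step size here are invented.

```python
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz vector field, used here as the node dynamics."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

# Drive node and response node coupled by the linear feedback controller
# u = k * (drive - response) applied to every state variable.
k, dt = 15.0, 0.002
drive = [1.0, 1.0, 1.0]
resp = [-8.0, 7.0, 30.0]

for _ in range(20000):  # 40 s of simulated time, explicit Euler
    fd, fr = lorenz(drive), lorenz(resp)
    coupling = [k * (d - r) for d, r in zip(drive, resp)]
    drive = [s + dt * f for s, f in zip(drive, fd)]
    resp = [s + dt * (f + c) for s, f, c in zip(resp, fr, coupling)]

sync_error = sum(abs(d - r) for d, r in zip(drive, resp))
```

With full-state coupling the error dynamics are the Lorenz variational equations shifted by −k, so any k exceeding the largest Lyapunov exponent (about 0.9) drives the synchronization error to zero despite the chaotic drive trajectory.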

  14. The role of climatic variables in winter cereal yields: a retrospective analysis.

    PubMed

    Luo, Qunying; Wen, Li

    2015-02-01

This study examined the effects of observed climate, including [CO2], on winter cereal [winter wheat (Triticum aestivum), barley (Hordeum vulgare) and oat (Avena sativa)] yields by adopting robust statistical analysis/modelling approaches (i.e. the autoregressive fractionally integrated moving average and the generalised additive model) based on long time series of historical climate data and cereal yield data at three locations (Moree, Dubbo and Wagga Wagga) in New South Wales, Australia. Research results show that (1) growing season rainfall was significantly, positively and non-linearly correlated with crop yield at all locations considered; (2) [CO2] was significantly, positively and non-linearly correlated with crop yields in all cases except wheat and barley yields at Wagga Wagga; (3) growing season maximum temperature was significantly, negatively and non-linearly correlated with crop yields at Dubbo and Moree (except for barley); and (4) radiation was only significantly correlated with oat yield at Wagga Wagga. This information will help to identify appropriate management adaptation options for dealing with the risks and taking advantage of the opportunities of climate change.

  15. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has simple mathematical content, is briefly described. This approach is a powerful mathematical tool for optimal chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves the reduction of multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than the classical HPLC method.
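The reduction of multiwavelength peak areas to analyte concentrations is, at heart, a linear least-squares problem. A sketch with an invented 5×2 sensitivity matrix and noiseless synthetic areas (none of these numbers come from the paper):

```python
# Hypothetical sensitivity matrix: peak area per unit concentration of
# EA and HCT at five wavelengths (rows: wavelengths, columns: analytes).
S = [[1.20, 0.30],
     [0.95, 0.55],
     [0.60, 0.90],
     [0.35, 1.10],
     [0.15, 1.30]]
true_c = [2.0, 3.0]  # assumed concentrations used to synthesize the signal
areas = [row[0] * true_c[0] + row[1] * true_c[1] for row in S]

# Least squares via the 2x2 normal equations (S^T S) c = S^T a.
sts = [[sum(S[i][p] * S[i][q] for i in range(5)) for q in range(2)]
       for p in range(2)]
sta = [sum(S[i][p] * areas[i] for i in range(5)) for p in range(2)]
det = sts[0][0] * sts[1][1] - sts[0][1] * sts[1][0]
c_ea = (sta[0] * sts[1][1] - sts[0][1] * sta[1]) / det
c_hct = (sts[0][0] * sta[1] - sts[1][0] * sta[0]) / det
```

Using five wavelengths for two unknowns is what makes the system overdetermined and averages out the instrumental fluctuations the abstract mentions; with noiseless data the recovery is exact.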

  16. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment including surface area, reactive site concentration, reaction rate, and extent can be predicted from field-scale grain size distribution by linearly adding reaction properties for individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The results indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2–8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
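In its simplest form the additivity idea is a mass-fraction-weighted sum over size classes; the fractions and rate constants below are illustrative only, not the study's values.

```python
# Additivity sketch: a composite sediment property predicted as the
# mass-fraction-weighted sum of properties measured per grain-size class.
fractions = {  # class: (mass fraction, rate constant in 1/h), invented
    "<2 mm":  (0.55, 0.40),
    "2-8 mm": (0.30, 0.10),
    ">8 mm":  (0.15, 0.02),
}
composite_rate = sum(f * k for f, k in fractions.values())

# Dropping the gravel class (as is often done) and renormalizing the
# remaining fractions shifts the composite estimate noticeably.
kept = {name: v for name, v in fractions.items() if name != "2-8 mm"}
total = sum(f for f, _ in kept.values())
rate_without_gravel = sum(f / total * k for f, k in kept.values())
```

The gap between the two estimates is a toy version of the study's finding that the 2–8 mm gravel fraction, though slow-reacting, is statistically significant to the composite behavior.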

  17. Annual variation in Internet keyword searches: Linking dieting interest to obesity and negative health outcomes.

    PubMed

    Markey, Patrick M; Markey, Charlotte N

    2013-07-01

    This study investigated the annual variation in Internet searches regarding dieting. Time-series analysis was first used to examine the annual trends of Google keyword searches during the past 7 years for topics related to dieting within the United States. The results indicated that keyword searches for dieting fit a consistent 12-month linear model, peaking in January (following New Year's Eve) and then linearly decreasing until surging again the following January. Additional state-level analyses revealed that the size of the December-January dieting-related keyword surge was predictive of both obesity and mortality rates due to diabetes, heart disease, and stroke.
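The 12-month pattern described above amounts to a sawtooth: a January peak followed by a roughly linear decline until the next January surge. A sketch of fitting the within-year decline, with invented monthly search volumes:

```python
# Invented relative search volume for "diet", January through December,
# and a simple OLS fit of volume against months-since-January.
volume = [100, 91, 84, 76, 70, 63, 55, 49, 41, 35, 28, 20]
months_since_jan = list(range(12))

n = len(volume)
mx = sum(months_since_jan) / n
my = sum(volume) / n
slope = sum((x - mx) * (y - my)
            for x, y in zip(months_since_jan, volume)) \
        / sum((x - mx) ** 2 for x in months_since_jan)
intercept = my - slope * mx  # fitted January level
```

The December-to-January jump (here from 20 back to about 100) is the "surge" whose size the study relates to state-level health outcomes.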

  18. Predicting the hand, foot, and mouth disease incidence using search engine query data and climate variables: an ecological study in Guangdong, China.

    PubMed

    Du, Zhicheng; Xu, Lin; Zhang, Wangjian; Zhang, Dingmei; Yu, Shicheng; Hao, Yuantao

    2017-10-06

Hand, foot, and mouth disease (HFMD) has caused a substantial burden in China, especially in Guangdong Province. Based on the enhanced surveillance system, we aimed to explore whether the addition of temperature and search engine query data improves the risk prediction of HFMD. Ecological study. Information on the confirmed cases of HFMD, climate parameters and search engine query logs was collected. A total of 1.36 million HFMD cases were identified from the surveillance system during 2011-2014. Analyses were conducted at aggregate level and no confidential information was involved. A seasonal autoregressive integrated moving average (ARIMA) model with external variables (ARIMAX) was used to predict the HFMD incidence from 2011 to 2014, taking into account temperature and search engine query data (Baidu Index, BDI). Statistics of goodness-of-fit and precision of prediction were used to compare models (1) based on surveillance data only, and with the addition of (2) temperature, (3) BDI, and (4) both temperature and BDI. A high correlation between HFMD incidence and BDI (r = 0.794, p < 0.001) or temperature (r = 0.657, p < 0.001) was observed using both the time series plot and the correlation matrix. A linear effect of BDI (without lag) and a non-linear effect of temperature (1 week lag) on HFMD incidence were found in a distributed lag non-linear model. Compared with the model based on surveillance data only, the ARIMAX model including BDI reached the best goodness-of-fit with an Akaike information criterion (AIC) value of -345.332, whereas the model including both BDI and temperature had the most accurate prediction in terms of the mean absolute percentage error (MAPE) of 101.745%. An ARIMAX model incorporating search engine query data significantly improved the prediction of HFMD. Further studies are warranted to examine whether including search engine query data also improves the prediction of other infectious diseases in other settings. 
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
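A full ARIMAX fit needs a dedicated time-series library; a stripped-down ARX(1) analogue shows the same structure (lagged incidence plus an external regressor) estimated by ordinary least squares. The exogenous index standing in for the BDI and all coefficients below are invented.

```python
import random

random.seed(7)
# Synthetic data: y_t = c + phi*y_{t-1} + beta*x_t + noise, where x_t is
# an invented search index playing the role of the BDI.
phi_true, beta_true, c_true = 0.6, 0.4, 1.0
x = [random.uniform(0, 5) for _ in range(200)]
y = [2.0]
for t in range(1, 200):
    y.append(c_true + phi_true * y[-1] + beta_true * x[t]
             + random.gauss(0, 0.05))

# OLS for y_t = c + phi*y_{t-1} + beta*x_t via the 3x3 normal equations.
rows = [(1.0, y[t - 1], x[t]) for t in range(1, 200)]
targets = y[1:]
A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
b = [sum(r[i] * yt for r, yt in zip(rows, targets)) for i in range(3)]

# Gaussian elimination with partial pivoting, then back substitution.
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, 3):
        f = A[r][col] / A[col][col]
        A[r] = [ar - f * ac for ar, ac in zip(A[r], A[col])]
        b[r] -= f * b[col]
coef = [0.0, 0.0, 0.0]
for r in (2, 1, 0):
    coef[r] = (b[r] - sum(A[r][c2] * coef[c2]
                          for c2 in range(r + 1, 3))) / A[r][r]
c_hat, phi_hat, beta_hat = coef
```

The study's actual model additionally handles seasonality, differencing, and the non-linear lagged temperature effect, none of which this sketch attempts.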

  19. Predicting the hand, foot, and mouth disease incidence using search engine query data and climate variables: an ecological study in Guangdong, China

    PubMed Central

    Du, Zhicheng; Xu, Lin; Zhang, Wangjian; Zhang, Dingmei; Yu, Shicheng; Hao, Yuantao

    2017-01-01

Objectives Hand, foot, and mouth disease (HFMD) has caused a substantial burden in China, especially in Guangdong Province. Based on the enhanced surveillance system, we aimed to explore whether the addition of temperature and search engine query data improves the risk prediction of HFMD. Design Ecological study. Setting and participants Information on the confirmed cases of HFMD, climate parameters and search engine query logs was collected. A total of 1.36 million HFMD cases were identified from the surveillance system during 2011–2014. Analyses were conducted at aggregate level and no confidential information was involved. Outcome measures A seasonal autoregressive integrated moving average (ARIMA) model with external variables (ARIMAX) was used to predict the HFMD incidence from 2011 to 2014, taking into account temperature and search engine query data (Baidu Index, BDI). Statistics of goodness-of-fit and precision of prediction were used to compare models (1) based on surveillance data only, and with the addition of (2) temperature, (3) BDI, and (4) both temperature and BDI. Results A high correlation between HFMD incidence and BDI (r=0.794, p<0.001) or temperature (r=0.657, p<0.001) was observed using both time series plot and correlation matrix. A linear effect of BDI (without lag) and non-linear effect of temperature (1 week lag) on HFMD incidence were found in a distributed lag non-linear model. Compared with the model based on surveillance data only, the ARIMAX model including BDI reached the best goodness-of-fit with an Akaike information criterion (AIC) value of −345.332, whereas the model including both BDI and temperature had the most accurate prediction in terms of the mean absolute percentage error (MAPE) of 101.745%. Conclusions An ARIMAX model incorporating search engine query data significantly improved the prediction of HFMD. 
Further studies are warranted to examine whether including search engine query data also improves the prediction of other infectious diseases in other settings. PMID:28988169
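The two model-comparison statistics used in this study, AIC and MAPE, are easy to compute once residuals are available; a minimal Python sketch (the Gaussian-likelihood form of AIC is an assumption here, since the abstract does not state the exact formula used):

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))

def aic_gaussian(n, rss, k):
    """AIC (up to an additive constant) for a model with k parameters and
    residual sum of squares rss over n observations, assuming Gaussian errors."""
    return n * math.log(rss / n) + 2 * k
```

Lower values of either statistic favour a model, which is how the four candidate ARIMAX specifications were ranked.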

  20. A Comparison of Mathematical Models of Fish Mercury Concentration as a Function of Atmospheric Mercury Deposition Rate and Watershed Characteristics

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Moore, R. B.; Shanley, J. B.; Miller, E. K.; Kamman, N. C.; Nacci, D.

    2009-12-01

    Mercury (Hg) concentrations in fish and aquatic wildlife are complex functions of atmospheric Hg deposition rate, terrestrial and aquatic watershed characteristics that influence Hg methylation and export, and food chain characteristics determining Hg bioaccumulation. Because of the complexity and incomplete understanding of these processes, regional-scale models of fish tissue Hg concentration are necessarily empirical in nature, typically constructed through regression analysis of fish tissue Hg concentration data from many sampling locations on a set of potential explanatory variables. Unless the data sets are unusually long and show clear time trends, the empirical basis for model building must be based solely on spatial correlation. Predictive regional scale models are highly useful for improving understanding of the relevant biogeochemical processes, as well as for practical fish and wildlife management and human health protection. Mechanistically, the logical arrangement of explanatory variables is to multiply each of the individual Hg source terms (e.g. dry, wet, and gaseous deposition rates, and residual watershed Hg) for a given fish sampling location by source-specific terms pertaining to methylation, watershed transport, and biological uptake for that location (e.g. SO4 availability, hill slope, lake size). This mathematical form has the desirable property that predicted tissue concentration will approach zero as all individual source terms approach zero. One complication with this form, however, is that it is inconsistent with the standard linear multiple regression equation in which all terms (including those for sources and physical conditions) are additive. 
An important practical disadvantage of a model in which the Hg source terms are additive (rather than multiplicative) with their modifying factors is that predicted concentration is not zero when all sources are zero, making it unreliable for predicting the effects of large future reductions in Hg deposition. In this paper we compare the results of using several different linear and non-linear models in an analysis of watershed and fish Hg data for 450 New England lakes. The differences in model results pertain to both their utility in interpreting methylation and export processes as well as in fisheries management.
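The zero-source argument above can be illustrated numerically; in this sketch the coefficients and the form of the modifier terms are hypothetical, chosen only to contrast the two model families:

```python
def additive_prediction(sources, modifiers, b, c, b0=0.1):
    """Standard linear-regression form: sources and modifying factors
    all enter as additive terms (hypothetical coefficients b0, b, c)."""
    return (b0 + sum(bi * s for bi, s in zip(b, sources))
               + sum(cj * m for cj, m in zip(c, modifiers)))

def multiplicative_prediction(sources, modifiers, b, g):
    """Each Hg source term is scaled by site-specific modifying factors,
    so the prediction vanishes when every source term is zero."""
    scale = 1.0
    for gj, m in zip(g, modifiers):
        scale *= 1.0 + gj * m
    return scale * sum(bi * s for bi, s in zip(b, sources))
```

With all source terms set to zero, the multiplicative form returns exactly zero fish-tissue concentration, whereas the additive form retains the intercept and modifier contributions — the unreliability the paper describes for deep deposition-reduction scenarios.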

  1. Comparison Between Linear and Non-parametric Regression Models for Genome-Enabled Prediction in Wheat

    PubMed Central

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-01-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882
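The linear-versus-RKHS contrast can be sketched with a tiny kernel ridge regression, a deterministic stand-in for the Bayesian RKHS machinery used in the paper (all names and parameter values here are illustrative):

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel between two marker vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def solve(A, b):
    """Small dense linear solver: Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kernel_ridge_fit(X, y, lam=0.1, gamma=0.5):
    """Dual coefficients alpha = (K + lam*I)^-1 y."""
    n = len(X)
    K = [[rbf_kernel(X[i], X[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, y)

def kernel_ridge_predict(X, alpha, x_new, gamma=0.5):
    """Prediction as a kernel-weighted sum over the training lines."""
    return sum(a * rbf_kernel(x, x_new, gamma) for a, x in zip(alpha, X))
```

A model that is linear in a single marker cannot reproduce a non-monotone genotype-to-phenotype map, while the kernel fit recovers it almost exactly — the kind of non-linearity on markers that favoured RKHS and RBFNN in this study.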

  2. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    PubMed

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.

  3. Wireless acceleration sensor of moving elements for condition monitoring of mechanisms

    NASA Astrophysics Data System (ADS)

    Sinitsin, Vladimir V.; Shestakov, Aleksandr L.

    2017-09-01

    Comprehensive analysis of the angular and linear accelerations of moving elements (shafts, gears) allows an increase in the quality of the condition monitoring of mechanisms. However, existing tools and methods measure either linear or angular acceleration with postprocessing. This paper suggests a new construction design of an angular acceleration sensor for moving elements. The sensor is mounted on a moving element and, among other things, the data transfer and electric power supply are carried out wirelessly. In addition, the authors introduce a method for processing the received information which makes it possible to divide the measured acceleration into the angular and linear components. The design has been validated by the results of laboratory tests of an experimental model of the sensor. The study has shown that this method provides a definite separation of the measured acceleration into linear and angular components, even in noise. This research contributes an advance in the range of methods and tools for condition monitoring of mechanisms.
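The abstract does not spell out the authors' separation method, so the following is only a hedged illustration of the underlying idea: if tangential acceleration is sampled at two known radii on the same rotating element, the shared linear component and the angular component can be solved for directly (all names are assumptions):

```python
def separate_accelerations(a1, a2, r1, r2):
    """Solve a_i = a_lin + r_i * alpha for the linear part a_lin and the
    angular acceleration alpha, given tangential readings a1, a2 taken
    at radii r1, r2 (r1 != r2) on the same rotating element."""
    alpha = (a2 - a1) / (r2 - r1)
    a_lin = a1 - r1 * alpha
    return a_lin, alpha
```

In practice the measurements are noisy, so a real implementation would average over many samples or use more than two measurement points.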

  4. Linear scaling relationships and volcano plots in homogeneous catalysis – revisiting the Suzuki reaction† †Electronic supplementary information (ESI) available: Detailed derivation of the linear scaling relationships and construction of the volcano plots as well as comparisons of computed values using PBE0-dDsC and M06 functionals is included. See DOI: 10.1039/c5sc02910d Click here for additional data file.

    PubMed Central

    Busch, Michael; Wodrich, Matthew D.

    2015-01-01

    Linear free energy scaling relationships and volcano plots are common tools used to identify potential heterogeneous catalysts for myriad applications. Despite the striking simplicity and predictive power of volcano plots, they remain unknown in homogeneous catalysis. Here, we construct volcano plots to analyze a prototypical reaction from homogeneous catalysis, the Suzuki cross-coupling of olefins. Volcano plots succeed both in discriminating amongst different catalysts and reproducing experimentally known trends, which serves as validation of the model for this proof-of-principle example. These findings indicate that the combination of linear scaling relationships and volcano plots could serve as a valuable methodology for identifying homogeneous catalysts possessing a desired activity through a priori computational screening. PMID:28757966

  5. Numerical studies on the force characteristic of superconducting linear synchronous motor with HTS bulk magnet

    NASA Astrophysics Data System (ADS)

    Tang, Junjie; Li, Jing; Li, Xiang; Han, Le

    2018-03-01

High temperature superconductor (HTS) bulks have significant potential for use in linear motor applications, acting as quasi-permanent magnets to replace traditional magnets. The force characteristics between an HTS bulk magnet and a traveling magnetic field were investigated with numerical simulation and experimental measurement in this paper. The influences of bulk height and number on the force characteristics were studied with a finite element model that considers the nonlinear E-J relationship. The addition of a back iron plate to the bulk magnet was also studied. In addition, the force characteristics of the bulk magnet were compared with permanent magnet results. The small initial decrease of the thrust could be explained by redistribution of the superconducting current inside the bulk. It was found that the efficiency of the linear motor did not increase when more bulk magnets were added. The bulk magnet will be remagnetized, rather than having its trapped field erased, as the traveling magnetic field strength increases. These conclusions are helpful for predicting and designing linear motors with HTS bulk magnets.

  6. Linear, multivariable robust control with a mu perspective

    NASA Technical Reports Server (NTRS)

    Packard, Andy; Doyle, John; Balas, Gary

    1993-01-01

    The structured singular value is a linear algebra tool developed to study a particular class of matrix perturbation problems arising in robust feedback control of multivariable systems. These perturbations are called linear fractional, and are a natural way to model many types of uncertainty in linear systems, including state-space parameter uncertainty, multiplicative and additive unmodeled dynamics uncertainty, and coprime factor and gap metric uncertainty. The structured singular value theory provides a natural extension of classical SISO robustness measures and concepts to MIMO systems. The structured singular value analysis, coupled with approximate synthesis methods, make it possible to study the tradeoff between performance and uncertainty that occurs in all feedback systems. In MIMO systems, the complexity of the spatial interactions in the loop gains make it difficult to heuristically quantify the tradeoffs that must occur. This paper examines the role played by the structured singular value (and its computable bounds) in answering these questions, as well as its role in the general robust, multivariable control analysis and design problem.

  7. Complexity and Productivity Differentiation Models of Metallogenic Indicator Elements in Rocks and Supergene Media Around Daijiazhuang Pb-Zn Deposit in Dangchang County, Gansu Province

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Jin-zhong, E-mail: viewsino@163.com; Yao, Shu-zhen; Zhang, Zhong-ping

    2013-03-15

With the help of complexity indices, we quantitatively studied multifractals, frequency distributions, and linear and nonlinear characteristics of geochemical data for exploration of the Daijiazhuang Pb-Zn deposit. Furthermore, we derived productivity differentiation models of elements from thermodynamics and self-organized criticality of metallogenic systems. With respect to frequency distributions and multifractals, only Zn in rocks and most elements except Sb in secondary media, which had been derived mainly from weathering and alluviation, exhibit nonlinear distributions. The relations of productivity to concentrations of metallogenic elements and paragenic elements in rocks and those of elements strongly leached in secondary media can be seen as linear addition of exponential functions with a characteristic weak chaos. The relations of associated elements such as Mo, Sb, and Hg in rocks and other elements in secondary media can be expressed as an exponential function, and the relations of one-phase self-organized geological or metallogenic processes can be represented by a power function, each representing secondary chaos or strong chaos. For secondary media, exploration data of most elements should be processed using nonlinear mathematical methods or should be transformed to linear distributions before processing using linear mathematical methods.

  8. Can Drugs Enhance Hypofractionated Radiotherapy? A Novel Method of Modeling Radiosensitization Using In Vitro Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohri, Nitin; Dicker, Adam P.; Lawrence, Yaacov Richard, E-mail: yaacovla@gmail.com

    2012-05-01

Purpose: Hypofractionated radiotherapy (hRT) is being explored for a number of malignancies. The potential benefit of giving concurrent chemotherapy with hRT is not known. We sought to predict the effects of combined modality treatments by using mathematical models derived from laboratory data. Methods and Materials: Data from 26 published clonogenic survival assays for cancer cell lines with and without the use of radiosensitizing chemotherapy were collected. The first three data points of the RT arm of each assay were used to derive parameters for the linear quadratic (LQ) model, the multitarget (MT) model, and the generalized linear quadratic (gLQ) model. For each assay and model, the difference between the predicted and observed surviving fractions at the highest tested RT dose was calculated. The gLQ model was fitted to all the data from each RT cell survival assay, and the biologically equivalent doses in 2-Gy fractions (EQD2s) of clinically relevant hRT regimens were calculated. The increase in cell kill conferred by the addition of chemotherapy was used to estimate the EQD2 of hRT along with a radiosensitizing agent. For comparison, this was repeated using conventionally fractionated RT regimens. Results: At a mean RT dose of 8.0 Gy, the average errors for the LQ, MT, and gLQ models were 1.63, 0.83, and 0.56 log units, respectively, favoring the gLQ model (p < 0.05). Radiosensitizing chemotherapy increased the EQD2 of hRT schedules by an average of 28% to 82%, depending on disease site. This increase was similar to the gains predicted for the addition of chemotherapy to conventionally fractionated RT. Conclusions: Based on published in vitro assays, the gLQ equation is superior to the LQ and MT models in predicting cell kill at high doses of RT. Modeling exercises demonstrate that significant increases in biologically equivalent dose may be achieved with the addition of radiosensitizing agents to hRT. Clinical study of this approach is warranted.
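The LQ quantities underlying this comparison have simple closed forms; a minimal sketch (the gLQ and MT fits in the paper are more involved, so only the basic LQ surviving fraction and the standard EQD2 conversion are shown):

```python
import math

def lq_surviving_fraction(dose, alpha, beta):
    """LQ model: S = exp(-(alpha*D + beta*D^2)) for a single dose D in Gy."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Biologically equivalent dose in 2-Gy fractions under the LQ model:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)
```

For example, 60 Gy delivered in 3-Gy fractions with alpha/beta = 10 Gy corresponds to an EQD2 of 65 Gy.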

  9. Escaping the snare of chronological growth and launching a free curve alternative: general deviance as latent growth model.

    PubMed

    Wood, Phillip Karl; Jackson, Kristina M

    2013-08-01

    Researchers studying longitudinal relationships among multiple problem behaviors sometimes characterize autoregressive relationships across constructs as indicating "protective" or "launch" factors or as "developmental snares." These terms are used to indicate that initial or intermediary states of one problem behavior subsequently inhibit or promote some other problem behavior. Such models are contrasted with models of "general deviance" over time in which all problem behaviors are viewed as indicators of a common linear trajectory. When fit of the "general deviance" model is poor and fit of one or more autoregressive models is good, this is taken as support for the inhibitory or enhancing effect of one construct on another. In this paper, we argue that researchers consider competing models of growth before comparing deviance and time-bound models. Specifically, we propose use of the free curve slope intercept (FCSI) growth model (Meredith & Tisak, 1990) as a general model to typify change in a construct over time. The FCSI model includes, as nested special cases, several statistical models often used for prospective data, such as linear slope intercept models, repeated measures multivariate analysis of variance, various one-factor models, and hierarchical linear models. When considering models involving multiple constructs, we argue the construct of "general deviance" can be expressed as a single-trait multimethod model, permitting a characterization of the deviance construct over time without requiring restrictive assumptions about the form of growth over time. As an example, prospective assessments of problem behaviors from the Dunedin Multidisciplinary Health and Development Study (Silva & Stanton, 1996) are considered and contrasted with earlier analyses of Hussong, Curran, Moffitt, and Caspi (2008), which supported launch and snare hypotheses. 
For antisocial behavior, the FCSI model fit better than other models, including the linear chronometric growth curve model used by Hussong et al. For models including multiple constructs, a general deviance model involving a single trait and multimethod factors (or a corresponding hierarchical factor model) fit the data better than either the "snares" alternatives or the general deviance model previously considered by Hussong et al. Taken together, the analyses support the view that linkages and turning points cannot be contrasted with general deviance models absent additional experimental intervention or control.

  10. Escaping the snare of chronological growth and launching a free curve alternative: General deviance as latent growth model

    PubMed Central

    WOOD, PHILLIP KARL; JACKSON, KRISTINA M.

    2014-01-01

    Researchers studying longitudinal relationships among multiple problem behaviors sometimes characterize autoregressive relationships across constructs as indicating “protective” or “launch” factors or as “developmental snares.” These terms are used to indicate that initial or intermediary states of one problem behavior subsequently inhibit or promote some other problem behavior. Such models are contrasted with models of “general deviance” over time in which all problem behaviors are viewed as indicators of a common linear trajectory. When fit of the “general deviance” model is poor and fit of one or more autoregressive models is good, this is taken as support for the inhibitory or enhancing effect of one construct on another. In this paper, we argue that researchers consider competing models of growth before comparing deviance and time-bound models. Specifically, we propose use of the free curve slope intercept (FCSI) growth model (Meredith & Tisak, 1990) as a general model to typify change in a construct over time. The FCSI model includes, as nested special cases, several statistical models often used for prospective data, such as linear slope intercept models, repeated measures multivariate analysis of variance, various one-factor models, and hierarchical linear models. When considering models involving multiple constructs, we argue the construct of “general deviance” can be expressed as a single-trait multimethod model, permitting a characterization of the deviance construct over time without requiring restrictive assumptions about the form of growth over time. As an example, prospective assessments of problem behaviors from the Dunedin Multidisciplinary Health and Development Study (Silva & Stanton, 1996) are considered and contrasted with earlier analyses of Hussong, Curran, Moffitt, and Caspi (2008), which supported launch and snare hypotheses. 
For antisocial behavior, the FCSI model fit better than other models, including the linear chronometric growth curve model used by Hussong et al. For models including multiple constructs, a general deviance model involving a single trait and multimethod factors (or a corresponding hierarchical factor model) fit the data better than either the “snares” alternatives or the general deviance model previously considered by Hussong et al. Taken together, the analyses support the view that linkages and turning points cannot be contrasted with general deviance models absent additional experimental intervention or control. PMID:23880389

  11. Computing Linear Mathematical Models Of Aircraft

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.

    1991-01-01

    Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides user with powerful, and flexible, standard, documented, and verified software tool for linearization of mathematical models of aerodynamics of aircraft. Intended for use in software tool to drive linear analysis of stability and design of control laws for aircraft. Capable of both extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and including these effects in linear model of system. Designed to provide easy selection of state, control, and observation variables used in particular model. Also provides flexibility of allowing alternate formulations of both state and observation equations. Written in FORTRAN.
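LINEAR itself is a FORTRAN tool; its central numerical task, extracting state-space matrices A and B from a nonlinear model about an operating point, can be sketched generically with central finite differences (this illustrates the idea, not the program's actual algorithm; all names are illustrative):

```python
def linearize(f, x0, u0, eps=1e-6):
    """Numerically linearize xdot = f(x, u) about (x0, u0), returning
    A = df/dx and B = df/du estimated by central differences."""
    n, m = len(x0), len(u0)
    def col(fun, vec, i):
        # one Jacobian column: perturb component i up and down
        up, dn = vec[:], vec[:]
        up[i] += eps
        dn[i] -= eps
        fu, fd = fun(up), fun(dn)
        return [(a - b) / (2 * eps) for a, b in zip(fu, fd)]
    A_cols = [col(lambda x: f(x, u0), x0, i) for i in range(n)]
    B_cols = [col(lambda u: f(x0, u), u0, j) for j in range(m)]
    A = [list(row) for row in zip(*A_cols)]
    B = [list(row) for row in zip(*B_cols)]
    return A, B
```

For a mass-spring-damper xdot = [x2, -2*x1 - 0.5*x2 + u], the sketch recovers A = [[0, 1], [-2, -0.5]] and B = [[0], [1]] to within finite-difference accuracy.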

  12. 3D-Digital soil property mapping by geoadditive models

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas

    2016-04-01

In many digital soil mapping (DSM) applications, soil properties must be predicted not only for a single but for multiple soil depth intervals. In the GlobalSoilMap project, as an example, predictions are computed for the 0-5 cm, 5-15 cm, 15-30 cm, 30-60 cm, 60-100 cm, 100-200 cm depth intervals (Arrouays et al., 2014). Legacy soil data are often used for DSM. It is common for such datasets that soil properties were measured for soil horizons or for layers at varying soil depth and with non-constant thickness (support). This poses problems for DSM: One strategy is to harmonize the soil data to common depth prior to the analyses (e.g. Bishop et al., 1999) and conduct the statistical analyses for each depth interval independently. The disadvantage of this approach is that the predictions for different depths are computed independently from each other so that the predicted depth profiles may be unrealistic. Furthermore, the error induced by the harmonization to common depth is ignored in this approach (Orton et al. 2016). A better strategy is therefore to process all soil data jointly without prior harmonization by a 3D-analysis that takes soil depth and geographical position explicitly into account. Usually, the non-constant support of the data is then ignored, but Orton et al. (2016) recently presented a geostatistical approach that accounts for non-constant support of soil data and relies on restricted maximum likelihood estimation (REML) of a linear geostatistical model with a separable, heteroscedastic, zonal anisotropic auto-covariance function and area-to-point kriging (Kyriakidis, 2004). Although this model is theoretically coherent and elegant, estimating its many parameters by REML and selecting covariates for the spatial mean function is a formidable task. A simpler approach might be to use geoadditive models (Kammann and Wand, 2003; Wand, 2003) for 3D-analyses of soil data. 
Geoadditive models (geoAMs) extend the scope of the linear model with spatially correlated errors to account for non-linear effects of covariates by fitting componentwise smooth, non-linear functions of the covariates (additive terms). In the geoAM framework, REML estimation of model parameters and computation of best linear unbiased predictions (BLUP) build on the fact that both geostatistical and additive models can be parametrized as linear mixed models (Wand, 2003). For 3D-DSM analysis of soil data, it is natural to model depth profiles of soil properties by additive terms of soil depth. Including interactions between these additive terms and covariates of the spatial mean function makes it possible to model spatially varying depth profiles. Furthermore, with a suitable choice of the basis functions of the additive term (e.g. polynomial regression splines), non-constant support of the soil data can be taken into account. Finally, boosting (Bühlmann and Hothorn, 2007) can be used for selecting covariates for the spatial mean function. The presentation will detail the geoAM approach and present an example of geoAM for 3D-analysis of legacy soil data. Arrouays, D., McBratney, A. B., Minasny, B., Hempel, J. W., Heuvelink, G. B. M., MacMillan, R. A., Hartemink, A. E., Lagacherie, P., and McKenzie, N. J. (2014). The GlobalSoilMap project specifications. In GlobalSoilMap: Basis of the global spatial soil information system, pages 9-12. CRC Press. Bishop, T., McBratney, A., and Laslett, G. (1999). Modelling soil attribute depth functions with equal-area quadratic smoothing splines. Geoderma, 91(1-2), 27-45. Bühlmann, P. and Hothorn, T. (2007). Boosting algorithms: Regularization, prediction and model fitting. Statistical Science, 22(4), 477-505. Kammann, E. E. and Wand, M. P. (2003). Geoadditive models. Journal of the Royal Statistical Society, Series C: Applied Statistics, 52(1), 1-18. Kyriakidis, P. (2004). A geostatistical framework for area-to-point spatial interpolation. 
Geographical Analysis, 36(3), 259-289. Orton, T., Pringle, M., and Bishop, T. (2016). A one-step approach for modelling and mapping soil properties based on profile data sampled over varying depth intervals. Geoderma, 262, 174-186. Wand, M. P. (2003). Smoothing and mixed models. Computational Statistics, 18(2), 223-249.
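A toy version of the additive-term idea: a depth profile represented in a truncated-power spline basis and fitted by (optionally ridge-penalized) least squares. This is far simpler than the REML-based geoAM machinery described above, and all names and data are illustrative:

```python
def spline_basis(depth, knots):
    """Truncated-power (linear) basis: 1, depth, and (depth - k)+ per knot."""
    return [1.0, depth] + [max(0.0, depth - k) for k in knots]

def solve(A, b):
    """Small dense solver: Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_depth_profile(depths, values, knots, lam=0.0):
    """Penalized least squares via the normal equations; the ridge penalty
    acts on the knot coefficients only, mirroring the mixed-model view of
    a penalized spline."""
    rows = [spline_basis(d, knots) for d in depths]
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) + (lam if i == j and i >= 2 else 0.0)
          for j in range(p)] for i in range(p)]
    b = [sum(r[i] * v for r, v in zip(rows, values)) for i in range(p)]
    return solve(A, b)
```

With a knot where the true profile changes slope, the fitted curve captures the kink that a plain linear-in-depth model would miss.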

  13. Predictive models of poly(ethylene-terephthalate) film degradation under multi-factor accelerated weathering exposures

    PubMed Central

    Ngendahimana, David K.; Fagerholm, Cara L.; Sun, Jiayang; Bruckman, Laura S.

    2017-01-01

    Accelerated weathering exposures were performed on poly(ethylene-terephthalate) (PET) films. Longitudinal multi-level predictive models as a function of PET grades and exposure types were developed for the change in yellowness index (YI) and haze (%). Exposures with similar change in YI were modeled using a linear fixed-effects modeling approach. Due to the complex nature of haze formation, measurement uncertainty, and the differences in the samples’ responses, the change in haze (%) depended on individual samples’ responses and a linear mixed-effects modeling approach was used. When compared to fixed-effects models, the addition of random effects in the haze formation models significantly increased the variance explained. For both modeling approaches, diagnostic plots confirmed independence and homogeneity with normally distributed residual errors. Predictive R2 values for true prediction error and predictive power of the models demonstrated that the models were not subject to over-fitting. These models enable prediction under pre-defined exposure conditions for a given exposure time (or photo-dosage in case of UV light exposure). PET degradation under cyclic exposures combining UV light and condensing humidity is caused by photolytic and hydrolytic mechanisms causing yellowing and haze formation. Quantitative knowledge of these degradation pathways enable cross-correlation of these lab-based exposures with real-world conditions for service life prediction. PMID:28498875
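The fixed- versus mixed-effects distinction can be illustrated with a toy random-intercept estimator: a shared (fixed) degradation slope plus a per-sample intercept. This two-stage sketch is not the REML fit a real mixed-effects package performs, and the data layout is hypothetical:

```python
def fit_random_intercept(groups):
    """groups: dict sample_id -> list of (time, response) pairs.
    Estimate a shared slope from pooled within-group centered data,
    then a per-sample intercept around that common slope."""
    num = den = 0.0
    for pts in groups.values():
        tbar = sum(t for t, _ in pts) / len(pts)
        ybar = sum(y for _, y in pts) / len(pts)
        num += sum((t - tbar) * (y - ybar) for t, y in pts)
        den += sum((t - tbar) ** 2 for t, _ in pts)
    slope = num / den
    intercepts = {g: sum(y - slope * t for t, y in pts) / len(pts)
                  for g, pts in groups.items()}
    return slope, intercepts
```

The spread of the per-sample intercepts is the extra variance that the paper's random effects captured and the fixed-effects models could not.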

  14. Linear or linearizable first-order delay ordinary differential equations and their Lie point symmetries

    NASA Astrophysics Data System (ADS)

    Dorodnitsyn, Vladimir A.; Kozlov, Roman; Meleshko, Sergey V.; Winternitz, Pavel

    2018-05-01

    A recent article was devoted to an analysis of the symmetry properties of a class of first-order delay ordinary differential systems (DODSs). Here we concentrate on linear DODSs, which have infinite-dimensional Lie point symmetry groups due to the linear superposition principle. Their symmetry algebra always contains a two-dimensional subalgebra realized by linearly connected vector fields. We identify all classes of linear first-order DODSs that have additional symmetries, not due to linearity alone, and we present representatives of each class. These additional symmetries are then used to construct exact analytical particular solutions using symmetry reduction.

  15. Paper-Based Electrochemical Detection of Chlorate

    PubMed Central

    Shriver-Lake, Lisa C.; Zabetakis, Dan; Dressick, Walter J.; Stenger, David A.; Trammell, Scott A.

    2018-01-01

We describe the use of a paper-based probe impregnated with a vanadium-containing polyoxometalate anion, [PMo11VO40]5−, on screen-printed carbon electrodes for the electrochemical determination of chlorate. Cyclic voltammetry (CV) and chronocoulometry were used to characterize the ClO3− response in a pH = 2.5 solution of 100 mM sodium acetate. A linear CV current response was observed between 0.156 and 1.25 mg/mL with a detection limit of 0.083 mg/mL (S/N > 3). This performance was reproducible using [PMo11VO40]5−-impregnated filter paper stored under ambient conditions for as long as 8 months prior to use. At high concentrations of chlorate, an additional catalytic cathodic peak was seen in the reverse scan of the CVs, which was digitally simulated using a simple model. For chronocoulometry, the charge measured after 5 min gave a linear response from 0.625 to 2.5 mg/mL with a detection limit of 0.31 mg/mL (S/N > 3). In addition, the slope of charge vs. time also gave a linear response. In this case the linear range was from 0.312 to 2.5 mg/mL with a detection limit of 0.15 mg/mL (S/N > 3). Simple assays were conducted using three types of soil, and recovery measurements are reported. PMID:29364153
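The calibration arithmetic behind such reports is simple; a hedged sketch using the S/N > 3 convention quoted in the abstract (in practice the noise estimate comes from blank measurements, and the numbers below are illustrative):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept of a calibration line."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return slope, ybar - slope * xbar

def detection_limit(slope, noise_sd, k=3.0):
    """Concentration at which the signal reaches k times the baseline
    noise standard deviation (the S/N > 3 criterion for k = 3)."""
    return k * noise_sd / slope
```

A steeper calibration slope or a quieter baseline lowers the detection limit, which is why the slope-of-charge readout achieved a lower limit than the fixed-time charge readout.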

  16. Septic tank additive impacts on microbial populations.

    PubMed

    Pradhan, S; Hoover, M T; Clark, G H; Gumpertz, M; Wollum, A G; Cobb, C; Strock, J

    2008-01-01

Environmental health specialists, other onsite wastewater professionals, scientists, and homeowners have questioned the effectiveness of septic tank additives. This paper describes an independent, third-party, field scale, research study of the effects of three liquid bacterial septic tank additives and a control (no additive) on septic tank microbial populations. Microbial populations were measured quarterly in a field study for 12 months in 48 full-size, functioning septic tanks. Bacterial populations in the 48 septic tanks were statistically analyzed with a mixed linear model. Additive effects were assessed for three septic tank maintenance levels (low, intermediate, and high). Dunnett's t-test for tank bacteria (alpha = .05) indicated that none of the treatments were significantly different, overall, from the control at the statistical level tested. In addition, the additives had no significant effects on septic tank bacterial populations at any of the septic tank maintenance levels. Additional controlled, field-based research is warranted, however, to address additional additives and experimental conditions.

  17. Longitudinal nonlinear wave propagation through soft tissue.

    PubMed

    Valdez, M; Balachandran, B

    2013-04-01

In this paper, wave propagation through soft tissue is investigated. A primary aim of this investigation is to gain a fundamental understanding of the influence of soft-tissue nonlinear material properties on the propagation characteristics of stress waves generated by transient loadings. Here, for computational modeling purposes, the soft tissue is modeled as a nonlinear visco-hyperelastic material, a one-dimensional rod geometry is assumed, and uniaxial propagation of longitudinal waves is considered. By using the linearized model, a basic understanding of the characteristics of wave propagation is developed through the dispersion relation and in terms of the propagation speed and attenuation. In addition, it is illustrated how the linearized model can be used to predict brain tissue material parameters through the use of available experimental ultrasonic attenuation curves. Furthermore, frequency thresholds for wave propagation along internal structures, such as axons in the white matter of the brain, are obtained through the linear analysis. With the nonlinear material model, the authors analyze cases in which one end of the rod is fixed and the other end is subjected to a loading. Two variants of the nonlinear model are analyzed, and the associated predictions are compared with those of the corresponding linear model. The numerical results illustrate that one of the imprints of the nonlinearity on the wave propagation phenomenon is the steepening of the wave front, leading to jump-like variations in the stress wave profiles. This phenomenon is a consequence of the dependence of the local wave speed on the local deformation of the material. As per the predictions of the nonlinear material model, compressive waves in the structure travel faster than tensile waves. Furthermore, it is found that wave pulses with large amplitudes and short durations are attenuated over shorter spans. This feature is due to the elevated strain rates introduced at the end of the structure where the load is applied. In addition, it is shown that when steep wave fronts are generated in the nonlinear viscoelastic material, energy dissipation is concentrated in those wave fronts, implying deposition of energy in a highly localized region of the material. Novel mechanisms for brain tissue damage are proposed based on the results obtained. The first mechanism is related to the dissipation of energy at steep wave fronts, while the second is related to the interaction of steep wave fronts with axons encountered on their way through the structure. Copyright © 2013 Elsevier Ltd. All rights reserved.
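The steepening mechanism described above, a local wave speed that grows with the local field value, can be illustrated with a minimal numerical sketch. The inviscid Burgers equation is used here purely as a generic stand-in for amplitude-dependent wave speed, not as the paper's visco-hyperelastic model; the grid size, time step, and initial pulse are arbitrary choices.

```python
import numpy as np

# Inviscid Burgers equation u_t + u*u_x = 0: a minimal analog of a wave whose
# local propagation speed depends on the local field value, producing the
# front steepening described in the abstract.
nx = 400
dx = 1.0 / nx
dt = 0.5 * dx            # CFL number 0.75 for max(u) = 1.5
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)   # smooth initial pulse, u > 0 everywhere

grad_initial = np.max(np.abs(np.gradient(u, dx)))
for _ in range(100):
    # first-order upwind step (valid here since u > 0 throughout)
    u = u - dt * u * (u - np.roll(u, 1)) / dx
grad_final = np.max(np.abs(np.gradient(u, dx)))
# the maximum spatial gradient grows: the leading face of the wave steepens
```

Faster-moving (higher-amplitude) parts of the pulse overtake slower ones, so the maximum spatial gradient increases over time, the same qualitative behaviour the abstract attributes to deformation-dependent wave speed.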

  18. Predicting the occurrence of wildfires with binary structured additive regression models.

    PubMed

    Ríos-Pena, Laura; Kneib, Thomas; Cadarso-Suárez, Carmen; Marey-Pérez, Manuel

    2017-02-01

Wildfires are one of the main environmental problems facing societies today, and in the case of Galicia (north-west Spain), they are the main cause of forest destruction. This paper used binary structured additive regression (STAR) for modelling the occurrence of wildfires in Galicia. Binary STAR models are a recent contribution to the classical logistic regression and binary generalized additive models. Their main advantage lies in their flexibility for modelling non-linear effects, while simultaneously incorporating spatial and temporal variables directly, thereby making it possible to reveal possible relationships among the variables considered. The results showed that the occurrence of wildfires depends on many covariates which display variable behaviour across space and time, and which largely determine the likelihood of ignition of a fire. The joint possibility of working on spatial scales with a resolution of 1 × 1 km cells and mapping predictions in a colour range makes STAR models a useful tool for plotting and predicting wildfire occurrence. Lastly, this approach will facilitate the development of fire behaviour models, which can be invaluable when it comes to drawing up fire-prevention and firefighting plans. Copyright © 2016 Elsevier Ltd. All rights reserved.
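The core idea of a binary structured additive model, a logistic regression whose predictor contains a flexible non-linear covariate effect built from a spline-type basis with a penalty, can be sketched in plain NumPy. This is a toy illustration of the model class under made-up data, not the STAR software, covariates, or wildfire data used in the paper; the knot placement and ridge penalty are arbitrary.

```python
import numpy as np

def spline_basis(x, knots):
    """Truncated-power (linear) spline basis: [1, x, (x - k)_+ for each knot k]."""
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

def fit_penalized_logistic(X, y, penalty=1.0, iters=30):
    """Penalized IRLS for logistic regression (ridge on all but the intercept)."""
    beta = np.zeros(X.shape[1])
    P = penalty * np.eye(X.shape[1])
    P[0, 0] = 0.0                      # leave the intercept unpenalized
    for _ in range(iters):
        eta = np.clip(X @ beta, -30.0, 30.0)
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = np.maximum(mu * (1.0 - mu), 1e-10)
        z = eta + (y - mu) / w         # IRLS working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X) + P, X.T @ (w * z))
    return beta

# toy data: occurrence probability varies non-linearly with one covariate
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 2000)
p_true = 1.0 / (1.0 + np.exp(-2.0 * np.sin(2.0 * x)))
y = (rng.uniform(size=x.size) < p_true).astype(float)

B = spline_basis(x, np.linspace(-1.5, 1.5, 7))
beta = fit_penalized_logistic(B, y)
p_hat = 1.0 / (1.0 + np.exp(-np.clip(B @ beta, -30.0, 30.0)))
```

The recovered probability curve tracks the non-linear truth, which is the property that lets STAR-type models reveal relationships a purely linear logistic regression would miss.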

  19. Boosting structured additive quantile regression for longitudinal childhood obesity data.

    PubMed

    Fenske, Nora; Fahrmeir, Ludwig; Hothorn, Torsten; Rzehak, Peter; Höhle, Michael

    2013-07-25

Childhood obesity and the investigation of its risk factors has become an important public health issue. Our work is based on and motivated by a German longitudinal study including 2,226 children with up to ten measurements of their body mass index (BMI) and risk factors from birth to the age of 10 years. We introduce boosting of structured additive quantile regression as a novel distribution-free approach for longitudinal quantile regression. The quantile-specific predictors of our model include conventional linear population effects, smooth nonlinear functional effects, varying-coefficient terms, and individual-specific effects, such as intercepts and slopes. Estimation is based on boosting, a computer-intensive inference method for highly complex models. We propose a component-wise functional gradient descent boosting algorithm that allows for penalized estimation of the large variety of different effects, particularly leading to individual-specific effects shrunken toward zero. This concept allows us to flexibly estimate the nonlinear age curves of upper quantiles of the BMI distribution, at both the population and individual-specific levels, adjusted for further risk factors, and to detect age-varying effects of categorical risk factors. Our model approach can be regarded as the quantile regression analog of Gaussian additive mixed models (or structured additive mean regression models), and we compare both model classes with respect to our obesity data.
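The boosting idea, repeatedly fitting a simple base learner to the negative gradient of the check (pinball) loss and taking shrunken steps, can be sketched as follows. This is a stripped-down illustration with a single linear base learner on synthetic data, not the mboost-style machinery or the childhood obesity data from the study.

```python
import numpy as np

def boost_quantile(x, y, tau=0.9, nu=0.1, iters=3000):
    """Gradient boosting for the tau-th conditional quantile (pinball loss)."""
    f = np.full_like(y, np.quantile(y, tau))   # offset: unconditional quantile
    X = np.column_stack([np.ones_like(x), x])  # one simple linear base learner
    for _ in range(iters):
        g = np.where(y > f, tau, tau - 1.0)    # negative gradient of pinball loss
        coef, *_ = np.linalg.lstsq(X, g, rcond=None)
        f = f + nu * (X @ coef)                # shrunken (boosted) update
    return f

def pinball_loss(y, f, tau=0.9):
    r = y - f
    return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 2000)
y = 2.0 * x + rng.standard_normal(2000)        # true 0.9-quantile: 2x + 1.2816

f0 = np.full_like(y, np.quantile(y, 0.9))
f_hat = boost_quantile(x, y)
coverage = np.mean(y <= f_hat)                 # should settle near tau = 0.9
```

In the paper's full algorithm the single learner above is replaced by a pool of component-wise learners (linear terms, smooth functions, individual-specific effects), with the best-fitting component selected at each iteration.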

  20. North Adriatic Tides: Observations, Variational Data Assimilation Modeling, and Linear Tide Dynamics

    DTIC Science & Technology

    2009-12-01

of the North Adriatic (Lee et al., 2005). In addition to the ADCP measurements of currents throughout the water column, bottom pressure (by ADCP or...of the year with low levels of stratification (Figure 2, Jeffries and Lee, 2007). Actual generation of internal tides in the North Adriatic would...Thompson, K.R., Teague, W.J., Jacobs, G.A., Suk, M.-S., Chang, K.-I., Lee, J.-C. and Choi, B.H. (2004): Data assimilation modeling of the barotropic

  1. Computer Simulation Studies of the Tearing Mode Instability in a Field-Reversed Ion Layer.

    DTIC Science & Technology

    1980-09-15

(Abstract continued) ...those of the linear theory. In addition, it has been demonstrated that when...However, all the results obtained so far are very encouraging. Using the energy principle, Sudan and Rosenbluth have shown with a hybrid model that a...found that finite length layers are stable to tearing modes as a consequence of axial kinetic pressure. Using a hybrid model, in which the ion layer is

  2. Transit-time and age distributions for nonlinear time-dependent compartmental systems.

    PubMed

    Metzler, Holger; Müller, Markus; Sierra, Carlos A

    2018-02-06

    Many processes in nature are modeled using compartmental systems (reservoir/pool/box systems). Usually, they are expressed as a set of first-order differential equations describing the transfer of matter across a network of compartments. The concepts of age of matter in compartments and the time required for particles to transit the system are important diagnostics of these models with applications to a wide range of scientific questions. Until now, explicit formulas for transit-time and age distributions of nonlinear time-dependent compartmental systems were not available. We compute densities for these types of systems under the assumption of well-mixed compartments. Assuming that a solution of the nonlinear system is available at least numerically, we show how to construct a linear time-dependent system with the same solution trajectory. We demonstrate how to exploit this solution to compute transit-time and age distributions in dependence on given start values and initial age distributions. Furthermore, we derive equations for the time evolution of quantiles and moments of the age distributions. Our results generalize available density formulas for the linear time-independent case and mean-age formulas for the linear time-dependent case. As an example, we apply our formulas to a nonlinear and a linear version of a simple global carbon cycle model driven by a time-dependent input signal which represents fossil fuel additions. We derive time-dependent age distributions for all compartments and calculate the time it takes to remove fossil carbon in a business-as-usual scenario.
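For the linear, autonomous special case that the paper generalizes, the mean transit time has a compact closed form, -1ᵀA⁻¹β for compartmental matrix A and normalized input distribution β, that is easy to verify by particle-level simulation. The sketch below is an illustration of that special case under an invented two-pool system, not the authors' code or carbon-cycle model.

```python
import numpy as np

# a 2-pool linear compartmental system dx/dt = A x + u
A = np.array([[-1.0, 0.2],
              [0.3, -0.9]])        # off-diagonals: transfers; column sums < 0: releases
beta = np.array([0.6, 0.4])        # normalized distribution of inputs over the pools

# closed-form mean transit time for the linear autonomous case
mean_tt = float(-np.ones(2) @ np.linalg.inv(A) @ beta)

# particle-level Monte Carlo of the same system
rng = np.random.default_rng(0)

def sample_transit_time():
    pool = rng.choice(2, p=beta)
    t = 0.0
    while True:
        rate = -A[pool, pool]                       # total outflow rate from this pool
        t += rng.exponential(1.0 / rate)            # exponential dwell time
        transfer = A[1 - pool, pool]                # flow to the other pool
        if rng.uniform() < (rate - transfer) / rate:
            return t                                # particle leaves the system
        pool = 1 - pool                             # otherwise it transfers

mc = np.mean([sample_transit_time() for _ in range(20000)])
```

The Monte Carlo mean agrees with -1ᵀA⁻¹β (about 1.43 here); the paper's contribution is the machinery for the full densities when A and u depend on time and state.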

  3. Modeling of anisotropic properties of double quantum rings by the terahertz laser field.

    PubMed

    Baghramyan, Henrikh M; Barseghyan, Manuk G; Kirakosyan, Albert A; Ojeda, Judith H; Bragard, Jean; Laroze, David

    2018-04-18

The rendering of different shapes from just a single sample of a concentric double quantum ring is shown to be realizable with a terahertz laser field, which in turn allows the manipulation of the electronic and optical properties of the sample. It is shown that by changing the intensity or frequency of the laser field, one can arrive at a new set of degenerate levels in double quantum rings and switch the charge distribution between the rings. In addition, depending on the direction of an additional static electric field, linear and quadratic quantum-confined Stark effects are observed. The absorption spectrum shifts and the additive absorption coefficient variations caused by the laser and electric fields are discussed. Finally, anisotropic electronic and optical properties of isotropic concentric double quantum rings are modeled with the help of the terahertz laser field.

  4. Psychophysiological interaction between superior temporal gyrus (STG) and cerebellum: An fMRI study

    NASA Astrophysics Data System (ADS)

    Yusoff, A. N.; Teng, X. L.; Ng, S. B.; Hamid, A. I. A.; Mukari, S. Z. M.

    2016-03-01

This study aimed to model the psychophysiological interaction (PPI) between the bilateral STG and the cerebellum (lobule VI and lobule VII) during an arithmetic addition task. Eighteen young adults participated in this study. They were instructed to solve single-digit addition tasks in quiet and noisy backgrounds during an fMRI scan. Results showed that in both hemispheres, the response in the cerebellum was linearly influenced by the activity in STG (and vice versa) for both the in-quiet and in-noise conditions. However, the influence of the cerebellum on STG appeared to be modulated by noise. A two-way PPI model between STG and cerebellum is suggested. The connectivity between the two regions during a simple addition task in a noisy condition is modulated by the participants' heightened attention required for perception.
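A standard PPI analysis reduces to a regression with an interaction term: the target region's signal is modeled on the seed region's signal (physiological regressor), the task condition (psychological regressor), and their product, whose coefficient captures the task-dependent change in coupling. The sketch below uses entirely synthetic time series, not the study's fMRI data; the regressor names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
task = np.repeat([0.0, 1.0], n // 2)      # psychological regressor (quiet vs. in-noise)
seed = rng.standard_normal(n)             # physiological regressor (e.g., STG signal)

# synthetic target region: its coupling to the seed doubles under the task condition
target = 0.5 * seed + 0.5 * seed * task + 0.1 * rng.standard_normal(n)

# design matrix: intercept, physiological, psychological, and PPI (interaction) terms
X = np.column_stack([np.ones(n), seed, task, seed * task])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
ppi_effect = beta[3]                       # task-modulated change in coupling
```

A non-zero interaction coefficient is the statistical signature of modulated connectivity, here, the noise-dependent cerebellum-STG influence reported in the abstract.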

  5. Effects of hydrokinetic turbine sound on the behavior of four species of fish within an experimental mesocosm

    DOE PAGES

    Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin

    2017-02-04

The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp.], freshwater drum [Aplodinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound, as well as trends in location over time, with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, the findings highlight the importance for future research of utilizing accurate localization systems, different species, and validated sound transmission distances, and of considering different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.

  7. Numerical and analytical investigation towards performance enhancement of a newly developed rockfall protective cable-net structure

    NASA Astrophysics Data System (ADS)

    Dhakal, S.; Bhandary, N. P.; Yatabe, R.; Kinoshita, N.

    2012-04-01

In a previous companion paper, we presented a three-tier modelling of a particular type of rockfall protective cable-net structure (barrier), newly developed in Japan. Therein, we developed a three-dimensional, Finite Element based, nonlinear numerical model that was calibrated/back-calculated and verified with the element- and structure-level physical tests. Moreover, using a very simple, lumped-mass, single-degree-of-freedom, equivalently linear analytical model, a global-displacement-predictive correlation was devised by modifying the basic equation - obtained by combining the principles of conservation of linear momentum and energy - based on the back-analysis of the tests on the numerical model. In this paper, we use the developed models to explore the performance enhancement potential of the structure in terms of (a) the control of global displacement - possibly the major performance criterion for the proposed structure owing to the narrow space available at the targeted site, and (b) the increase in energy dissipation by the existing U-bolt-type Friction-brake Devices - which were identified to have performed weakly when integrated into the structure. A first set of parametric investigations has revealed correlations for achieving the first objective in terms of the structure's mass, particularly by manipulating the wire-net's characteristics, and has additionally disclosed the effects of the impacting-block's parameters. Towards achieving the second objective, another set of parametric investigations has led to a proposal of a few innovative improvements in the constitutive behaviour (model) of the studied brake device (dissipator), in addition to an important recommendation of careful handling of the device based on an identified potential flaw.

  8. User's manual for interactive LINEAR: A FORTRAN program to derive linear aircraft models

    NASA Technical Reports Server (NTRS)

    Antoniewicz, Robert F.; Duke, Eugene L.; Patterson, Brian P.

    1988-01-01

    An interactive FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models is documented in this report. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
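The core numerical operation such a tool performs, extracting state-space matrices from nonlinear equations of motion by perturbing each state and control, can be sketched with central differences. This is a generic illustration of numerical linearization, not NASA's FORTRAN implementation; the pendulum dynamics are an invented example.

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians A = df/dx, B = df/du of dx/dt = f(x, u) at (x0, u0)."""
    x0 = np.asarray(x0, dtype=float)
    u0 = np.asarray(u0, dtype=float)
    n, m = x0.size, u0.size
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2.0 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2.0 * eps)
    return A, B

# example: torque-driven pendulum, linearized about the downward equilibrium
def pendulum(x, u):
    return np.array([x[1], -np.sin(x[0]) + u[0]])

A, B = linearize(pendulum, [0.0, 0.0], [0.0])
# expected: A = [[0, 1], [-1, 0]], B = [[0], [1]]
```

The same perturbation idea, applied to six-degree-of-freedom equations of motion with a full aerodynamic model, yields the state and observation matrices LINEAR produces.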

  9. Constrained State Estimation for Individual Localization in Wireless Body Sensor Networks

    PubMed Central

    Feng, Xiaoxue; Snoussi, Hichem; Liang, Yan; Jiao, Lianmeng

    2014-01-01

Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing the development of effective location-aware applications. In this paper, the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. A priori knowledge about the geometry among the on-body nodes is incorporated into the traditional filtering system as an additional constraint. The analytical expression of state estimation with a linear constraint to exploit the additional information is derived. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint to the linear case. Examples comparing the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for higher-order nonlinearity, as presented in this paper, outperforms the first-order solution, and that the constrained IMM-EKF obtains superior estimates compared with the IMM-EKF without constraints. A further Brownian-motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without constraints. PMID:25390408
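The linear-constraint case mentioned in the abstract has a compact closed form: an unconstrained estimate x with covariance P is projected onto the constraint surface Dx = d. Below is a sketch of that general projection formula with a made-up two-node example, not the paper's specific body-geometry constraints.

```python
import numpy as np

def constrain_estimate(x, P, D, d):
    """Project estimate x (covariance P) onto the linear constraint D x = d.

    Minimum-variance projection, equivalent to fusing a perfect
    pseudo-measurement d = D x with zero noise."""
    S = D @ P @ D.T
    K = P @ D.T @ np.linalg.inv(S)
    x_c = x + K @ (d - D @ x)
    P_c = P - K @ D @ P
    return x_c, P_c

# toy example: two scalar coordinates whose sum is known exactly (e.g., a
# fixed geometric relation between two on-body nodes)
x = np.array([1.0, 1.0])
P = np.eye(2)
D = np.array([[1.0, 1.0]])
d = np.array([1.0])
x_c, P_c = constrain_estimate(x, P, D, d)
# x_c satisfies the constraint exactly: x_c = [0.5, 0.5]
```

For nonlinear constraints, the paper's first- and second-order approaches amount to replacing D and d with a Taylor expansion of the constraint about the current estimate before applying this projection.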

  10. Fidelity and enhanced sensitivity of differential transcription profiles following linear amplification of nanogram amounts of endothelial mRNA

    NASA Technical Reports Server (NTRS)

    Polacek, Denise C.; Passerini, Anthony G.; Shi, Congzhu; Francesco, Nadeene M.; Manduchi, Elisabetta; Grant, Gregory R.; Powell, Steven; Bischof, Helen; Winkler, Hans; Stoeckert, Christian J Jr

    2003-01-01

    Although mRNA amplification is necessary for microarray analyses from limited amounts of cells and tissues, the accuracy of transcription profiles following amplification has not been well characterized. We tested the fidelity of differential gene expression following linear amplification by T7-mediated transcription in a well-established in vitro model of cytokine [tumor necrosis factor alpha (TNFalpha)]-stimulated human endothelial cells using filter arrays of 13,824 human cDNAs. Transcriptional profiles generated from amplified antisense RNA (aRNA) (from 100 ng total RNA, approximately 1 ng mRNA) were compared with profiles generated from unamplified RNA originating from the same homogeneous pool. Amplification accurately identified TNFalpha-induced differential expression in 94% of the genes detected using unamplified samples. Furthermore, an additional 1,150 genes were identified as putatively differentially expressed using amplified RNA which remained undetected using unamplified RNA. Of genes sampled from this set, 67% were validated by quantitative real-time PCR as truly differentially expressed. Thus, in addition to demonstrating fidelity in gene expression relative to unamplified samples, linear amplification results in improved sensitivity of detection and enhances the discovery potential of high-throughput screening by microarrays.

  11. Measurement bias detection with Kronecker product restricted models for multivariate longitudinal data: an illustration with health-related quality of life data from thirteen measurement occasions

    PubMed Central

    Verdam, Mathilde G. E.; Oort, Frans J.

    2014-01-01

    Highlights: Application of the Kronecker product to construct parsimonious structural equation models for multivariate longitudinal data. A method for the investigation of measurement bias with Kronecker product restricted models. Application of these methods to health-related quality of life data from bone metastasis patients, collected at 13 consecutive measurement occasions. The use of curves to facilitate substantive interpretation of apparent measurement bias. Assessment of change in common factor means, after accounting for apparent measurement bias.

    Longitudinal measurement invariance is usually investigated with a longitudinal factor model (LFM). However, with multiple measurement occasions, the number of parameters to be estimated increases with a multiple of the number of measurement occasions. To guard against too low ratios of numbers of subjects and numbers of parameters, we can use Kronecker product restrictions to model the multivariate longitudinal structure of the data. These restrictions can be imposed on all parameter matrices, including measurement invariance restrictions on factor loadings and intercepts. The resulting models are parsimonious and have attractive interpretation, but require different methods for the investigation of measurement bias. Specifically, additional parameter matrices are introduced to accommodate possible violations of measurement invariance. These additional matrices consist of measurement bias parameters that are either fixed at zero or free to be estimated. In cases of measurement bias, it is also possible to model the bias over time, e.g., with linear or non-linear curves. Measurement bias detection with Kronecker product restricted models will be illustrated with multivariate longitudinal data from 682 bone metastasis patients whose health-related quality of life (HRQL) was measured at 13 consecutive weeks. PMID:25295016
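The parameter saving from the Kronecker product restriction is easy to make concrete. With 13 occasions and, say, 4 observed variables (the variable count here is an assumption for illustration, not taken from the study), an unrestricted 52 × 52 covariance matrix has 1378 free parameters, while Σ_occasions ⊗ Σ_variables needs only 91 + 10, minus 1 for an identification constraint.

```python
import numpy as np

n_occ, n_var = 13, 4   # 13 measurement occasions; 4 variables is hypothetical

# AR(1)-style occasion covariance and an arbitrary positive-definite
# variable covariance (both invented for illustration)
rho = 0.7
idx = np.arange(n_occ)
S_occ = rho ** np.abs(idx[:, None] - idx[None, :])
S_var = np.array([[1.0, 0.3, 0.2, 0.1],
                  [0.3, 1.0, 0.4, 0.2],
                  [0.2, 0.4, 1.0, 0.3],
                  [0.1, 0.2, 0.3, 1.0]])

# full model-implied covariance over all occasion-variable combinations
S = np.kron(S_occ, S_var)          # shape (52, 52), positive definite

unrestricted = (n_occ * n_var) * (n_occ * n_var + 1) // 2            # 1378
kronecker = n_occ * (n_occ + 1) // 2 + n_var * (n_var + 1) // 2 - 1  # 100
```

The restricted structure keeps the subjects-to-parameters ratio manageable, which is exactly the motivation the abstract gives for the Kronecker approach.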

  13. Galaxy Clustering, Photometric Redshifts and Diagnosis of Systematics in the DES Science Verification Data

    DOE PAGES

    Crocce, M.

    2015-12-09

We study the clustering of galaxies detected at i < 22.5 in the Science Verification observations of the Dark Energy Survey (DES). Two-point correlation functions are measured using 2.3 × 10⁶ galaxies over a contiguous 116 deg² region in five bins of photometric redshift width Δz = 0.2 in the range 0.2 < z < 1.2. The impact of photometric redshift errors is assessed by comparing results using a template-based photo-z algorithm (BPZ) to a machine-learning algorithm (TPZ). A companion paper presents maps of several observational variables (e.g. seeing, sky brightness) which could modulate the galaxy density. Here we characterize and mitigate systematic errors on the measured clustering which arise from these observational variables, in addition to others such as Galactic dust and stellar contamination. After correcting for systematic effects, we then measure galaxy bias over a broad range of linear scales relative to mass clustering predicted from the Planck Λ cold dark matter model, finding agreement with the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) measurements with χ² of 4.0 (8.7) with 5 degrees of freedom for the TPZ (BPZ) redshifts. Furthermore, we test a 'linear bias' model, in which the galaxy clustering is a fixed multiple of the predicted non-linear dark matter clustering. The precision of the data allows us to determine that the linear bias model describes the observed galaxy clustering to 2.5 percent accuracy down to scales at least 4–10 times smaller than those on which linear theory is expected to be sufficient.
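The 'linear bias' test amounts to fitting a single multiplicative factor b² between the measured galaxy correlation function and the predicted matter one. A toy, entirely synthetic sketch of that one-parameter weighted least-squares fit follows; the power-law amplitude, error bars, and scales are made up for illustration and bear no relation to the DES measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.logspace(-1, 1, 20)            # angular scales (hypothetical)
w_mm = 0.05 * theta ** -0.8               # toy predicted matter correlation function
b_true, sigma = 1.3, 0.002                # invented bias and per-point error

# synthetic galaxy correlation function: w_gg = b^2 * w_mm + noise
w_gg = b_true**2 * w_mm + sigma * rng.standard_normal(theta.size)

# one-parameter weighted least squares for b^2 (uniform errors cancel
# from the weights, leaving a simple projection)
b2_hat = np.sum(w_gg * w_mm) / np.sum(w_mm**2)
b_hat = np.sqrt(b2_hat)

# goodness of fit of the linear-bias model
chi2 = np.sum((w_gg - b2_hat * w_mm) ** 2 / sigma**2)
```

A χ² near the number of degrees of freedom (19 here) indicates the fixed-multiple model is adequate over the fitted range of scales, the same kind of check the abstract reports.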

  15. Development and validation of a general purpose linearization program for rigid aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Antoniewicz, R. F.

    1985-01-01

A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high-performance aircraft.

  16. Additivity of nonsimultaneous masking for short Gaussian-shaped sinusoids.

    PubMed

    Laback, Bernhard; Balazs, Peter; Necciari, Thibaud; Savel, Sophie; Ystad, Solvi; Meunier, Sabine; Kronland-Martinet, Richard

    2011-02-01

    The additivity of nonsimultaneous masking was studied using Gaussian-shaped tone pulses (referred to as Gaussians) as masker and target stimuli. Combinations of up to four temporally separated Gaussian maskers with an equivalent rectangular bandwidth of 600 Hz and an equivalent rectangular duration of 1.7 ms were tested. Each masker was level-adjusted to produce approximately 8 dB of masking. Excess masking (exceeding linear additivity) was generally stronger than reported in the literature for longer maskers and comparable target levels. A model incorporating a compressive input/output function, followed by a linear summation stage, underestimated excess masking when using an input/output function derived from literature data for longer maskers and comparable target levels. The data could be predicted with a more compressive input/output function. Stronger compression may be explained by assuming that the Gaussian stimuli were too short to evoke the medial olivocochlear reflex (MOCR), whereas for longer maskers tested previously the MOCR caused reduced compression. Overall, the interpretation of the data suggests strong basilar membrane compression for very short stimuli.
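The linear-summation-after-compression model described in the abstract has a simple closed form for n equal-effect maskers: if internal effects are proportional to intensity raised to a compressive exponent c and those effects add linearly, combined masking exceeds the single-masker value by (10 log₁₀ n)/c dB. The sketch below is this textbook-style idealization, not the paper's fitted input/output function.

```python
import numpy as np

def combined_masking_db(single_db, n_maskers, c):
    """Predicted masking from n equal maskers under a power-law I/O function I**c.

    Internal effects add linearly: n * m**c = M**c, so the equivalent single
    masker intensity M exceeds m by (10 * log10(n)) / c dB.  c = 1 recovers
    plain intensity additivity; c < 1 (compression) produces excess masking."""
    return single_db + 10.0 * np.log10(n_maskers) / c

linear = combined_masking_db(8.0, 2, 1.0)        # ~11 dB: linear additivity
compressive = combined_masking_db(8.0, 2, 0.25)  # ~20 dB: strong excess masking
```

The more compressive the input/output function (smaller c), the larger the predicted excess masking, which is why the short-stimulus data, showing stronger excess masking, point toward stronger basilar-membrane compression.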

  17. Lidar-Based Estimates of Above-Ground Biomass in the Continental US and Mexico Using Ground, Airborne, and Satellite Observations

    NASA Technical Reports Server (NTRS)

    Nelson, Ross; Margolis, Hank; Montesano, Paul; Sun, Guoqing; Cook, Bruce; Corp, Larry; Andersen, Hans-Erik; DeJong, Ben; Pellat, Fernando Paz; Fickel, Thaddeus

    2016-01-01

    Existing national forest inventory plots, an airborne lidar scanning (ALS) system, and a space profiling lidar system (ICESat-GLAS) are used to generate circa 2005 estimates of total aboveground dry biomass (AGB) in forest strata, by state, in the continental United States (CONUS) and Mexico. The airborne lidar is used to link ground observations of AGB to space lidar measurements. Two sets of models are generated, the first relating ground estimates of AGB to airborne laser scanning (ALS) measurements and the second set relating ALS estimates of AGB (generated using the first model set) to GLAS measurements. GLAS then, is used as a sampling tool within a hybrid estimation framework to generate stratum-, state-, and national-level AGB estimates. A two-phase variance estimator is employed to quantify GLAS sampling variability and, additively, ALS-GLAS model variability in this current, three-phase (ground-ALS-space lidar) study. The model variance component characterizes the variability of the regression coefficients used to predict ALS-based estimates of biomass as a function of GLAS measurements. Three different types of predictive models are considered in CONUS to determine which produced biomass totals closest to ground-based national forest inventory estimates - (1) linear (LIN), (2) linear-no-intercept (LNI), and (3) log-linear. For CONUS at the national level, the GLAS LNI model estimate (23.95 +/- 0.45 Gt AGB), agreed most closely with the US national forest inventory ground estimate, 24.17 +/- 0.06 Gt, i.e., within 1%. The national biomass total based on linear ground-ALS and ALS-GLAS models (25.87 +/- 0.49 Gt) overestimated the national ground-based estimate by 7.5%. The comparable log-linear model result (63.29 +/-1.36 Gt) overestimated ground results by 261%. All three national biomass GLAS estimates, LIN, LNI, and log-linear, are based on 241,718 pulses collected on 230 orbits. 
The US national forest inventory (ground) estimates are based on 119,414 ground plots. At the US state level, the average absolute value of the deviation of LNI GLAS estimates from the comparable ground estimate of total biomass was 18.8% (range: Oregon, -40.8% to North Dakota, 128.6%). Log-linear models produced gross overestimates in the continental US, i.e., more than 2.6×, and the use of this model to predict regional biomass using GLAS data in temperate, western hemisphere forests is not appropriate. The best model form, LNI, is used to produce biomass estimates in Mexico. The average biomass density in Mexican forests is 53.10 +/- 0.88 t/ha, and the total biomass for the country, given a total forest area of 688,096 sq km, is 3.65 +/- 0.06 Gt. In Mexico, our GLAS biomass total underestimated a 2005 FAO estimate (4.152 Gt) by 12% and overestimated a 2007/8 radar study's figure (3.06 Gt) by 19%.
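The difference between the LIN and LNI model forms reduces to whether an intercept is estimated. Below is a hypothetical NumPy sketch of both fits on synthetic biomass-like data; the numbers are invented for illustration and are not the study's ground plots or GLAS pulses.

```python
import numpy as np

rng = np.random.default_rng(2)
als = rng.uniform(10.0, 300.0, 100)                   # hypothetical ALS biomass (t/ha)
glas = 0.9 * als + 5.0 * rng.standard_normal(100)     # hypothetical GLAS-based estimate

# LNI: linear, no intercept -- the fit is forced through the origin,
# so slope = sum(x*y) / sum(x*x)
slope_lni = np.sum(als * glas) / np.sum(als * als)

# LIN: ordinary linear fit with an intercept, via least squares
X = np.column_stack([np.ones_like(als), als])
(intercept, slope_lin), *_ = np.linalg.lstsq(X, glas, rcond=None)
```

Forcing the fit through the origin encodes the physical expectation that zero lidar signal corresponds to zero biomass, one plausible reason the LNI form tracked the ground-based national totals most closely.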

  18. Anisotropic piezoelectric twist actuation of helicopter rotor blades: Aeroelastic analysis and design optimization

    NASA Astrophysics Data System (ADS)

    Wilkie, William Keats

    1997-12-01

    An aeroelastic model suitable for control law and preliminary structural design of composite helicopter rotor blades incorporating embedded anisotropic piezoelectric actuator laminae is developed. The aeroelasticity model consists of a linear, nonuniform beam representation of the blade structure, including linear piezoelectric actuation terms, coupled with a nonlinear, finite-state unsteady aerodynamics model. A Galerkin procedure and numerical integration in the time domain are used to obtain a solution. For limited additional piezoelectric material mass, it is shown that blade twist actuation approaches which exploit in-plane piezoelectric free-strain anisotropies are capable of producing amplitudes of oscillatory blade twisting sufficient for rotor vibration reduction applications. The second study examines the effectiveness of using embedded piezoelectric actuator laminae to alleviate vibratory loads due to retreating blade stall. A 10 to 15 percent improvement in dynamic stall limited forward flight speed, and a 5 percent improvement in stall limited rotor thrust, were numerically demonstrated for the active twist rotor blade relative to a conventional blade design. The active twist blades are also demonstrated to be more susceptible than the conventional blades to dynamic stall induced vibratory loads when not operating with twist actuation. 
This is the result of designing the active twist blades with low torsional stiffness in order to maximize piezoelectric twist authority. Determining the optimum tradeoff between blade torsional stiffness and piezoelectric twist actuation authority is the subject of the third study. For this investigation, a linearized hovering-flight eigenvalue analysis is developed. Linear optimal control theory is then utilized to develop an optimum active twist blade design in terms of reducing structural energy and control effort cost. The forward flight vibratory loads characteristics of the torsional stiffness optimized active twist blade are then examined using the nonlinear, forward flight aeroelastic analysis. The optimized active twist rotor blade is shown to have improved passive and active vibratory loads characteristics relative to the baseline active twist blades.
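The third study's pairing of a linearized eigenvalue analysis with linear optimal control can be illustrated on a toy single-mode torsional system. The LQR weighting below (structural energy vs. control effort) mirrors the stated cost trade-off, but the model and every parameter value are illustrative assumptions, not the dissertation's:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy single-mode blade torsion: theta'' + 2*zeta*w*theta' + w^2*theta = u
# State x = [theta, theta_dot]; w, zeta chosen only for illustration.
w, zeta = 10.0, 0.02
A = np.array([[0.0, 1.0], [-w**2, -2.0 * zeta * w]])
B = np.array([[0.0], [1.0]])

Q = np.diag([w**2, 1.0])   # penalize structural (strain + kinetic) energy
R = np.array([[1.0]])      # penalize actuation (control effort)

# Solve the continuous-time algebraic Riccati equation for the LQR gain
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # optimal state feedback u = -K x

# Closed-loop eigenvalues should all have negative real parts (stable)
eig_cl = np.linalg.eigvals(A - B @ K)
print("LQR gain:", K)
print("closed-loop eigenvalues:", eig_cl)
```

Lowering the modeled torsional stiffness (smaller `w`) increases the twist response per unit control but shifts the eigenvalues toward the imaginary axis, which is the essence of the stiffness-vs-authority trade-off the study optimizes.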

  19. Regional variability among nonlinear chlorophyll-phosphorus relationships in lakes

    USGS Publications Warehouse

    Filstrup, Christopher T.; Wagner, Tyler; Soranno, Patricia A.; Stanley, Emily H.; Stow, Craig A.; Webster, Katherine E.; Downing, John A.

    2014-01-01

    The relationship between chlorophyll a (Chl a) and total phosphorus (TP) is a fundamental relationship in lakes that reflects multiple aspects of ecosystem function and is also used in the regulation and management of inland waters. The exact form of this relationship has substantial implications for its meaning and its use. We assembled a spatially extensive data set to examine whether nonlinear models fit Chl a-TP relationships better than traditional log-linear models, whether there were regional differences in the form of the relationships, and, if so, which regional factors were related to these differences. We analyzed a data set from 2105 temperate lakes across 35 ecoregions by fitting and comparing two different nonlinear models and one log-linear model. The two nonlinear models fit the data better than the log-linear model. In addition, the parameters for the best-fitting model varied among regions: the maximum and lower Chl a asymptotes were positively and negatively related to percent regional pasture land use, respectively, and the rate at which chlorophyll increased with TP was negatively related to percent regional wetland cover. Lakes in regions with more pasture land had higher maximum chlorophyll concentrations at high TP concentrations but lower minimum chlorophyll concentrations at low TP concentrations. Lakes in regions with less wetland cover showed a steeper Chl a-TP relationship than those in wetland-rich regions. Interpretation of Chl a-TP relationships depends on regional differences, and theory and management based on a single monolithic relationship may be inaccurate.
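The model comparison described above can be sketched by fitting a log-linear model (a straight line in log-log space) against a four-parameter sigmoid with lower and upper asymptotes. The functional forms and synthetic data below are illustrative assumptions, not the study's models or data:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)

# Synthetic log10(TP) and log10(Chl a) generated with a sigmoid shape.
log_tp = rng.uniform(0.5, 3.0, 300)
truth = 0.5 + (2.5 - 0.5) / (1.0 + np.exp(-2.0 * (log_tp - 1.8)))
log_chl = truth + rng.normal(0.0, 0.1, 300)

def loglinear(x, b0, b1):
    # Straight line in log-log space
    return b0 + b1 * x

def sigmoid(x, lower, upper, rate, mid):
    # Lower/upper Chl a asymptotes, slope 'rate' at midpoint 'mid'
    return lower + (upper - lower) / (1.0 + np.exp(-rate * (x - mid)))

p_lin, _ = curve_fit(loglinear, log_tp, log_chl)
p_sig, _ = curve_fit(sigmoid, log_tp, log_chl, p0=[0.0, 3.0, 1.0, 1.5])

sse_lin = np.sum((log_chl - loglinear(log_tp, *p_lin)) ** 2)
sse_sig = np.sum((log_chl - sigmoid(log_tp, *p_sig)) ** 2)
print("SSE log-linear:", sse_lin, " SSE sigmoid:", sse_sig)
```

When the underlying response saturates at high TP and flattens at low TP, the sigmoid's smaller residual error falls directly out of this comparison, which is the shape of argument the paper makes at a much larger scale.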

  20. Latent degradation indicators estimation and prediction: A Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin

    2011-01-01

    Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, or the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise inferences about asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated gearbox test. In this application, the new state space model fits the data better than a state space model with linear and Gaussian assumptions.
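A common Monte Carlo approach to this kind of latent-state estimation is a particle filter, which needs none of the discrete-state, linear, or Gaussian assumptions. The sketch below tracks a monotone (irreversible) latent crack depth through a nonlinear observation; all model forms and parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy degradation: latent crack depth grows monotonically (irreversible),
# observed only through a noisy, nonlinear indirect indicator.
T, N = 50, 2000                                   # time steps, particles
truth = np.cumsum(rng.gamma(2.0, 0.05, T))        # monotone latent path
obs = np.log1p(truth) + rng.normal(0.0, 0.05, T)  # nonlinear observation

particles = np.zeros(N)
est = np.empty(T)
for t in range(T):
    # Propagate: gamma increments keep the process non-negative and irreversible
    particles = particles + rng.gamma(2.0, 0.05, N)
    # Weight by observation likelihood (Gaussian measurement noise)
    w = np.exp(-0.5 * ((obs[t] - np.log1p(particles)) / 0.05) ** 2)
    w = w + 1e-12                 # guard against all-zero weights
    w /= w.sum()
    est[t] = np.sum(w * particles)
    # Multinomial resampling to avoid weight degeneracy
    particles = particles[rng.choice(N, N, p=w)]

rmse = np.sqrt(np.mean((est - truth) ** 2))
print("RMSE of latent-state estimate:", rmse)
```

Because the transition uses gamma increments and the observation is nonlinear, this filter works in continuous state space directly, illustrating why a Monte Carlo scheme suits nonlinear, irreversible degradation better than a linear-Gaussian Kalman-style model.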
