ERIC Educational Resources Information Center
Ferrando, Pere J.
2004-01-01
This study used kernel-smoothing procedures to estimate the item characteristic functions (ICFs) of a set of continuous personality items. The nonparametric ICFs were compared with the ICFs estimated (a) by the linear model and (b) by Samejima's continuous-response model. The study was based on a conditioned approach and used an error-in-variables…
A Linear Variable-[theta] Model for Measuring Individual Differences in Response Precision
ERIC Educational Resources Information Center
Ferrando, Pere J.
2011-01-01
Models for measuring individual response precision have been proposed for binary and graded responses. However, more continuous formats are quite common in personality measurement and are usually analyzed with the linear factor analysis model. This study extends the general Gaussian person-fluctuation model to the continuous-response case and…
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid
2017-02-01
Model-order reduction and minimization of the CPU run-time while maintaining the model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function of the electrode's open circuit potential dependence on the state of charge to continuous piecewise regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear, region. Applying the proposed technique cuts down the CPU run-time by around 20%, compared to the reduced-order, electrode-average model. Finally, the model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately with less than 2% error.
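The piecewise-linear reduction step described above can be sketched numerically. This is a minimal illustration assuming a hypothetical smooth OCV(SOC) curve and uniformly spaced knots; the paper fits measured electrode curves and optimizes knot placement, neither of which is reproduced here.

```python
import numpy as np

# Hypothetical smooth OCV(SOC) curve standing in for the electrode's
# open-circuit-potential function; the paper's measured curves and its
# optimal knot-placement optimization are not reproduced here.
def ocv(soc):
    return 3.2 + 0.8 * soc - 0.3 * np.exp(-10.0 * soc)

knots = np.linspace(0.0, 1.0, 8)   # uniform knots (the paper optimizes these)
ocv_at_knots = ocv(knots)

soc_grid = np.linspace(0.0, 1.0, 1001)
# np.interp evaluates the continuous piecewise-linear interpolant through the knots
pwl = np.interp(soc_grid, knots, ocv_at_knots)

max_err = np.max(np.abs(pwl - ocv(soc_grid)))
print(f"max piecewise-linear error with 8 uniform knots: {max_err:.4f} V")
```

With uniform knots the worst-case error concentrates where the curve bends most; placing knots optimally in those regions is what buys the reported accuracy at reduced CPU cost.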
Hybrid Discrete-Continuous Markov Decision Processes
NASA Technical Reports Server (NTRS)
Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich
2003-01-01
This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the one-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Dealing with several continuous dimensions raises new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.
Theoretical and Empirical Comparisons between Two Models for Continuous Item Responses.
ERIC Educational Resources Information Center
Ferrando, Pere J.
2002-01-01
Analyzed the relations between two continuous response models intended for typical response items: the linear congeneric model and Samejima's continuous response model (CRM). Illustrated the relations described using an empirical example and assessed the relations through a simulation study. (SLD)
Some Statistics for Assessing Person-Fit Based on Continuous-Response Models
ERIC Educational Resources Information Center
Ferrando, Pere Joan
2010-01-01
This article proposes several statistics for assessing individual fit based on two unidimensional models for continuous responses: linear factor analysis and Samejima's continuous response model. Both models are approached using a common framework based on underlying response variables and are formulated at the individual level as fixed regression…
Nonlinear 2D arm dynamics in response to continuous and pulse-shaped force perturbations.
Happee, Riender; de Vlugt, Erwin; van Vliet, Bart
2015-01-01
Ample evidence exists regarding the nonlinearity of the neuromuscular system, but linear models are widely applied to capture postural dynamics. This study quantifies the nonlinearity of human arm postural dynamics applying 2D continuous force perturbations (0.2-40 Hz) inducing three levels of hand displacement (5, 15, 45 mm RMS) followed by force-pulse perturbations inducing large hand displacements (up to 250 mm) in a position task (PT) and a relax task (RT), while recording activity of eight shoulder and elbow muscles. The continuous perturbation data were used to analyze the 2D endpoint dynamics in the frequency domain and to identify reflexive and intrinsic parameters of a linear neuromuscular shoulder-elbow model. Subsequently, it was assessed to what extent the large displacements in response to force pulses could be predicted from the 'small amplitude' linear neuromuscular model. Continuous and pulse perturbation responses with varying amplitudes disclosed highly nonlinear effects. In PT, a larger continuous perturbation induced stiffening with a factor of 1.5 attributed to task adaptation evidenced by increased co-contraction and reflexive activity. This task adaptation was even more profound in the pulse responses where reflexes and displacements were strongly affected by the presence and amplitude of preceding continuous perturbations. In RT, a larger continuous perturbation resulted in yielding with a factor of 3.8 attributed to nonlinear mechanical properties as no significant reflexive activity was found. Pulse perturbations always resulted in yielding where a model fitted to the preceding 5-mm continuous perturbations predicted only 37% of the recorded peak displacements in RT and 79% in PT. This demonstrates that linear neuromuscular models, identified using continuous perturbations with small amplitudes, strongly underestimate displacements in pulse-shaped (e.g., impact) loading conditions.
The data will be used to validate neuromuscular models including nonlinear muscular (e.g., Hill and Huxley) and reflexive components.
Non-Linear System Identification for Aeroelastic Systems with Application to Experimental Data
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2008-01-01
Representation and identification of a non-linear aeroelastic pitch-plunge system as a model of the NARMAX class is considered. A non-linear difference equation describing this aircraft model is derived theoretically and shown to be of the NARMAX form. Identification methods for NARMAX models are applied to aeroelastic dynamics and its properties demonstrated via continuous-time simulations of experimental conditions. Simulation results show that (i) the outputs of the NARMAX model match closely those generated using continuous-time methods and (ii) NARMAX identification methods applied to aeroelastic dynamics provide accurate discrete-time parameter estimates. Application of NARMAX identification to experimental pitch-plunge dynamics data gives a high percent fit for cross-validated data.
An Expert System for the Evaluation of Cost Models
1990-09-01
contrast to the condition of equal error variance, called homoscedasticity. (Reference: Applied Linear Regression Models by John Neter, page 423) …normal. (Reference: Applied Linear Regression Models by John Neter, page 125) …over time. Error terms correlated over time are said to be autocorrelated or serially correlated. (Reference: Applied Linear Regression Models by John Neter)
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
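The fixed-knot, fixed-order special case of such a spline can be sketched with the classical truncated power basis, which builds the continuity and smoothness side conditions into the basis itself. The data, single knot, and cubic order below are illustrative, and the random-effects part of the mixed model is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Truncated power basis for a fixed-knot regression spline: a global cubic
# plus a term (t - k)^3_+ at each knot gives a piecewise cubic that is
# continuous with continuous first and second derivatives at the knots.
def spline_design(t, knots, degree=3):
    cols = [t ** p for p in range(degree + 1)]
    cols += [np.clip(t - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

# Simulated longitudinal-style data (illustrative only; not the HIV viral
# load or blood-pressure data analyzed in the paper).
t = np.linspace(0.0, 10.0, 200)
truth = np.piecewise(t, [t < 4.0, t >= 4.0],
                     [lambda s: 2.0 + 0.5 * s,
                      lambda s: 4.0 - 0.3 * (s - 4.0)])
y = truth + rng.normal(0.0, 0.2, t.size)

# Fixed-effects fit by ordinary least squares in the implicitly
# constrained parameterization.
X = spline_design(t, knots=[4.0])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ beta
print("RMSE:", np.sqrt(np.mean((fit - y) ** 2)))
```

Varying the polynomial order per segment, as in the paper, amounts to changing which truncated power terms enter the design matrix while keeping the continuity constraints.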
Henry, B I; Langlands, T A M; Wearne, S L
2006-09-01
We have revisited the problem of anomalously diffusing species, modeled at the mesoscopic level using continuous time random walks, to include linear reaction dynamics. If a constant proportion of walkers are added or removed instantaneously at the start of each step then the long time asymptotic limit yields a fractional reaction-diffusion equation with a fractional order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps then the long time asymptotic limit has a standard linear reaction kinetics term but a fractional order temporal derivative operating on a nonstandard diffusion term. Results from the above two models are compared with a phenomenological model with standard linear reaction kinetics and a fractional order temporal derivative operating on a standard diffusion term. We have also developed further extensions of the CTRW model to include more general reaction dynamics.
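Two of the limiting forms described above can be written schematically. This is a hedged reconstruction from the abstract's wording, with K a diffusion coefficient, k a linear rate constant, and a Riemann-Liouville fractional derivative of order 1-alpha; the paper's exact operators may differ, and the second model (fractional derivative on a nonstandard diffusion term) is omitted.

```latex
% (a) Walkers added/removed at the start of each step: the fractional
% derivative acts on both the diffusion and the linear reaction term.
\frac{\partial u}{\partial t}
  = {}_{0}D_t^{1-\alpha}\!\left[ K\,\frac{\partial^2 u}{\partial x^2} - k\,u \right]

% (c) Phenomenological comparison model: standard linear kinetics, with the
% fractional derivative acting on the standard diffusion term only.
\frac{\partial u}{\partial t}
  = {}_{0}D_t^{1-\alpha}\!\left[ K\,\frac{\partial^2 u}{\partial x^2} \right] - k\,u
```

Setting alpha to 1 reduces both forms to the classical reaction-diffusion equation, which is the consistency check the long-time asymptotics must satisfy.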
Design of Linear-Quadratic-Regulator for a CSTR process
NASA Astrophysics Data System (ADS)
Meghna, P. R.; Saranya, V.; Jaganatha Pandian, B.
2017-11-01
This paper aims at creating a Linear Quadratic Regulator (LQR) for a Continuous Stirred Tank Reactor (CSTR). A CSTR is a common process used in chemical industries. It is a highly non-linear system. Therefore, in order to create the gain feedback controller, the model is linearized. The controller is designed for the linearized model and the concentration and volume of the liquid in the reactor are kept at a constant value as required.
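The LQR design step for a linearized plant reduces to solving a continuous algebraic Riccati equation. The sketch below uses illustrative 2x2 linearized dynamics, not the paper's CSTR linearization, and solves the Riccati equation via the stable invariant subspace of the Hamiltonian matrix.

```python
import numpy as np

# Illustrative linearized dynamics around an operating point; the paper's
# CSTR linearization (concentration and volume states) is not reproduced.
A = np.array([[-2.0, 0.5],
              [ 1.0, -3.0]])
B = np.array([[1.0],
              [0.0]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # input weighting

# Continuous algebraic Riccati equation A'P + PA - PBR^{-1}B'P + Q = 0,
# solved from the stable invariant subspace of the Hamiltonian matrix.
H = np.block([[A, -B @ np.linalg.inv(R) @ B.T],
              [-Q, -A.T]])
w, V = np.linalg.eig(H)
stable = V[:, w.real < 0]           # eigenvectors of the n stable eigenvalues
n = A.shape[0]
P = np.real(stable[n:] @ np.linalg.inv(stable[:n]))
K = np.linalg.inv(R) @ B.T @ P      # LQR state-feedback gain

cl_eigs = np.linalg.eigvals(A - B @ K)
print("LQR gain:", K)
print("closed-loop eigenvalues:", cl_eigs)
```

The closed-loop matrix A - BK is Hurwitz by construction, which is what keeps the regulated concentration and volume at their setpoints in the paper's application.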
2016-04-01
incorporated with nonlinear elements to produce a continuous, quasi-nonlinear simulation model. Extrapolation methods within the model stitching architecture... Simulation Model, Quasi-Nonlinear, Piloted Simulation, Flight-Test Implications, System Identification, Off-Nominal Loading Extrapolation, Stability...
Yang, Xiaowei; Nie, Kun
2008-03-15
Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
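The three-step procedure (transform the curves, fit a linear model in the transformed domain, test the coefficients) can be sketched as follows. The data are simulated, only the real parts of the Fourier coefficients are used, and per-frequency two-sample statistics stand in for the fitted regression coefficients, so this is a simplified sketch rather than the paper's full FLRM machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated repeated measures: 2 groups x 30 subjects x 64 time points
# (illustrative; not the smoking-cessation trial data).
n, T = 30, 64
t = np.linspace(0, 1, T)
g0 = rng.normal(0, 1, (n, T)) + np.sin(2 * np.pi * t)
g1 = rng.normal(0, 1, (n, T)) + np.sin(2 * np.pi * t) + 0.8 * np.cos(4 * np.pi * t)

# Step 1: transform each subject's curve to the Fourier domain
# (real parts only, a simplification).
F0 = np.fft.rfft(g0, axis=1).real
F1 = np.fft.rfft(g1, axis=1).real

# Step 2: coefficient-by-coefficient group comparison, i.e. a linear model
# with a group indicator fitted in the transformed domain.
diff = F1.mean(axis=0) - F0.mean(axis=0)
se = np.sqrt(F0.var(axis=0, ddof=1) / n + F1.var(axis=0, ddof=1) / n)
z = diff / se

# Step 3: adaptive Neyman statistic: maximize the standardized partial sum
# of (z_k^2 - 1) over truncation points m.
z2 = z ** 2
partial = np.cumsum(z2 - 1.0)
m = np.arange(1, z2.size + 1)
T_AN = np.max(partial / np.sqrt(2.0 * m))
print("adaptive Neyman statistic:", T_AN)
```

Because the group difference is concentrated in a single low frequency, the adaptive truncation picks it up with far fewer coefficients than the 64 original time points, which is the point of testing in the transformed domain.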
Rajeswaran, Jeevanantham; Blackstone, Eugene H
2017-02-01
In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients.
How Robust Is Linear Regression with Dummy Variables?
ERIC Educational Resources Information Center
Blankmeyer, Eric
2006-01-01
Researchers in education and the social sciences make extensive use of linear regression models in which the dependent variable is continuous-valued while the explanatory variables are a combination of continuous-valued regressors and dummy variables. The dummies partition the sample into groups, some of which may contain only a few observations.…
NASA Astrophysics Data System (ADS)
Yang, Liang-Yi; Sun, Di-Hua; Zhao, Min; Cheng, Sen-Lin; Zhang, Geng; Liu, Hui
2018-03-01
In this paper, a new micro-cooperative driving car-following model is proposed to investigate the effect of continuous historical velocity difference information on traffic stability. The linear stability criterion of the new model is derived with linear stability theory and the results show that the unstable region in the headway-sensitivity space will be shrunk by taking the continuous historical velocity difference information into account. Through nonlinear analysis, the mKdV equation is derived to describe the traffic evolution behavior of the new model near the critical point. Via numerical simulations, the theoretical analysis results are verified and the results indicate that the continuous historical velocity difference information can enhance the stability of traffic flow in the micro-cooperative driving process.
A Bayesian Semiparametric Latent Variable Model for Mixed Responses
ERIC Educational Resources Information Center
Fahrmeir, Ludwig; Raach, Alexander
2007-01-01
In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…
A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation
Rajeswaran, Jeevanantham; Blackstone, Eugene H.
2014-01-01
In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time varying coefficients. PMID:24919830
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
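The estimation steps and the regression-ANOVA connection mentioned above can be sketched directly: for simple linear regression the slope is r times s_y/s_x, and R-squared equals r-squared exactly. The data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data for two continuous variables (e.g., a microbiology
# setting: incubation temperature vs. log colony count; assumed, not real).
x = rng.uniform(20, 40, 50)
y = 0.12 * x + rng.normal(0, 0.5, 50)

# Pearson correlation and the simple linear regression fit y = b0 + b1*x.
r = np.corrcoef(x, y)[0, 1]
b1 = r * y.std(ddof=1) / x.std(ddof=1)   # slope recovered from r
b0 = y.mean() - b1 * x.mean()

# Model fit: for simple linear regression, R^2 equals r^2, and
# R^2 = SS_regression / SS_total is the regression/ANOVA connection.
resid = y - (b0 + b1 * x)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - np.sum(resid ** 2) / ss_tot
print(f"r = {r:.3f}, slope = {b1:.3f}, R^2 = {r2:.3f}")
```

The same residual and sum-of-squares decomposition carries over to multiple linear regression and the other extensions the chapter lists, with r-squared replaced by the multiple coefficient of determination.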
ORACLS: A system for linear-quadratic-Gaussian control law design
NASA Technical Reports Server (NTRS)
Armstrong, E. S.
1978-01-01
A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
Identifying the Factors That Influence Change in SEBD Using Logistic Regression Analysis
ERIC Educational Resources Information Center
Camilleri, Liberato; Cefai, Carmel
2013-01-01
Multiple linear regression and ANOVA models are widely used in applications since they provide effective statistical tools for assessing the relationship between a continuous dependent variable and several predictors. However these models rely heavily on linearity and normality assumptions and they do not accommodate categorical dependent…
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2006-01-01
Structural equation models are widely appreciated in social-psychological research and other behavioral research to model relations between latent constructs and manifest variables and to control for measurement error. Most applications of SEMs are based on fully observed continuous normal data and models with a linear structural equation.…
ERIC Educational Resources Information Center
Ferrando, Pere J.
2008-01-01
This paper develops results and procedures for obtaining linear composites of factor scores that maximize: (a) test information, and (b) validity with respect to external variables in the multiple factor analysis (FA) model. I treat FA as a multidimensional item response theory model, and use Ackerman's multidimensional information approach based…
Prakash, J; Srinivasan, K
2009-07-01
In this paper, the authors have represented the nonlinear system as a family of local linear state space models, local PID controllers have been designed on the basis of linear models, and the weighted sum of the output from the local PID controllers (Nonlinear PID controller) has been used to control the nonlinear process. Further, Nonlinear Model Predictive Controller using the family of local linear state space models (F-NMPC) has been developed. The effectiveness of the proposed control schemes has been demonstrated on a CSTR process, which exhibits dynamic nonlinearity.
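The weighted-sum idea can be sketched for the proportional part alone. The operating points, local gains, and Gaussian validity weights below are assumptions for illustration; the paper's local state-space models, full PID terms, and weighting scheme are not reproduced.

```python
import numpy as np

# Local controllers designed at local operating points are blended by
# normalized Gaussian validity weights over a scheduling variable z.
# All numbers here are illustrative, not the paper's CSTR design.
op_points = np.array([0.2, 0.5, 0.8])   # scheduling-variable values
kp_local = np.array([2.0, 1.2, 0.7])    # local proportional gains

def blended_gain(z, width=0.15):
    w = np.exp(-0.5 * ((z - op_points) / width) ** 2)
    w = w / w.sum()                      # normalized weights
    return float(w @ kp_local)

# Near an operating point the blend approximately recovers the local gain;
# between points it interpolates smoothly.
print(blended_gain(0.2), blended_gain(0.35), blended_gain(0.8))
```

The same blending applied to the local controllers' outputs, rather than a single gain, gives the nonlinear PID controller described in the abstract.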
An Alternative Derivation of the Energy Levels of the "Particle on a Ring" System
NASA Astrophysics Data System (ADS)
Vincent, Alan
1996-10-01
All acceptable wave functions must be continuous mathematical functions. This criterion limits the acceptable functions for a particle in a linear 1-dimensional box to sine functions. If, however, the linear box is bent round into a ring, acceptable wave functions are those which are continuous at the 'join'. On this model some acceptable linear functions become unacceptable for the ring and some unacceptable cosine functions become acceptable. This approach can be used to produce a straightforward derivation of the energy levels and wave functions of the particle on a ring. These simple wave mechanical systems can be used as models of linear and cyclic delocalised systems such as conjugated hydrocarbons or the benzene ring. The promotion energy of an electron can then be used to calculate the wavelength of absorption of uv light. The simple model gives results of the correct order of magnitude and shows that, as the chain length increases, the uv maximum moves to longer wavelengths, as found experimentally.
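The derivation sketched above yields E_n = n^2 hbar^2 / (2 m r^2) for the ring, and the promotion energy of the highest filled level fixes the predicted absorption wavelength. The benzene ring radius below is an assumed textbook value, used only to check the order of magnitude claimed in the abstract.

```python
import math

# Particle-on-a-ring energies E_n = n^2 * hbar^2 / (2 m r^2), n = 0, +-1, +-2, ...
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
h = 6.62607015e-34       # J s
c = 2.99792458e8         # m / s
r = 1.39e-10             # m, assumed benzene ring radius

def E(n):
    return n ** 2 * hbar ** 2 / (2 * m_e * r ** 2)

# Six pi electrons fill n = 0 and the doubly degenerate n = +-1 levels;
# the lowest promotion is n = 1 -> n = 2, so the predicted absorption is
# lambda = h c / (E(2) - E(1)).
dE = E(2) - E(1)
wavelength_nm = h * c / dE * 1e9
print(f"predicted absorption near {wavelength_nm:.0f} nm")
```

The result lands in the ultraviolet, the correct order of magnitude for benzene, and since dE scales as 1/r^2 a longer ring pushes the maximum to longer wavelengths, as the abstract notes.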
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and a useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with maximum normalised-difference vegetation index and the standard deviation of normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
NASA Astrophysics Data System (ADS)
Taousser, Fatima; Defoort, Michael; Djemai, Mohamed
2016-01-01
This paper investigates the consensus problem for linear multi-agent system with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
Direct use of linear time-domain aerodynamics in aeroservoelastic analysis: Aerodynamic model
NASA Technical Reports Server (NTRS)
Woods, J. A.; Gilbert, Michael G.
1990-01-01
The work presented here is the first part of a continuing effort to expand existing capabilities in aeroelasticity by developing the methodology which is necessary to utilize unsteady time-domain aerodynamics directly in aeroservoelastic design and analysis. The ultimate objective is to define a fully integrated state-space model of an aeroelastic vehicle's aerodynamics, structure and controls which may be used to efficiently determine the vehicle's aeroservoelastic stability. Here, the current status of developing a state-space model for linear or near-linear time-domain indicial aerodynamic forces is presented.
ERIC Educational Resources Information Center
Tarasenko, Larissa V.; Ougolnitsky, Guennady A.; Usov, Anatoly B.; Vaskov, Maksim A.; Kirik, Vladimir A.; Astoyanz, Margarita S.; Angel, Olga Y.
2016-01-01
A dynamic game theoretic model of concordance of interests in the process of social partnership in the system of continuing professional education is proposed. Non-cooperative, cooperative, and hierarchical setups are examined. Analytical solution for a linear state version of the model is provided. Nash equilibrium algorithms (for non-cooperative…
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
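The model-selection step above (a simple turbidity-only regression, upgraded to a turbidity-streamflow multiple regression when it measurably improves fit) can be sketched on synthetic data; residual standard error stands in here for the report's model standard percentage error (MSPE) criterion.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic paired data (illustrative; not USGS records): suspended-sediment
# concentration (SSC) driven by turbidity plus a streamflow contribution.
n = 120
turb = rng.uniform(10, 400, n)
flow = rng.uniform(1, 50, n)
ssc = 0.9 * turb + 2.0 * flow + rng.normal(0, 15, n)

def fit(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta, y - X1 @ beta

# Step 1: simple linear regression, SSC ~ turbidity.
_, r_simple = fit(turb, ssc)
# Step 2: multiple linear regression, SSC ~ turbidity + streamflow.
_, r_multi = fit(np.column_stack([turb, flow]), ssc)

# Compare residual standard errors; keep the streamflow term only if it
# yields a clear improvement.
rse_simple = np.sqrt(np.sum(r_simple ** 2) / (n - 2))
rse_multi = np.sqrt(np.sum(r_multi ** 2) / (n - 3))
print(f"RSE simple: {rse_simple:.1f}, RSE multiple: {rse_multi:.1f}")
```

The chosen model is then applied to the continuous turbidity (and streamflow) time series to produce the concentration and load record, with ongoing calibration samples guarding against the site-specific drift the report warns about.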
Flora, David B.; LaBrish, Cathy; Chalmers, R. Philip
2011-01-01
We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables. PMID:22403561
On Discontinuous Piecewise Linear Models for Memristor Oscillators
NASA Astrophysics Data System (ADS)
Amador, Andrés; Freire, Emilio; Ponce, Enrique; Ros, Javier
2017-06-01
In this paper, we provide for the first time rigorous mathematical results regarding the rich dynamics of piecewise linear memristor oscillators. In particular, for each nonlinear oscillator given in [Itoh & Chua, 2008], we show the existence of an infinite family of invariant manifolds and that the dynamics on such manifolds can be modeled without resorting to discontinuous models. Our approach provides topologically equivalent continuous models with one dimension less but with one extra parameter associated to the initial conditions. It is possible to justify the periodic behavior exhibited by three-dimensional memristor oscillators, by taking advantage of known results for planar continuous piecewise linear systems. The analysis developed not only confirms the numerical results contained in previous works [Messias et al., 2010; Scarabello & Messias, 2014] but also goes much further by showing the existence of closed surfaces in the state space which are foliated by periodic orbits. The important role of initial conditions that justify the infinite number of periodic orbits exhibited by these models, is stressed. The possibility of unsuspected bistable regimes under specific configurations of parameters is also emphasized.
The capability and constraint model of recoverability: An integrated theory of continuity planning.
Lindstedt, David
2017-01-01
While there are best practices, good practices, regulations and standards for continuity planning, there is no single model to collate and sort their various recommended activities. To address this deficit, this paper presents the capability and constraint model of recoverability - a new model to provide an integrated foundation for business continuity planning. The model is non-linear in both construct and practice, thus allowing practitioners to remain adaptive in its application. The paper presents each facet of the model, outlines the model's use in both theory and practice, suggests a subsequent approach that arises from the model, and discusses some possible ramifications to the industry.
Topology-induced bifurcations for the nonlinear Schrödinger equation on the tadpole graph.
Cacciapuoti, Claudio; Finco, Domenico; Noja, Diego
2015-01-01
In this paper we give the complete classification of solitons for a cubic nonlinear Schrödinger equation on the simplest network with a nontrivial topology: the tadpole graph, i.e., a ring with a half line attached to it and free boundary conditions at the junction. This is a step toward the modeling of condensate propagation and confinement in quasi-one-dimensional traps. The model, although simple, exhibits a surprisingly rich behavior and in particular we show that it admits: (i) a denumerable family of continuous branches of embedded solitons vanishing on the half line and bifurcating from linear eigenstates and threshold resonances of the system; (ii) a continuous branch of edge solitons bifurcating from the previous families at the threshold of the continuous spectrum with a pitchfork bifurcation; and (iii) a finite family of continuous branches of solitons without linear analog. All the solutions are explicitly constructed in terms of Jacobi elliptic functions. Moreover we show that families of nonlinear bound states of the above kind continue to exist in the presence of a uniform magnetic field orthogonal to the plane of the ring when a well-defined flux quantization condition holds true. In this sense the magnetic field acts as a control parameter. Finally we highlight the role of resonances in the linearization as a signature of the occurrence of bifurcations of solitons from the continuous spectrum.
ERIC Educational Resources Information Center
Foley, Greg
2011-01-01
Continuous feed and bleed ultrafiltration, modeled with the gel polarization model for the limiting flux, is shown to provide a rich source of non-linear algebraic equations that can be readily solved using numerical and graphical techniques familiar to undergraduate students. We present a variety of numerical problems in the design, analysis, and…
Hagenbeek, R E; Rombouts, S A R B; Veltman, D J; Van Strien, J W; Witter, M P; Scheltens, P; Barkhof, F
2007-10-01
Changes in brain activation as a function of continuous multiparametric word recognition have not, to our knowledge, been studied before using functional MR imaging (fMRI). Our aim was to identify linear changes in brain activation and, more interestingly, nonlinear changes in brain activation as a function of extended word repetition. Fifteen healthy young right-handed individuals participated in this study. An event-related extended continuous word-recognition task with 30 target words was used to study the parametric effect of word recognition on brain activation. Word-recognition-related brain activation was studied as a function of 9 word repetitions. fMRI data were analyzed with a general linear model with regressors for linearly changing signal intensity and nonlinearly changing signal intensity, according to group average reaction time (RT) and individual RTs. A network generally associated with episodic memory recognition showed either constant or linearly decreasing brain activation as a function of word repetition. Furthermore, both anterior and posterior cingulate cortices and the left middle frontal gyrus followed the nonlinear curve of the group RT, whereas the anterior cingulate cortex was also associated with individual RT. Linear alteration in brain activation as a function of word repetition explained most changes in blood oxygen level-dependent signal intensity. Using a hierarchically orthogonalized model, we found evidence for nonlinear activation associated with both group and individual RTs.
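A minimal sketch of the kind of general linear model the abstract describes: a design matrix with a constant, a linearly changing regressor, and an RT-shaped nonlinear regressor, fit by ordinary least squares. All values are synthetic illustrations, not the study's data, and the decaying curve is only a stand-in for the group RT profile.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 30                            # word presentations over the session
trend = np.linspace(-1, 1, n_trials)     # linearly changing component
rt = 1.0 / (1.0 + np.arange(n_trials))   # stand-in nonlinear RT-shaped curve
rt = (rt - rt.mean()) / rt.std()         # z-score the regressor

# Synthetic "activation": constant + linear decrease + RT-shaped component + noise
y = 2.0 - 0.8 * trend + 0.5 * rt + rng.normal(0, 0.05, n_trials)

X = np.column_stack([np.ones(n_trials), trend, rt])   # GLM design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)          # ordinary least squares fit
```

With orthogonalization (as in the hierarchical model the authors mention) the linear regressor would be projected out of the nonlinear one before fitting; here the raw regressors are used for brevity.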
Linear Modeling and Evaluation of Controls on Flow Response in Western Post-Fire Watersheds
NASA Astrophysics Data System (ADS)
Saxe, S.; Hogue, T. S.; Hay, L.
2015-12-01
This research investigates the impact of wildfires on watershed flow regimes throughout the western United States, specifically focusing on evaluation of fire events within specified subregions and determination of the impact of climate and geophysical variables in post-fire flow response. Fire events were collected through federal and state-level databases and streamflow data were collected from U.S. Geological Survey stream gages. A total of 263 watersheds were identified with at least 10 years of continuous pre-fire daily streamflow records and 5 years of continuous post-fire daily flow records. For each watershed, percent changes in runoff ratio (RO), annual seven day low-flows (7Q2) and annual seven day high-flows (7Q10) were calculated from pre- to post-fire. Numerous independent variables were identified for each watershed and fire event, including topographic, land cover, climate, burn severity, and soils data. The national watersheds were divided into five regions through K-clustering and a lasso linear regression model, applying the Leave-One-Out calibration method, was calculated for each region. Nash-Sutcliffe Efficiency (NSE) was used to determine the accuracy of the resulting models. The regions encompassing the United States along and west of the Rocky Mountains, excluding the coastal watersheds, produced the most accurate linear models. The Pacific coast region models produced poor and inconsistent results, indicating that the regions need to be further subdivided. At present, the runoff ratio and high-flow (7Q10) response variables appear to be more easily modeled than low flows (7Q2). Results of linear regression modeling showed varying importance of watershed and fire event variables, with conflicting correlation between land cover types and soil types by region. The addition of further independent variables and restriction of current variables based on correlation indicators is ongoing and should allow for more accurate linear regression modeling.
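The leave-one-out calibration and Nash-Sutcliffe Efficiency scoring described above can be sketched as follows. The data are synthetic, and ordinary least squares stands in for the authors' lasso regression to keep the example dependency-free.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 3                      # hypothetical watersheds and predictor count
X = rng.normal(size=(n, p))       # e.g. standardized burn severity, slope, climate
true_beta = np.array([1.5, -0.7, 0.3])
y = X @ true_beta + rng.normal(0, 0.2, n)   # synthetic % change in runoff ratio

# Leave-one-out predictions (OLS here; the study used lasso regression)
pred = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    A = np.column_stack([np.ones(n - 1), X[mask]])
    beta, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    pred[i] = np.concatenate([[1.0], X[i]]) @ beta

# Nash-Sutcliffe Efficiency of the held-out predictions (1 is perfect,
# 0 means no better than predicting the mean)
nse = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```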
An error bound for a discrete reduced order model of a linear multivariable system
NASA Technical Reports Server (NTRS)
Al-Saggaf, Ubaid M.; Franklin, Gene F.
1987-01-01
The design of feasible controllers for high dimension multivariable systems can be greatly aided by a method of model reduction. In order for the design based on the order reduction to include a guarantee of stability, it is sufficient to have a bound on the model error. Previous work has provided such a bound for continuous-time systems for algorithms based on balancing. In this note an L-infinity bound is derived for model error for a method of order reduction of discrete linear multivariable systems based on balancing.
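The L-infinity bound for balancing-based reduction of discrete systems can be checked numerically: the frequency-response error of the truncated balanced realization never exceeds twice the sum of the discarded Hankel singular values. The system matrices below are illustrative, not taken from the note.

```python
import numpy as np

# Stable discrete-time SISO system (illustrative values)
A = np.array([[0.8, 0.1, 0.0],
              [0.0, 0.5, 0.1],
              [0.0, 0.0, 0.3]])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 0.5, 0.2]])

def dlyap(A, Q):
    """Solve the discrete Lyapunov equation P = A P A^T + Q via vec/Kronecker."""
    n = A.shape[0]
    vecP = np.linalg.solve(np.eye(n * n) - np.kron(A, A), Q.reshape(-1))
    return vecP.reshape(n, n)

Wc = dlyap(A, B @ B.T)        # controllability Gramian
Wo = dlyap(A.T, C.T @ C)      # observability Gramian

# Balancing transformation and Hankel singular values
L = np.linalg.cholesky(Wc)
lam, U = np.linalg.eigh(L.T @ Wo @ L)
lam, U = lam[::-1], U[:, ::-1]                  # sort descending
hsv = np.sqrt(lam)                              # Hankel singular values
T = L @ U @ np.diag(hsv ** -0.5)
Tinv = np.diag(hsv ** 0.5) @ U.T @ np.linalg.inv(L)
Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T

r = 2                                            # retained order
Ar, Br, Cr = Ab[:r, :r], Bb[:r], Cb[:, :r]
bound = 2 * hsv[r:].sum()                        # L-infinity error bound

def freq_resp(A, B, C, z):
    return (C @ np.linalg.solve(z * np.eye(A.shape[0]) - A, B)).item()

w = np.linspace(0.0, np.pi, 400)
max_err = max(abs(freq_resp(A, B, C, np.exp(1j * wi))
                  - freq_resp(Ar, Br, Cr, np.exp(1j * wi))) for wi in w)
```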
Some comparisons of complexity in dictionary-based and linear computational models.
Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello
2011-03-01
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.
Levene, Louis S; Baker, Richard; Walker, Nicola; Williams, Christopher; Wilson, Andrew; Bankart, John
2018-06-01
Increased relationship continuity in primary care is associated with better health outcomes, greater patient satisfaction, and fewer hospital admissions. Greater socioeconomic deprivation is associated with lower levels of continuity, as well as poorer health outcomes. The aim was to investigate whether deprivation scores predicted variations in the decline over time of patient-perceived relationship continuity of care, after adjustment for practice organisational and population factors. An observational study was conducted in 6243 primary care practices with more than one GP in England, using a longitudinal multilevel linear model, 2012-2017 inclusive. Patient-perceived relationship continuity was calculated using two questions from the GP Patient Survey. The effect of deprivation on the linear slope of continuity over time was modelled, adjusting for nine confounding variables (practice population and organisational factors). Clustering of measurements within general practices was adjusted for by using a random intercepts and random slopes model. Descriptive statistics and univariable analyses were also undertaken. Relationship continuity declined by 27.5% between 2012 and 2017, and at all deprivation levels. Deprivation scores from 2012 did not predict variations in the decline of relationship continuity at practice level, after accounting for the effects of organisational and population confounding variables, which themselves did not predict, or weakly predicted with very small effect sizes, the decline of continuity. Cross-sectionally, continuity and deprivation were negatively correlated within each year. The decline in relationship continuity of care has been marked and widespread. Measures to maximise continuity will need to be feasible for individual practices with diverse population and organisational characteristics. © British Journal of General Practice 2018.
Log-Multiplicative Association Models as Item Response Models
ERIC Educational Resources Information Center
Anderson, Carolyn J.; Yu, Hsiu-Ting
2007-01-01
Log-multiplicative association (LMA) models, which are special cases of log-linear models, have interpretations in terms of latent continuous variables. Two theoretical derivations of LMA models based on item response theory (IRT) arguments are presented. First, we show that Anderson and colleagues (Anderson & Vermunt, 2000; Anderson & Bockenholt,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abrahamson, S.; Bender, M.; Book, S.
1989-05-01
This report provides dose-response models intended to be used in estimating the radiological health effects of nuclear power plant accidents. Models of early and continuing effects, cancers and thyroid nodules, and genetic effects are provided. Two-parameter Weibull hazard functions are recommended for estimating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary and gastrointestinal syndromes -- are considered. Linear and linear-quadratic models are recommended for estimating cancer risks. Parameters are given for analyzing the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid and "other". The category, "other" cancers, is intended to reflect the combined risks of multiple myeloma, lymphoma, and cancers of the bladder, kidney, brain, ovary, uterus and cervix. Models of childhood cancers due to in utero exposure are also provided. For most cancers, both incidence and mortality are addressed. Linear and linear-quadratic models are also recommended for assessing genetic risks. Five classes of genetic disease -- dominant, x-linked, aneuploidy, unbalanced translocation and multifactorial diseases -- are considered. In addition, the impact of radiation-induced genetic damage on the incidence of peri-implantation embryo losses is discussed. The uncertainty in modeling radiological health risks is addressed by providing central, upper, and lower estimates of all model parameters. Data are provided which should enable analysts to consider the timing and severity of each type of health risk. 22 refs., 14 figs., 51 tabs.
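The two model families the report recommends are simple closed forms. A sketch with illustrative parameter values (the report's actual parameters are dose- and endpoint-specific and are not reproduced here):

```python
import numpy as np

def early_effect_risk(dose_gy, d50=3.0, shape=5.0):
    """Two-parameter Weibull hazard for an early effect (illustrative values).

    Risk = 1 - exp(-ln 2 * (D / D50)^shape), parameterized so that the risk
    at the median lethal/effective dose D50 is exactly 0.5.
    """
    return 1.0 - np.exp(-np.log(2.0) * (dose_gy / d50) ** shape)

def excess_cancer_risk(dose_gy, alpha=0.05, beta=0.005):
    """Linear-quadratic excess-risk model (illustrative coefficients):
    risk = alpha * D + beta * D^2."""
    return alpha * dose_gy + beta * dose_gy ** 2
```

The Weibull form gives the sigmoidal, threshold-like dose response typical of deterministic early effects, while the linear-quadratic form gives the low-dose-linear behavior used for stochastic cancer risks.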
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granita; Bahar, A.
This paper discusses the linear birth and death with immigration and emigration (BIDE) process and its stochastic differential equation (SDE) model. The forward Kolmogorov equation of the continuous-time Markov chain (CTMC), with a central-difference approximation, was used to find the Fokker-Planck equation corresponding to a diffusion process having the stochastic differential equation of the BIDE process. The exact solution, mean and variance functions of the BIDE process were found.
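A diffusion approximation of a BIDE process can be simulated with Euler-Maruyama and checked against the mean function. The drift/diffusion form below (drift (λ-μ)X + (ν-ε), diffusion (λ+μ)X + ν+ε) is the standard density-dependent approximation and is an assumption of this sketch, not a quotation of the paper's equations.

```python
import numpy as np

# Rates (illustrative): births lam*X, deaths mu*X, immigration nu, emigration eps
lam, mu, nu, eps = 1.0, 1.0, 0.5, 0.2
x0, T, dt, n_paths = 50.0, 1.0, 1e-3, 2000
steps = int(T / dt)

rng = np.random.default_rng(2)
x = np.full(n_paths, x0)
for _ in range(steps):
    drift = (lam - mu) * x + (nu - eps)
    diff = np.sqrt(np.maximum((lam + mu) * x + nu + eps, 0.0))
    x = np.maximum(x + drift * dt + diff * np.sqrt(dt) * rng.normal(size=n_paths), 0.0)

# With lam == mu, the mean obeys m'(t) = nu - eps, so m(T) = x0 + (nu - eps) * T
analytic_mean = x0 + (nu - eps) * T
```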
Robust model predictive control for constrained continuous-time nonlinear systems
NASA Astrophysics Data System (ADS)
Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong
2018-02-01
In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees the actual trajectory being contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.
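The core tube idea, a feedback term that keeps the real trajectory inside a known-radius tube around the nominal one, can be shown on a scalar discrete-time toy system. This is only an illustration of the mechanism; it is not the paper's continuous-time nonlinear controller, and the nominal controller here is a plain proportional law standing in for a full MPC.

```python
import numpy as np

# Toy plant x+ = x + u + w with |w| <= w_max; nominal model z+ = z + v.
# Ancillary feedback u = v + K (x - z) gives error dynamics e+ = (1 + K) e + w,
# so starting from e = 0 the error stays within w_max / (1 - |1 + K|).
K, w_max = -0.5, 0.1
tube_radius = w_max / (1.0 - abs(1.0 + K))     # = 0.2 here

rng = np.random.default_rng(3)
x = z = 1.0
max_dev = 0.0
for _ in range(200):
    v = -0.5 * z                     # stand-in nominal controller (not a full MPC)
    u = v + K * (x - z)              # tube (ancillary) feedback
    w = rng.uniform(-w_max, w_max)   # bounded additive disturbance
    x, z = x + u + w, z + v
    max_dev = max(max_dev, abs(x - z))
```

In a full tube MPC, constraints on the nominal problem are tightened by `tube_radius` so that the real trajectory satisfies the original constraints despite the disturbance.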
Stochastic Stability of Sampled Data Systems with a Jump Linear Controller
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven
2004-01-01
In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.
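Mean-square stability of a discrete-time Markov jump linear system, the property at issue for the jump linear controller above, has a standard spectral-radius test (the Costa-Fragoso second-moment operator). The sketch below applies it to a scalar two-mode example of my own construction, showing that fast switching can stabilize a mix of stable and unstable modes while long dwell times cannot.

```python
import numpy as np

def ms_radius(modes, P):
    """Spectral radius of the second-moment operator of x+ = A_theta x,
    where theta is a Markov chain with transition matrix P (P[i, j] = prob
    of moving from mode i to mode j). Mean-square stable iff radius < 1."""
    N = len(modes)
    blocks = [[P[i, j] * np.kron(modes[i], modes[i]) for i in range(N)]
              for j in range(N)]
    return max(abs(np.linalg.eigvals(np.block(blocks))))

A1, A2 = np.array([[1.2]]), np.array([[0.3]])   # one unstable, one stable mode
P_fast = np.array([[0.5, 0.5], [0.5, 0.5]])     # rapid switching
P_slow = np.array([[0.9, 0.1], [0.1, 0.9]])     # long dwell in each mode

r_fast = ms_radius([A1, A2], P_fast)   # ~0.765: mean-square stable
r_slow = ms_radius([A1, A2], P_slow)   # > 1: destabilized by long dwell times
```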
NASA Technical Reports Server (NTRS)
Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San
1994-01-01
This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.
2D discontinuous piecewise linear map: Emergence of fashion cycles.
Gardini, L; Sushko, I; Matsuyama, K
2018-05-01
We consider a discrete-time version of the continuous-time fashion cycle model introduced in Matsuyama, 1992. Its dynamics are defined by a 2D discontinuous piecewise linear map depending on three parameters. In the parameter space of the map, periodicity regions associated with attracting cycles of different periods are organized in the period adding and period incrementing bifurcation structures. The boundaries of all the periodicity regions related to border collision bifurcations are obtained analytically in explicit form. We show the existence of several partially overlapping period incrementing structures, which is a novelty for the considered class of maps. Moreover, we show that if the time delay in the discrete-time formulation of the model shrinks to zero, the number of period incrementing structures tends to infinity and the dynamics of the discrete-time fashion cycle model converge to those of the continuous-time fashion cycle model.
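Period adding can be observed numerically even in the simplest setting. The sketch below sweeps a parameter of a contracting 1D circle map, a classic period-adding example chosen for brevity rather than the paper's 2D fashion-cycle map, and records the period of the attracting cycle.

```python
import numpy as np

def attractor_period(mu, lam=0.5, transient=2000, max_period=40, tol=1e-9):
    """Period of the attracting cycle of x -> (lam * x + mu) mod 1,
    a contracting circle map exhibiting period-adding as mu varies."""
    x = 0.1
    for _ in range(transient):          # let the orbit settle onto the attractor
        x = (lam * x + mu) % 1.0
    orbit = [x]
    for _ in range(max_period):
        x = (lam * x + mu) % 1.0
        for p, y in enumerate(reversed(orbit), start=1):
            if abs(x - y) < tol:        # first recurrence gives the period
                return p
        orbit.append(x)
    return None                          # no short cycle found (e.g. quasi-periodic)

periods = {attractor_period(mu) for mu in np.linspace(0.05, 0.95, 181)}
```

Sweeping `mu` reveals intervals of constant period separated by border collisions, with new periods appearing between existing ones, the signature of a period-adding structure.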
On a q-extension of the linear harmonic oscillator with the continuous orthogonality property on ℝ
NASA Astrophysics Data System (ADS)
Alvarez-Nodarse, R.; Atakishiyeva, M. K.; Atakishiyev, N. M.
2005-11-01
We discuss a q-analogue of the linear harmonic oscillator in quantum mechanics based on a q-extension of the classical Hermite polynomials H n ( x) recently introduced by us in R. Alvarez-Nodarse et al.: Boletin de la Sociedad Matematica Mexicana (3) 8 (2002) 127. The wave functions in this q-model of the quantum harmonic oscillator possess the continuous orthogonality property on the whole real line ℝ with respect to a positive weight function. A detailed description of the corresponding q-system is carried out.
NASA Astrophysics Data System (ADS)
Bhattacharjee, Sudip; Swamy, Aravind Krishna; Daniel, Jo S.
2012-08-01
This paper presents a simple and practical approach to obtain the continuous relaxation and retardation spectra of asphalt concrete directly from the complex (dynamic) modulus test data. The spectra thus obtained are continuous functions of relaxation and retardation time. The major advantage of this method is that the continuous form is directly obtained from the master curves which are readily available from the standard characterization tests of linear viscoelastic behavior of asphalt concrete. The continuous spectrum method offers an efficient alternative to the numerical computation of discrete spectra and can be easily used for modeling viscoelastic behavior. In this research, asphalt concrete specimens have been tested for linear viscoelastic characterization. The linear viscoelastic test data have been used to develop storage modulus and storage compliance master curves. The continuous spectra are obtained from the fitted sigmoid function of the master curves via the inverse integral transform. The continuous spectra are shown to be the limiting case of the discrete distributions. The continuous spectra and the time-domain viscoelastic functions (relaxation modulus and creep compliance) computed from the spectra matched very well with the approximate solutions. It is observed that the shape of the spectra is dependent on the master curve parameters. The continuous spectra thus obtained can easily be implemented in the material mix design process. Prony-series coefficients can be easily obtained from the continuous spectra and used in numerical analysis such as finite element analysis.
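The link between a storage-modulus master curve and a continuous relaxation spectrum can be illustrated with the classical first-order (Alfrey/Schwarzl-type) approximation H(τ) ≈ dE'(ω)/d ln ω at ω = 1/τ. The single-Maxwell-element "master curve" below is a stand-in for the paper's fitted sigmoid, with illustrative parameter values.

```python
import numpy as np

# Single Maxwell element: E'(w) = E * (w * tau0)^2 / (1 + (w * tau0)^2)
E, tau0 = 1000.0, 0.01                 # illustrative modulus and relaxation time
logw = np.linspace(-2, 6, 2000)        # log10 frequency grid
w = 10.0 ** logw
Ep = E * (w * tau0) ** 2 / (1 + (w * tau0) ** 2)

# First-order spectrum approximation: H(tau) ~ dE'/d(ln w) at w = 1/tau;
# for a single Maxwell element it should peak near tau0 with height E/2
dE_dlnw = np.gradient(Ep, np.log(w))
tau = 1.0 / w
tau_peak = tau[np.argmax(dE_dlnw)]
```

For the true continuous spectrum the paper uses the exact inverse integral transform of the sigmoid; the derivative rule above is the simplest member of that family of approximations.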
NASA Astrophysics Data System (ADS)
Lei, Mingfeng; Lin, Dayong; Liu, Jianwen; Shi, Chenghua; Ma, Jianjun; Yang, Weichao; Yu, Xiaoniu
2018-03-01
For the purpose of investigating lining concrete durability, this study derives a modified chloride diffusion model for concrete based on the odd continuation of boundary conditions and Fourier transform. In order to achieve this, the linear stress distribution on a sectional structure is considered, detailed procedures and methods are presented for model verification and parametric analysis. Simulation results show that the chloride diffusion model can reflect the effects of linear stress distribution of the sectional structure on the chloride diffusivity with reliable accuracy. Along with the natural environmental characteristics of practical engineering structures, reference value ranges of model parameters are provided. Furthermore, a chloride diffusion model is extended for the consideration of multi-factor coupling of linear stress distribution, chloride concentration and diffusion time. Comparison between model simulation and typical current research results shows that the presented model can produce better considerations with a greater universality.
Reply to Steele & Ferrer: Modeling Oscillation, Approximately or Exactly?
ERIC Educational Resources Information Center
Oud, Johan H. L.; Folmer, Henk
2011-01-01
This article addresses modeling oscillation in continuous time. It criticizes Steele and Ferrer's article "Latent Differential Equation Modeling of Self-Regulatory and Coregulatory Affective Processes" (2011), particularly the approximate estimation procedure applied. This procedure is the latent version of the local linear approximation procedure…
Boutonnet, Audrey; Morin, Arnaud; Petit, Pierre; Vicendo, Patricia; Poinsot, Véréna; Couderc, François
2016-03-17
Pulsed lasers are widely used in capillary electrophoresis (CE) studies to provide laser induced fluorescence (LIF) detection. Unfortunately, pulsed lasers do not give linear calibration curves over a wide range of concentrations. While this does not prevent their use in CE/LIF studies, the non-linear behavior must be understood. Using 7-hydroxycoumarin (7-HC) (10-5000 nM), Tamra (10-5000 nM) and tryptophan (1-200 μM) as dyes, we observe that continuous lasers and LEDs result in linear calibration curves, while pulsed lasers give polynomial ones. The effect is seen with both visible light (530 nm) and with UV light (355 nm, 266 nm). In this work we point out the formation of byproducts induced by pulsed laser upon irradiation of 7-HC. Their separation by CE using two Zeta LIF detectors clearly shows that this process is related to the first laser detection. All of these photodegradation products can be identified by an ESI-/MS investigation and correspond to at least two 7-HC dimers. By using the photodegradation model proposed by Heywood and Farnsworth (2010) and by taking into account the 7-HC results and the fact that in our system we do not have a constant concentration of fluorophore, it is possible to propose a new photochemical model of fluorescence in LIF detection. The model, like the experiment, shows that it is difficult to obtain linear quantitation curves with pulsed lasers while UV-LEDs used in continuous mode have this advantage. They are a good alternative to UV pulsed lasers. An application involving the separation and linear quantification of oligosaccharides labeled with 2-aminobenzoic acid is presented using HILIC and LED (365 nm) induced fluorescence. Copyright © 2016 Elsevier B.V. All rights reserved.
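The practical consequence, linear calibration for continuous sources versus polynomial calibration for pulsed ones, can be sketched with a fit comparison. The response curve below is a hypothetical saturating signal (a quadratic loss term mimicking photodegradation at high photon flux), not the authors' measured data or their photochemical model.

```python
import numpy as np

rng = np.random.default_rng(4)
conc = np.linspace(10, 5000, 25)          # nM, the range used for 7-HC and Tamra
# Hypothetical pulsed-laser response: photodegradation bends the curve downward
signal = 2.0 * conc - 1.5e-4 * conc ** 2 + rng.normal(0, 20, conc.size)

lin = np.polyfit(conc, signal, 1)         # linear calibration fit
quad = np.polyfit(conc, signal, 2)        # polynomial (degree-2) calibration fit
sse_lin = np.sum((signal - np.polyval(lin, conc)) ** 2)
sse_quad = np.sum((signal - np.polyval(quad, conc)) ** 2)
```

A much smaller residual for the quadratic fit is the numerical signature of the non-linear calibration behavior the abstract attributes to pulsed excitation.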
ORACLS- OPTIMAL REGULATOR ALGORITHMS FOR THE CONTROL OF LINEAR SYSTEMS (CDC VERSION)
NASA Technical Reports Server (NTRS)
Armstrong, E. S.
1994-01-01
This control theory design package, called Optimal Regulator Algorithms for the Control of Linear Systems (ORACLS), was developed to aid in the design of controllers and optimal filters for systems which can be modeled by linear, time-invariant differential and difference equations. Optimal linear quadratic regulator theory, currently referred to as the Linear-Quadratic-Gaussian (LQG) problem, has become the most widely accepted method of determining optimal control policy. Within this theory, the infinite duration time-invariant problems, which lead to constant gain feedback control laws and constant Kalman-Bucy filter gains for reconstruction of the system state, exhibit high tractability and potential ease of implementation. A variety of new and efficient methods in the field of numerical linear algebra have been combined into the ORACLS program, which provides for the solution to time-invariant continuous or discrete LQG problems. The ORACLS package is particularly attractive to the control system designer because it provides a rigorous tool for dealing with multi-input and multi-output dynamic systems in both continuous and discrete form. The ORACLS programming system is a collection of subroutines which can be used to formulate, manipulate, and solve various LQG design problems. The ORACLS program is constructed in a manner which permits the user to maintain considerable flexibility at each operational state. This flexibility is accomplished by providing primary operations, analysis of linear time-invariant systems, and control synthesis based on LQG methodology. The input-output routines handle the reading and writing of numerical matrices, printing heading information, and accumulating output information. The basic vector-matrix operations include addition, subtraction, multiplication, equation, norm construction, tracing, transposition, scaling, juxtaposition, and construction of null and identity matrices. 
The analysis routines provide for the following computations: the eigenvalues and eigenvectors of real matrices; the relative stability of a given matrix; matrix factorization; the solution of linear constant coefficient vector-matrix algebraic equations; the controllability properties of a linear time-invariant system; the steady-state covariance matrix of an open-loop stable system forced by white noise; and the transient response of continuous linear time-invariant systems. The control law design routines of ORACLS implement some of the more common techniques of time-invariant LQG methodology. For the finite-duration optimal linear regulator problem with noise-free measurements, continuous dynamics, and integral performance index, a routine is provided which implements the negative exponential method for finding both the transient and steady-state solutions to the matrix Riccati equation. For the discrete version of this problem, the method of backwards differencing is applied to find the solutions to the discrete Riccati equation. A routine is also included to solve the steady-state Riccati equation by the Newton algorithms described by Klein, for continuous problems, and by Hewer, for discrete problems. Another routine calculates the prefilter gain to eliminate control state cross-product terms in the quadratic performance index and the weighting matrices for the sampled data optimal linear regulator problem. For cases with measurement noise, duality theory and optimal regulator algorithms are used to calculate solutions to the continuous and discrete Kalman-Bucy filter problems. Finally, routines are included to implement the continuous and discrete forms of the explicit (model-in-the-system) and implicit (model-in-the-performance-index) model following theory. These routines generate linear control laws which cause the output of a dynamic time-invariant system to track the output of a prescribed model. 
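The steady-state Riccati solution at the heart of the LQG routines described above can be sketched in a few lines. This is a plain fixed-point (value) iteration in place of ORACLS's negative-exponential and Newton methods, with an illustrative plant.

```python
import numpy as np

def dare_iterate(A, B, Q, R, iters=500):
    """Fixed-point iteration for the discrete algebraic Riccati equation
    P = Q + A'PA - A'PB (R + B'PB)^{-1} B'PA (a simple stand-in for the
    Newton-type solvers ORACLS provides)."""
    P = Q.copy()
    for _ in range(iters):
        gain_term = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ gain_term
    return P

A = np.array([[1.1, 0.1], [0.0, 0.9]])     # slightly unstable plant (illustrative)
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P = dare_iterate(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # constant-gain feedback u = -Kx
rho = max(abs(np.linalg.eigvals(A - B @ K)))        # closed-loop spectral radius
```

As the ORACLS description notes, the same machinery solves the Kalman-Bucy filtering problem by duality (transpose the plant and swap the weighting matrices).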
In order to apply ORACLS, the user must write an executive (driver) program which inputs the problem coefficients, formulates and selects the routines to be used to solve the problem, and specifies the desired output. There are three versions of ORACLS source code available for implementation: CDC, IBM, and DEC. The CDC version has been implemented on a CDC 6000 series computer with a central memory of approximately 13K (octal) of 60 bit words. The CDC version is written in FORTRAN IV, was developed in 1978, and last updated in 1989. The IBM version has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The IBM version is written in FORTRAN IV and was generated in 1981. The DEC version has been implemented on a VAX series computer operating under VMS. The VAX version is written in FORTRAN 77 and was generated in 1986.
ORACLS- OPTIMAL REGULATOR ALGORITHMS FOR THE CONTROL OF LINEAR SYSTEMS (DEC VAX VERSION)
NASA Technical Reports Server (NTRS)
Frisch, H.
1994-01-01
This control theory design package, called Optimal Regulator Algorithms for the Control of Linear Systems (ORACLS), was developed to aid in the design of controllers and optimal filters for systems which can be modeled by linear, time-invariant differential and difference equations. Optimal linear quadratic regulator theory, currently referred to as the Linear-Quadratic-Gaussian (LQG) problem, has become the most widely accepted method of determining optimal control policy. Within this theory, the infinite duration time-invariant problems, which lead to constant gain feedback control laws and constant Kalman-Bucy filter gains for reconstruction of the system state, exhibit high tractability and potential ease of implementation. A variety of new and efficient methods in the field of numerical linear algebra have been combined into the ORACLS program, which provides for the solution to time-invariant continuous or discrete LQG problems. The ORACLS package is particularly attractive to the control system designer because it provides a rigorous tool for dealing with multi-input and multi-output dynamic systems in both continuous and discrete form. The ORACLS programming system is a collection of subroutines which can be used to formulate, manipulate, and solve various LQG design problems. The ORACLS program is constructed in a manner which permits the user to maintain considerable flexibility at each operational state. This flexibility is accomplished by providing primary operations, analysis of linear time-invariant systems, and control synthesis based on LQG methodology. The input-output routines handle the reading and writing of numerical matrices, printing heading information, and accumulating output information. The basic vector-matrix operations include addition, subtraction, multiplication, equation, norm construction, tracing, transposition, scaling, juxtaposition, and construction of null and identity matrices. 
The analysis routines provide for the following computations: the eigenvalues and eigenvectors of real matrices; the relative stability of a given matrix; matrix factorization; the solution of linear constant coefficient vector-matrix algebraic equations; the controllability properties of a linear time-invariant system; the steady-state covariance matrix of an open-loop stable system forced by white noise; and the transient response of continuous linear time-invariant systems. The control law design routines of ORACLS implement some of the more common techniques of time-invariant LQG methodology. For the finite-duration optimal linear regulator problem with noise-free measurements, continuous dynamics, and integral performance index, a routine is provided which implements the negative exponential method for finding both the transient and steady-state solutions to the matrix Riccati equation. For the discrete version of this problem, the method of backwards differencing is applied to find the solutions to the discrete Riccati equation. A routine is also included to solve the steady-state Riccati equation by the Newton algorithms described by Klein, for continuous problems, and by Hewer, for discrete problems. Another routine calculates the prefilter gain to eliminate control state cross-product terms in the quadratic performance index and the weighting matrices for the sampled data optimal linear regulator problem. For cases with measurement noise, duality theory and optimal regulator algorithms are used to calculate solutions to the continuous and discrete Kalman-Bucy filter problems. Finally, routines are included to implement the continuous and discrete forms of the explicit (model-in-the-system) and implicit (model-in-the-performance-index) model following theory. These routines generate linear control laws which cause the output of a dynamic time-invariant system to track the output of a prescribed model. 
In order to apply ORACLS, the user must write an executive (driver) program which inputs the problem coefficients, formulates and selects the routines to be used to solve the problem, and specifies the desired output. There are three versions of ORACLS source code available for implementation: CDC, IBM, and DEC. The CDC version has been implemented on a CDC 6000 series computer with a central memory of approximately 13K (octal) of 60 bit words. The CDC version is written in FORTRAN IV, was developed in 1978, and last updated in 1986. The IBM version has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The IBM version is written in FORTRAN IV and was generated in 1981. The DEC version has been implemented on a VAX series computer operating under VMS. The VAX version is written in FORTRAN 77 and was generated in 1986.
Functional linear models to test for differences in prairie wetland hydraulic gradients
Greenwood, Mark C.; Sojda, Richard S.; Preston, Todd M.; Swayne, David A.; Yang, Wanhong; Voinov, A.A.; Rizzoli, A.; Filatova, T.
2010-01-01
Functional data analysis provides a framework for analyzing multiple time series measured frequently in time, treating each series as a continuous function of time. Functional linear models are used to test for effects on hydraulic gradient functional responses collected from three types of land use in Northeastern Montana at fourteen locations. Penalized regression-splines are used to estimate the underlying continuous functions based on the discretely recorded (over time) gradient measurements. Permutation methods are used to assess the statistical significance of effects. A method for accommodating missing observations in each time series is described. Hydraulic gradients may be an initial and fundamental ecosystem process that responds to climate change. We suggest other potential uses of these methods for detecting evidence of climate change.
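The permutation step used to assess significance can be sketched as a generic two-sample permutation test, with a scalar summary standing in for the functional test statistic; all names here are hypothetical:

```python
import random

def perm_test(x, y, n_perm=2000, seed=1):
    """Two-sample permutation test on |mean(x) - mean(y)|:
    repeatedly reshuffle group labels and count how often the
    permuted statistic is at least as extreme as the observed one."""
    rng = random.Random(seed)
    obs = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        xs, ys = pooled[:len(x)], pooled[len(x):]
        if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one permutation p-value
```

Well-separated groups yield a small p-value; identical groups yield a p-value near 1.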
Canards in a minimal piecewise-linear square-wave burster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desroches, M.; Krupa, M.; Fernández-García, S., E-mail: soledad@us.es
We construct a piecewise-linear (PWL) approximation of the Hindmarsh-Rose (HR) neuron model that is minimal, in the sense that the vector field has the least number of linearity zones, in order to reproduce all the dynamics present in the original HR model with classical parameter values. This includes square-wave bursting and also special trajectories called canards, which possess long repelling segments and organise the transitions between stable bursting patterns with n and n + 1 spikes, also referred to as spike-adding canard explosions. We propose a first approximation of the smooth HR model, using a continuous PWL system, and show that its fast subsystem cannot possess a homoclinic bifurcation, which is necessary to obtain proper square-wave bursting. We then relax the assumption of continuity of the vector field across all zones, and we show that we can obtain a homoclinic bifurcation in the fast subsystem. We use the recently developed canard theory for PWL systems in order to reproduce the spike-adding canard explosion feature of the HR model as studied, e.g., in Desroches et al., Chaos 23(4), 046106 (2013).
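A continuous PWL function with three linearity zones, the kind of minimal building block such constructions rely on, can be sketched as follows; the breakpoints and slopes here are purely illustrative, not the paper's HR caricature:

```python
def pwl3(x, x1=-1.0, x2=1.0, s_outer=3.0, s_inner=-1.0):
    """Continuous piecewise-linear map with three linearity zones:
    steep outer branches glued continuously to a shallow inner branch.
    Breakpoints (x1, x2) and slopes are illustrative assumptions."""
    if x < x1:
        return s_outer * (x - x1) + s_inner * x1
    if x > x2:
        return s_outer * (x - x2) + s_inner * x2
    return s_inner * x
```

Because each outer branch is anchored at the inner branch's value at the breakpoint, the vector-field component is continuous across zones; dropping that anchoring is what "relaxing continuity" means in the abstract above.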
Sobhani Tehrani, Ehsan; Jalaleddini, Kian; Kearney, Robert E
2013-01-01
This paper describes a novel model structure and identification method for the time-varying, intrinsic stiffness of the human ankle joint during imposed walking (IW) movements. The model structure is based on the superposition of a large-signal, linear, time-invariant (LTI) model and a small-signal linear-parameter-varying (LPV) model. The methodology is based on a two-step algorithm; the LTI model is first estimated using data from an unperturbed IW trial. Then, the LPV model is identified using data from a perturbed IW trial with the output predictions of the LTI model removed from the measured torque. Experimental results demonstrate that the method accurately tracks the continuous-time variation of normal ankle intrinsic stiffness when the joint position changes during the IW movement. Intrinsic stiffness gain decreases from full plantarflexion to near the mid-point of plantarflexion and then increases substantially as the ankle is dorsiflexed.
Managerial and environmental factors in the continuity of mental health care across institutions.
Greenberg, Greg A; Rosenheck, Robert A
2003-04-01
The authors examined the association of continuity of care with factors assumed to be under the control of health care administrators and environmental factors not under managerial control. The authors used a facility-level administrative data set for 139 Department of Veterans Affairs medical centers over a six-year period and supplemental data on environmental factors to conduct two types of analysis. First, simple correlations were used to examine bivariate associations between eight continuity-of-care measures and nine measures of the institutional environment and the social context. Second, to control for potential autocorrelation, multivariate hierarchical linear models with all nine independent measures were created. The strongest predictors of continuity of care were per capita outpatient expenditure and the degree of emphasis on outpatient care as measured by the percentage of all mental health expenditures devoted to outpatient care. The former was significantly associated with greater continuity of care on six of eight measures and the latter on seven of eight measures. The environmental factor of social capital (the degree of civic involvement and trust at the state level) was associated with greater continuity of care on five measures. The degree to which non-VA mental health services were funded in a state was unexpectedly found to be positively associated with greater continuity of care. In multivariate analysis using hierarchical linear modeling, significant relationships with continuity of care remained for per capita outpatient expenditures, overall outpatient emphasis, and social capital, but not for non-VA mental health funding. A linear term representing the year was positively and significantly associated with six of the eight examined continuity-of-care measures, indicating improvement in continuity of care for the period under study, although the explanation for this trend over time is unclear. 
Several factors potentially under managerial control are associated with increased mental health continuity of care.
Quantile Regression in the Study of Developmental Sciences
Petscher, Yaacov; Logan, Jessica A. R.
2014-01-01
Linear regression analysis is one of the most common techniques applied in developmental research, but only allows for an estimate of the average relations between the predictor(s) and the outcome. This study describes quantile regression, which provides estimates of the relations between the predictor(s) and outcome, but across multiple points of the outcome’s distribution. Using data from the High School and Beyond and U.S. Sustained Effects Study databases, quantile regression is demonstrated and contrasted with linear regression when considering models with: (a) one continuous predictor, (b) one dichotomous predictor, (c) a continuous and a dichotomous predictor, and (d) a longitudinal application. Results from each example exhibited the differential inferences which may be drawn using linear or quantile regression. PMID:24329596
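The contrast with linear regression rests on the check (pinball) loss: minimizing it over a constant recovers a sample quantile rather than the mean. A minimal sketch with hypothetical function names:

```python
def pinball_loss(c, y, tau):
    """Check (pinball) loss of the constant predictor c at quantile tau:
    residuals above c are weighted tau, residuals below weighted 1 - tau."""
    return sum((tau if e >= 0 else tau - 1.0) * e
               for e in (yi - c for yi in y))

def best_constant(y, tau):
    """A minimizer of the check loss is a sample tau-quantile, so it
    suffices to search over the observed values."""
    return min(y, key=lambda c: pinball_loss(c, y, tau))
```

For tau = 0.5 the minimizer is a sample median; for tau = 0.9 it moves into the upper tail of the outcome's distribution, which is exactly what lets quantile regression describe relations across multiple points of that distribution.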
Health effects models for nuclear power plant accident consequence analysis: Low LET radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, J.S.
1990-01-01
This report describes dose-response models intended to be used in estimating the radiological health effects of nuclear power plant accidents. Models of early and continuing effects, cancers and thyroid nodules, and genetic effects are provided. Weibull dose-response functions are recommended for evaluating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary, and gastrointestinal syndromes -- are considered. In addition, models are included for assessing the risks of several nonlethal early and continuing effects -- including prodromal vomiting and diarrhea, hypothyroidism and radiation thyroiditis, skin burns, reproductive effects, and pregnancy losses. Linear and linear-quadratic models are recommended for estimating cancer risks. Parameters are given for analyzing the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid, and "other." The "other" cancers category is intended to reflect the combined risks of multiple myeloma, lymphoma, and cancers of the bladder, kidney, brain, ovary, uterus and cervix. Models of childhood cancers due to in utero exposure are also developed. For most cancers, both incidence and mortality are addressed. The models of cancer risk are derived largely from information summarized in BEIR III -- with some adjustment to reflect more recent studies. 64 refs., 18 figs., 46 tabs.
Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J.
2014-01-01
Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured the validity of mediation analysis can be severely undermined. In this paper we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. PMID:25220625
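The attenuation that classical, non-differential measurement error induces, and a regression-calibration-style correction, can be illustrated in a plain linear regression (a deliberate simplification of the paper's mediation setting; all names, variances, and the known-reliability assumption are mine):

```python
import random

def fit_slope(x, y):
    """Ordinary least-squares slope of y on x (centered)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

rng = random.Random(0)
n, beta = 20000, 2.0
m = [rng.gauss(0, 1) for _ in range(n)]            # true mediator
w = [mi + rng.gauss(0, 1) for mi in m]             # mis-measured mediator
y = [beta * mi + rng.gauss(0, 0.5) for mi in m]    # outcome

naive = fit_slope(w, y)       # attenuated toward zero: ~ beta * lambda
lam = 1.0 / (1.0 + 1.0)       # reliability: var(m) / (var(m) + var(u))
corrected = naive / lam       # regression-calibration-style correction
```

With equal signal and error variances the reliability is 1/2, so the naive slope converges to half the true coefficient; dividing by the (here assumed known) reliability recovers it.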
Scoring and staging systems using Cox linear regression modeling and recursive partitioning.
Lee, J W; Um, S H; Lee, J B; Mun, J; Cho, H
2006-01-01
Scoring and staging systems are used to determine the order and class of data according to predictors. Systems used for medical data, such as the Child-Turcotte-Pugh scoring and staging systems for ordering and classifying patients with liver disease, are often derived strictly from physicians' experience and intuition. We construct objective and data-based scoring/staging systems using statistical methods. We consider Cox linear regression modeling and recursive partitioning techniques for censored survival data. In particular, to obtain a target number of stages we propose cross-validation and amalgamation algorithms. We also propose an algorithm for constructing scoring and staging systems by integrating local Cox linear regression models into recursive partitioning, so that we can retain the merits of both methods such as superior predictive accuracy, ease of use, and detection of interactions between predictors. The staging system construction algorithms are compared by cross-validation evaluation of real data. The data-based cross-validation comparison shows that Cox linear regression modeling is somewhat better than recursive partitioning when there are only continuous predictors, while recursive partitioning is better when there are significant categorical predictors. The proposed local Cox linear recursive partitioning has better predictive accuracy than Cox linear modeling and simple recursive partitioning. This study indicates that integrating local linear modeling into recursive partitioning can significantly improve prediction accuracy in constructing scoring and staging systems.
NASA Astrophysics Data System (ADS)
Magga, Zoi; Tzovolou, Dimitra N.; Theodoropoulou, Maria A.; Tsakiroglou, Christos D.
2012-03-01
The risk assessment of groundwater pollution by pesticides may be based on pesticide sorption and biodegradation kinetic parameters estimated with inverse modeling of datasets from either batch or continuous flow soil column experiments. In the present work, a chemical non-equilibrium and non-linear 2-site sorption model is incorporated into solute transport models to invert the datasets of batch and soil column experiments, and estimate the kinetic sorption parameters for two pesticides: N-phosphonomethyl glycine (glyphosate) and 2,4-dichlorophenoxy-acetic acid (2,4-D). When the 2-site sorption model is coupled with the 2-region transport model, the soil column datasets enable us to estimate, in addition to the kinetic sorption parameters, the mass-transfer coefficients associated with solute diffusion between mobile and immobile regions. To improve the reliability of the models and the kinetic parameter values, a stepwise strategy is required that combines batch and continuous flow tests with adequate true-to-the-mechanism analytical or numerical models, and decouples the kinetics of the purely reactive steps of sorption from physical mass-transfer processes.
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines with linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95 % CI 0.64 to 0.68; p < 0.001), which we modeled with a first-order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. 
We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed-effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves than on the coefficients. Moreover, use of cubic regression splines provides biologically meaningful growth velocity and acceleration curves despite the increased complexity in coefficient interpretation. Through this stepwise approach, we provide a set of tools to model longitudinal childhood data for non-statisticians using linear mixed-effect models.
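The cubic regression spline design underlying such models can be sketched with a truncated-power basis; the knot placement below is hypothetical:

```python
def cubic_spline_basis(t, knots):
    """Truncated-power cubic regression spline basis evaluated at t:
    [1, t, t^2, t^3, (t - k1)_+^3, ..., (t - kK)_+^3].
    Knot locations are an assumption of this sketch."""
    row = [1.0, t, t * t, t ** 3]
    row += [max(t - k, 0.0) ** 3 for k in knots]
    return row
```

Each truncated cubic term switches on smoothly at its knot, so the fitted curve and its first two derivatives (growth velocity and acceleration) remain continuous, which is why cubic splines yield biologically interpretable velocity curves where linear piecewise splines do not.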
Modeling workplace bullying using catastrophe theory.
Escartin, J; Ceja, L; Navarro, J; Zapf, D
2013-10-01
Workplace bullying is defined as negative behaviors directed at organizational members or their work context that occur regularly and repeatedly over a period of time. Employees' perceptions of psychosocial safety climate, workplace bullying victimization, and workplace bullying perpetration were assessed within a sample of nearly 5,000 workers. Linear and nonlinear approaches were applied in order to model both continuous and sudden changes in workplace bullying. More specifically, the present study examines whether a nonlinear dynamical systems model (i.e., a cusp catastrophe model) is superior to the linear combination of variables for predicting the effect of psychosocial safety climate and workplace bullying victimization on workplace bullying perpetration. According to the AICc and BIC indices, the linear regression model fits the data better than the cusp catastrophe model. The study concludes that some phenomena, especially unhealthy behaviors at work (like workplace bullying), may be better studied using linear approaches as opposed to nonlinear dynamical systems models. This can be explained through the healthy variability hypothesis, which argues that positive organizational behavior is likely to present nonlinear behavior, while a decrease in such variability may indicate the occurrence of negative behaviors at work.
NASA Astrophysics Data System (ADS)
Zielnica, J.; Ziółkowski, A.; Cempel, C.
2003-03-01
Design and theoretical and experimental investigation of vibroisolation pads with non-linear static and dynamic responses is the objective of the paper. The analytical investigations are based on non-linear finite element analysis where the load-deflection response is traced against the shape and material properties of the analysed model of the vibroisolation pad. A new model of vibroisolation pad of antisymmetrical type was designed and analysed by the finite element method based on the second-order theory (large displacements and strains) with the assumption of material non-linearities (Mooney-Rivlin model). The stability loss phenomenon was used in the design of the vibroisolators, and it was proved that it is possible to design a vibroisolator in the form of a continuous pad with the non-linear static and dynamic response typical of vibroisolation applications. Suitable materials for the vibroisolator include rubber, elastomers, and similar materials. The results of the theoretical investigations were examined experimentally. A series of models made of soft rubber were designed for test purposes. The experimental investigations of the vibroisolation models, under static and dynamic loads, confirmed the results of the FEM analysis.
Modeling the Wake as a Continuous Vortex Sheet in a Potential-Flow Solution Using Vortex Panels
1989-12-01
[Abstract garbled in extraction; recoverable fragments are table-of-contents entries ("Continuous Vortex Sheet", "Redistributing the Vorticity Over an Increasing Area", "System of Linear Equations in G-Primes") and pieces of an induced-velocity equation involving a differential length dl along the vortex filament, expressed in the local coordinate frame. The closing text indicates that the vortex-panel scheme which models the wing serves as a pattern for the wake model, with modifications required because the wake is continually growing and distorting.]
Robustness of neuroprosthetic decoding algorithms.
Serruya, Mijail; Hatsopoulos, Nicholas; Fellows, Matthew; Paninski, Liam; Donoghue, John
2003-03-01
We assessed the ability of two algorithms to predict hand kinematics from neural activity as a function of the amount of data used to determine the algorithm parameters. Using chronically implanted intracortical arrays, single- and multineuron discharge was recorded during trained step tracking and slow continuous tracking tasks in macaque monkeys. The effect of increasing the amount of data used to build a neural decoding model on the ability of that model to predict hand kinematics accurately was examined. We evaluated how well a maximum-likelihood model classified discrete reaching directions and how well a linear filter model reconstructed continuous hand positions over time within and across days. For each of these two models we asked two questions: (1) How does classification performance change as the amount of data the model is built upon increases? (2) How does varying the time interval between the data used to build the model and the data used to test the model affect reconstruction? Less than 1 min of data for the discrete task (8 to 13 neurons) and less than 3 min (8 to 18 neurons) for the continuous task were required to build optimal models. Optimal performance was defined by a cost function we derived that reflects both the ability of the model to predict kinematics accurately and the cost of taking more time to build such models. For both the maximum-likelihood classifier and the linear filter model, increasing the duration between the time of building and testing the model within a day did not cause any significant trend of degradation or improvement in performance. Linear filters built on one day and tested on neural data on a subsequent day generated error-measure distributions that were not significantly different from those generated when the linear filters were tested on neural data from the initial day (p<0.05, Kolmogorov-Smirnov test). 
These data show that only a small amount of data from a limited number of cortical neurons appears to be necessary to construct robust models to predict kinematic parameters for the subsequent hours. Motor-control signals derived from neurons in motor cortex can be reliably acquired for use in neural prosthetic devices. Adequate decoding models can be built rapidly from small numbers of cells and maintained with daily calibration sessions.
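The linear filter model evaluated above reduces to least squares from binned spike counts to kinematics. This toy two-neuron sketch on synthetic data (all names, weights, and noise levels are assumptions) recovers known decoding weights via the 2x2 normal equations:

```python
import random

def decode_weights(counts, pos):
    """Least-squares linear filter mapping two neurons' binned spike
    counts to hand position (2x2 normal equations via Cramer's rule)."""
    s11 = sum(c[0] * c[0] for c in counts)
    s12 = sum(c[0] * c[1] for c in counts)
    s22 = sum(c[1] * c[1] for c in counts)
    b1 = sum(c[0] * p for c, p in zip(counts, pos))
    b2 = sum(c[1] * p for c, p in zip(counts, pos))
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det,
            (s11 * b2 - s12 * b1) / det)

rng = random.Random(42)
counts = [(rng.randint(0, 10), rng.randint(0, 10)) for _ in range(500)]
true_w = (0.5, -0.2)                 # assumed "tuning" weights
pos = [true_w[0] * c0 + true_w[1] * c1 + rng.gauss(0, 0.05)
       for c0, c1 in counts]
w1, w2 = decode_weights(counts, pos)
```

With a few hundred samples the weights are recovered closely, consistent with the abstract's observation that only minutes of data suffice to build an adequate linear decoder.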
Analyzing longitudinal data with the linear mixed models procedure in SPSS.
West, Brady T
2009-09-01
Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
Dynamics of attitudes and genetic processes.
Guastello, Stephen J; Guastello, Denise D
2008-01-01
Relatively new discoveries of a genetic component to attitudes have challenged the traditional viewpoint that attitudes are primarily learned ideas and behaviors. Attitudes that are regarded by respondents as "more important" tend to have greater genetic components to them, and tend to be more closely associated with authoritarianism. Nonlinear theories, nonetheless, have also been introduced to study attitude change. The objective of this study was to determine whether change in authoritarian attitudes across two generations would be more aptly described by a linear or a nonlinear model. Participants were 372 college students, their mothers, and their fathers who completed an attitude questionnaire. Results indicated that the nonlinear model (R2 = .09) was slightly better than the linear model (R2 = .08), but the two models offered very different forecasts for future generations of US society. The linear model projected a gradual and continuing bifurcation between authoritarians and non-authoritarians. The nonlinear model projected a stabilization of authoritarian attitudes.
Flatness-based control and Kalman filtering for a continuous-time macroeconomic model
NASA Astrophysics Data System (ADS)
Rigatos, G.; Siano, P.; Ghosh, T.; Busawon, K.; Binns, R.
2017-11-01
The article proposes flatness-based control for a nonlinear macro-economic model of the UK economy. The differential flatness properties of the model are proven. This makes it possible to introduce a transformation (diffeomorphism) of the system's state variables and to express the state-space description of the model in the linear canonical (Brunovsky) form, in which both the feedback control and the state estimation problem can be solved. For the linearized equivalent model of the macroeconomic system, stabilizing feedback control can be achieved using pole placement methods. Moreover, to implement stabilizing feedback control of the system by measuring only a subset of its state vector elements, the Derivative-free nonlinear Kalman Filter is used. This consists of the Kalman Filter recursion applied on the linearized equivalent model of the financial system and of an inverse transformation that is based again on differential flatness theory. The asymptotic stability properties of the control scheme are confirmed.
Aims or Purposes of School Mediation in Spain
ERIC Educational Resources Information Center
Viana-Orta, María-Isabel
2013-01-01
Mediation continues to expand, both geographically and in terms of scope. Depending on its purpose, there are three main consolidated mediation models or schools worldwide: the Traditional-Linear Harvard model, which seeks to find an agreement between the parties; the Circular-Narrative model, which apart from the agreement also emphasizes…
Simpson, G; Fisher, C; Wright, D K
2001-01-01
Continuing earlier studies into the relationship between the residual limb, liner and socket in transtibial amputees, we describe a geometrically accurate non-linear model simulating the donning of a liner and then a socket. The socket is rigid and rectified, and the liner is a polyurethane gel type which is accurately described using non-linear (Mooney-Rivlin) material properties. The soft tissue of the residual limb is modelled as homogeneous, non-linear and hyperelastic, and the bone structure within the residual limb is taken as rigid. The work gives an indication of how the stress induced by the process of donning the rigid socket is redistributed by the liner. Ultimately we hope to understand how the liner design might be modified to reduce discomfort. The ANSYS finite element code, version 5.6, is used.
Gain scheduled linear quadratic control for quadcopter
NASA Astrophysics Data System (ADS)
Okasha, M.; Shah, J.; Fauzi, W.; Hanouf, Z.
2017-12-01
This study exploits the dynamics and control of quadcopters using the Linear Quadratic Regulator (LQR) control approach. The quadcopter’s mathematical model is derived using the Newton-Euler method. It is a highly manoeuvrable, nonlinear, coupled model with six degrees of freedom (DOF), which includes aerodynamics and detailed gyroscopic moments that are often ignored in the literature. The linearized model is obtained and characterized by the heading angle (i.e. yaw angle) of the quadcopter. The adopted control approach utilizes the LQR method to track several reference trajectories, including circle and helix curves with significant variation in the yaw angle. The controller is modified to overcome difficulties related to the continuous changes in the operating points and to eliminate the chattering and discontinuity observed in the control input signal. Numerical non-linear simulations are performed using MATLAB and Simulink to illustrate the accuracy and effectiveness of the proposed controller.
Stress testing hydrologic models using bottom-up climate change assessment
NASA Astrophysics Data System (ADS)
Stephens, C.; Johnson, F.; Marshall, L. A.
2017-12-01
Bottom-up climate change assessment is a promising approach for understanding the vulnerability of a system to potential future changes. The technique has been utilised successfully in risk-based assessments of future flood severity and infrastructure vulnerability. We find that it is also an ideal tool for assessing hydrologic model performance in a changing climate. In this study, we applied bottom-up climate change to compare the performance of two different hydrologic models (an event-based and a continuous model) under increasingly severe climate change scenarios. This allowed us to diagnose likely sources of future prediction error in the two models. The climate change scenarios were based on projections for southern Australia, which indicate drier average conditions with increased extreme rainfall intensities. We found that the key weakness in using the event-based model to simulate drier future scenarios was the model's inability to dynamically account for changing antecedent conditions. This led to increased variability in model performance relative to the continuous model, which automatically accounts for the wetness of a catchment through dynamic simulation of water storages. When considering more intense future rainfall events, representation of antecedent conditions became less important than assumptions around (non)linearity in catchment response. The linear continuous model we applied may underestimate flood risk in a future climate with greater extreme rainfall intensity. In contrast with the recommendations of previous studies, this indicates that continuous simulation is not necessarily the key to robust flood modelling under climate change. By applying bottom-up climate change assessment, we were able to understand systematic changes in relative model performance under changing conditions and deduce likely sources of prediction error in the two models.
NASA Technical Reports Server (NTRS)
Yu, Xiaolong; Lewis, Edwin R.
1989-01-01
It is shown that noise can be an important element in the translation of neuronal generator potentials (summed inputs) to neuronal spike trains (outputs), creating or expanding a range of amplitudes over which the spike rate is proportional to the generator potential amplitude. Noise converts the basically nonlinear operation of a spike initiator into a nearly linear modulation process. This linearization effect of noise is examined in a simple intuitive model of a static threshold and in a more realistic computer simulation of spike initiator based on the Hodgkin-Huxley (HH) model. The results are qualitatively similar; in each case larger noise amplitude results in a larger range of nearly linear modulation. The computer simulation of the HH model with noise shows linear and nonlinear features that were earlier observed in spike data obtained from the VIIIth nerve of the bullfrog. This suggests that these features can be explained in terms of spike initiator properties, and it also suggests that the HH model may be useful for representing basic spike initiator properties in vertebrates.
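The linearization effect in the static-threshold model can be reproduced with a few lines of simulation; the threshold and noise levels below are illustrative assumptions, not values from the study:

```python
import random

def firing_prob(stimulus, threshold, noise_sd, trials=20000, seed=7):
    """Fraction of trials on which stimulus + Gaussian noise crosses a
    fixed spike threshold. With noise_sd = 0 this is a hard step;
    with noise it becomes a smooth, nearly linear ramp near threshold."""
    rng = random.Random(seed)
    crossings = sum(stimulus + rng.gauss(0, noise_sd) > threshold
                    for _ in range(trials))
    return crossings / trials
```

Without noise the input-output map is all-or-none; with noise the crossing probability rises smoothly through 0.5 at threshold, so the mean spike rate tracks the generator potential over a graded range, which is the linearization described above.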
Elenchezhiyan, M; Prakash, J
2015-09-01
In this work, state estimation schemes for non-linear hybrid dynamic systems subjected to stochastic state disturbances and random errors in measurements using interacting multiple-model (IMM) algorithms are formulated. In order to compute both the discrete modes and the continuous state estimates of a hybrid dynamic system, either an IMM extended Kalman filter (IMM-EKF) or an IMM-based derivative-free Kalman filter is proposed in this study. The efficacy of the proposed IMM-based state estimation schemes is demonstrated by conducting Monte-Carlo simulation studies on the two-tank hybrid system and a switched non-isothermal continuous stirred tank reactor system. Extensive simulation studies reveal that the proposed IMM-based state estimation schemes are able to generate fairly accurate continuous state estimates and discrete modes. In the presence and absence of sensor bias, the simulation studies reveal that the proposed IMM unscented Kalman filter (IMM-UKF) based simultaneous state and parameter estimation scheme outperforms the multiple-model UKF (MM-UKF) based scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Theory of the Lattice Boltzmann Equation: Symmetry properties of Discrete Velocity Sets
NASA Technical Reports Server (NTRS)
Rubinstein, Robert; Luo, Li-Shi
2007-01-01
In the lattice Boltzmann equation, continuous particle velocity space is replaced by a finite dimensional discrete set. The number of linearly independent velocity moments in a lattice Boltzmann model cannot exceed the number of discrete velocities. Thus, finite dimensionality introduces linear dependencies among the moments that do not exist in the exact continuous theory. Given a discrete velocity set, it is important to know to exactly what order moments are free of these dependencies. Elementary group theory is applied to the solution of this problem. It is found that by decomposing the velocity set into subsets that transform among themselves under an appropriate symmetry group, it becomes relatively straightforward to assess the behavior of moments in the theory. The construction of some standard two- and three-dimensional models is reviewed from this viewpoint, and procedures for constructing some new higher dimensional models are suggested.
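The dimensionality argument above can be checked directly. The sketch below (assuming the standard D2Q9 velocity set, which the abstract does not name explicitly) evaluates monomial velocity moments on the nine discrete velocities and confirms that their rank cannot exceed the number of velocities, so higher-order moments are necessarily linearly dependent:

```python
import itertools
import numpy as np

# D2Q9: the standard two-dimensional, nine-velocity lattice Boltzmann set.
velocities = [(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)]

# Evaluate monomial moments c_x^m * c_y^n (order m + n <= 4) on the discrete set.
rows = []
for m, n in itertools.product(range(5), repeat=2):
    if m + n <= 4:
        rows.append([cx**m * cy**n for cx, cy in velocities])

M = np.array(rows, dtype=float)
# 15 monomials but only 9 velocities: the rank cannot exceed 9, so linear
# dependencies among the moments are unavoidable in the discrete theory.
print(len(rows), np.linalg.matrix_rank(M))  # 15 moments, rank 9
```

The rank of 9 is exactly the number of discrete velocities, illustrating the finite-dimensionality constraint the abstract describes.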
Berns, G S; Song, A W; Mao, H
1999-07-15
Linear experimental designs have dominated the field of functional neuroimaging, but although successful at mapping regions of relative brain activation, the technique assumes that both cognition and brain activation are linear processes. To test these assumptions, we performed a continuous functional magnetic resonance imaging (fMRI) experiment of finger opposition. Subjects performed a visually paced bimanual finger-tapping task. The frequency of finger tapping was continuously varied between 1 and 5 Hz, without any rest blocks. After continuous acquisition of fMRI images, the task-related brain regions were identified with independent components analysis (ICA). When the time courses of the task-related components were plotted against tapping frequency, nonlinear "dose-response" curves were obtained for most subjects. Nonlinearities appeared in both the static and dynamic sense, with hysteresis being prominent in several subjects. The ICA decomposition also demonstrated the spatial dynamics, with different components active at different times. These results suggest that the brain response to tapping frequency does not scale linearly, and that it is history-dependent even after accounting for the hemodynamic response function. This implies that finger tapping, as measured with fMRI, is a nonstationary process. When analyzed with a conventional general linear model, a strong correlation with tapping frequency was identified, but the spatiotemporal dynamics were not apparent.
Henrard, S; Speybroeck, N; Hermans, C
2015-11-01
Haemophilia is a rare genetic haemorrhagic disease characterized by partial or complete deficiency of coagulation factor VIII (haemophilia A) or IX (haemophilia B). As in any other medical research domain, the field of haemophilia research is increasingly concerned with finding factors associated with binary or continuous outcomes through multivariable models. Traditional models include multiple logistic regression, for binary outcomes, and multiple linear regression, for continuous outcomes. Yet these regression models are at times difficult to implement, especially for non-statisticians, and can be difficult to interpret. The present paper sought to didactically explain how, why, and when to use classification and regression tree (CART) analysis for haemophilia research. The CART method, developed by Breiman in 1984, is non-parametric and non-linear, and works by repeatedly partitioning a sample into subgroups according to a given criterion. Classification trees (CTs) are used to analyse categorical outcomes and regression trees (RTs) to analyse continuous ones. The CART methodology has become increasingly popular in the medical field, yet only a few studies using it specifically in haemophilia have been published to date. Two examples using CART analysis previously published in this field are explained didactically and in detail. There is increasing interest in using CART analysis in the health domain, primarily due to its ease of implementation, use, and interpretation, all of which facilitate medical decision-making. This method should be promoted for analysing continuous or categorical outcomes in haemophilia, when applicable. © 2015 John Wiley & Sons Ltd.
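To make the repeated-partitioning idea concrete, here is a minimal CART-style split search on invented data. The covariates (age, factor VIII level) and the outcome rule are hypothetical illustrations, not clinical values, and the sketch implements only the first binary partition with the Gini criterion:

```python
# Minimal CART-style split search (illustrative sketch of binary partitioning,
# not a clinical tool; data and outcome rule are invented).
def gini(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split(rows, labels):
    # Try every (feature, threshold) pair; keep the split with the lowest
    # weighted Gini impurity of the two child nodes.
    best = None
    for j in range(len(rows[0])):
        for t in sorted({r[j] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[j] <= t]
            right = [y for r, y in zip(rows, labels) if r[j] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
            if best is None or score < best[0]:
                best = (score, j, t)
    return best

# Toy data: (age, factor VIII level %) -> severe-bleeding indicator.
rows = [(30, 2), (60, 3), (55, 12), (62, 10), (25, 20), (40, 30), (65, 25), (20, 8)]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(best_split(rows, labels))  # first partition chosen by the Gini criterion
```

CART grows the full tree by applying the same search recursively to each child subgroup; here the first split is on the factor level, which mirrors how a fitted tree surfaces the dominant covariate.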
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem, where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program, defined on a space of measures including the occupation measures of the controlled process, and to provide sufficient conditions to ensure the existence of an optimal control.
NASA Astrophysics Data System (ADS)
Vásquez Lavín, F. A.; Hernandez, J. I.; Ponce, R. D.; Orrego, S. A.
2017-07-01
During recent decades, water demand estimation has gained considerable attention from scholars. From an econometric perspective, the most commonly used functional forms include log-log and linear specifications. Despite the advances in this field and its relevance for policymaking, little attention has been paid to the functional forms used in these estimations, and most authors have not justified their selection. A discrete-continuous choice model of residential water demand is estimated using six functional forms (log-log, full-log, log-quadratic, semilog, linear, and Stone-Geary), and the expected consumption and price elasticity are evaluated. From a policy perspective, our results highlight the relevance of functional form selection for both the expected consumption and the price elasticity.
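As a sketch of why the log-log specification is popular: its slope is directly the price elasticity. The data below are simulated with an assumed elasticity of -0.4 (all numbers are illustrative, not estimates from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical monthly observations: price and simulated residential demand.
price = rng.uniform(0.5, 3.0, 500)
true_elasticity = -0.4
quantity = 20.0 * price**true_elasticity * np.exp(rng.normal(0, 0.05, 500))

# Log-log specification: ln Q = a + e * ln P, so the slope e is the price
# elasticity, constant across all price levels.
X = np.column_stack([np.ones(500), np.log(price)])
coef, *_ = np.linalg.lstsq(X, np.log(quantity), rcond=None)
print("estimated elasticity:", coef[1])
```

A linear specification, by contrast, implies an elasticity that varies with the price level, which is one reason functional form choice matters for the policy conclusions the abstract highlights.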
A primary shift rotation nurse scheduling using zero-one linear goal programming.
Huarng, F
1999-01-01
In this study, the author discusses the effect of nurse shift schedules on circadian rhythm and some important ergonomics criteria. The author also reviews and compares different nurse shift scheduling methods via the criteria of flexibility, fairness, continuity in shift assignments, nurses' preferences, and ergonomics principles. In this article, a primary shift rotation system is proposed to provide better continuity in shift assignments to satisfy nurses' preferences. The primary shift rotation system is modeled as a zero-one linear goal programming (LGP) problem. To generate the shift assignment for a unit with 13 nurses, the zero-one LGP model takes less than 3 minutes on average, whereas the head nurses spend approximately 2 to 3 hours on shift scheduling. This study reports the process of implementing the primary shift rotation system.
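A toy zero-one goal program can illustrate the idea on a much smaller scale than the paper's 13-nurse model; the shifts, goals, and preference weights below are invented, and brute-force enumeration stands in for an LGP solver:

```python
import itertools

# Toy zero-one goal program (illustrative; not the paper's full LGP model):
# three nurses each take one shift; goals: >= 2 on day, >= 1 on night;
# a preference penalty applies when a nurse gets an undesired shift.
prefers_day = {"A": True, "B": True, "C": False}
best = None
for shifts in itertools.product(["day", "night"], repeat=3):
    day = shifts.count("day")
    night = shifts.count("night")
    # Goal deviations (underachievement only) are weighted more heavily
    # than preference violations, as in a preemptive goal program.
    dev = max(0, 2 - day) + max(0, 1 - night)
    pref = sum(1 for nurse, s in zip("ABC", shifts)
               if (s == "day") != prefers_day[nurse])
    cost = 10 * dev + pref
    if best is None or cost < best[0]:
        best = (cost, shifts)
print(best)  # coverage goals met and all preferences satisfied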
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a given time constraint. Several simulation methods are currently available: logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, and then propose an improved importance sampling algorithm for general hybrid models, called linear Gaussian importance sampling (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function with additive Gaussian noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
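The core of linear Gaussian importance sampling style estimation can be sketched on a two-node toy network (the model and numbers below are assumptions for illustration, not the paper's LGIS algorithm): draw from a Gaussian proposal and reweight by prior times likelihood over proposal.

```python
import numpy as np

# Toy hybrid network: X ~ N(0, 1), Y | X ~ N(2X + 1, 0.5^2).
# We observe Y = 3 and estimate the posterior mean E[X | Y] by
# self-normalized importance sampling with a Gaussian importance function.
rng = np.random.default_rng(2)
n = 200_000
y_obs = 3.0

def log_norm(x, mu, sigma):
    # Log-density of N(mu, sigma^2) up to an additive constant.
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

# Importance (proposal) distribution: a Gaussian roughly covering the posterior.
x = rng.normal(0.5, 1.0, n)
log_w = (log_norm(x, 0.0, 1.0)              # prior
         + log_norm(y_obs, 2 * x + 1, 0.5)  # likelihood
         - log_norm(x, 0.5, 1.0))           # proposal
w = np.exp(log_w - log_w.max())             # stabilized importance weights
posterior_mean = np.sum(w * x) / np.sum(w)
print(posterior_mean)  # exact conjugate answer is 16/17, about 0.941
```

Adaptive schemes like the one the abstract describes refit the proposal's mean and variance from the weighted samples, which concentrates the importance function on the posterior and reduces weight variance.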
NASA Astrophysics Data System (ADS)
Yu, Yen Ching
An analytical model based on linearized Euler equations (LEE) is developed and used in conjunction with a validating experiment to study combustion instability. The LEE model features mean flow effects, entropy waves, and adaptability to more physically realistic boundary conditions, and is generalized for multiple-domain configurations. The model calculates spatial modes, resonant frequencies, and linear growth rates of the overall system. The predicted resonant frequencies and spatially resolved mode shapes agree with the experimental data from a longitudinally unstable model rocket combustor to within 7%. Different gaseous fuels (methane, ethylene, and hydrogen) were tested under fixed geometry. Tests with hydrogen were stable, whereas ethylene, methane, and JP-8 were increasingly unstable. A novel method for obtaining large amounts of stability data under variable resonance conditions in a single test was demonstrated. The continuously variable resonance combustor (CVRC) incorporates a traversing choked axial oxidizer inlet to vary the overall combustion system resonance. The CVRC experiment successfully demonstrates different levels of instability and transitions between them, and identifies the most stable and most unstable geometric combinations. Pressure oscillation amplitudes ranged from less than 10% of mean pressure to greater than 60%. At low amplitudes, the measured resonant frequency changed with inlet location, but at high amplitudes it matched the frequency of the combustion chamber. As the system transitions from linear to non-linear instability, the higher harmonics of the fundamental resonant mode appear nearly simultaneously. Transient, high-amplitude, broadband noise at lower frequencies (on the order of 200 Hz) is also observed. Conversely, as the system transitions back to a more linear stability regime, the higher harmonics disappear sequentially, led by the highest order.
Good agreement between analytical and experimental results is attained by treating the experiment as quasi-stationary. The stability characteristics from the high-frequency measurements are further analyzed using filtered pressure traces, spectrograms, power spectral density plots, and oscillation decrements. Recommended future work includes: direct measurements, such as chemiluminescence or high-speed imaging, to examine the unsteady combustion processes; three-way comparisons between the acoustic-based, linear Euler-based, and non-linear Euler/RANS models; and use of high-fidelity computation to investigate the forcing terms modeled in the acoustic-based model.
Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Boyle, Richard D.
2014-01-01
Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify the parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of the continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions will result in covariate imbalance. No studies have quantified the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias, or on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size, and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods of randomizing clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured, and included in the statistical model to avoid severe parameter and standard error bias and insufficient power.
Multiplicative Forests for Continuous-Time Processes
Weiss, Jeremy C.; Natarajan, Sriraam; Page, David
2013-01-01
Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability. PMID:25284967
Shuryak, Igor; Loucas, Bradford D.; Cornforth, Michael N.
2017-01-01
Recent technological advances allow precise radiation delivery to tumor targets. As opposed to more conventional radiotherapy—where multiple small fractions are given—in some cases, the preferred course of treatment may involve only a few (or even one) large dose(s) per fraction. Under these conditions, the choice of appropriate radiobiological model complicates the tasks of predicting radiotherapy outcomes and designing new treatment regimens. The most commonly used model for this purpose is the venerable linear-quadratic (LQ) formalism as it applies to cell survival. However, predictions based on the LQ model are frequently at odds with data following very high acute doses. In particular, although the LQ predicts a continuously bending dose–response relationship for the logarithm of cell survival, empirical evidence over the high-dose region suggests that the survival response is instead log-linear with dose. Here, we show that the distribution of lethal chromosomal lesions among individual human cells (lymphocytes and fibroblasts) exposed to gamma rays and X rays is somewhat overdispersed, compared with the Poisson distribution. Further, we show that such overdispersion affects the predicted dose response for cell survival (the fraction of cells with zero lethal lesions). This causes the dose response to approximate log-linear behavior at high doses, even when the mean number of lethal lesions per cell is well fitted by the continuously curving LQ model. Accounting for overdispersion of lethal lesions provides a novel, mechanistically based explanation for the observed shapes of cell survival dose responses that, in principle, may offer a tractable and clinically useful approach for modeling the effects of high doses per fraction. PMID:29312888
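The abstract's key claim can be reproduced numerically with assumed (illustrative) LQ parameters: for the fraction of cells with zero lethal lesions, a Poisson lesion count gives ln S = -m(D), while an overdispersed negative binomial count with the same mean gives ln S = -k ln(1 + m(D)/k), which bends far less over the high-dose region.

```python
import numpy as np

# Sketch of the overdispersion argument (parameters a, b, k are assumed for
# illustration, not fitted values from the paper). Survival = P(zero lethal
# lesions) with an LQ mean m(D) = a*D + b*D^2.
a, b, k = 0.3, 0.05, 5.0          # k = negative binomial dispersion parameter
dose = np.linspace(10, 25, 60)    # high-dose region (Gy)
m = a * dose + b * dose**2

lnS_poisson = -m                       # continuously bending (LQ-like)
lnS_overdispersed = -k * np.log1p(m / k)  # flattens toward log-linear

# Quadratic fits: the |D^2 coefficient| measures how strongly ln S bends.
bend_p = abs(np.polyfit(dose, lnS_poisson, 2)[0])
bend_o = abs(np.polyfit(dose, lnS_overdispersed, 2)[0])
print(bend_p, bend_o)  # overdispersed response bends far less than Poisson
```

Over this dose range the Poisson (LQ) curve retains its full quadratic curvature, while the overdispersed curve's residual curvature is an order of magnitude smaller, so it is well approximated by a straight line in ln S, matching the empirically observed log-linear high-dose behavior.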
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits, these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and unavailable in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation, we compare this method to both the Cox and Weibull proportional hazards models and to a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and to the underlying distribution of the trait. Because it builds on linear regression methodology, the grouped linear regression model is computationally simple and fast, and can be implemented readily in freely available statistical software.
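The censoring problem the grouped method addresses can be sketched in a few lines: with simulated age-at-onset data (assumed effect sizes, not the paper's), naive linear regression on right-censored observations attenuates the estimated QTL effect.

```python
import numpy as np

# Illustration of why ignoring censoring biases standard linear regression
# (simulated age-at-onset trait; all effect sizes are invented).
rng = np.random.default_rng(3)
n = 2000
genotype = rng.integers(0, 2, n)                 # simple biallelic marker
age_at_onset = 50 + 10 * genotype + rng.normal(0, 5, n)
censor_at = 60.0
observed = np.minimum(age_at_onset, censor_at)   # right-censoring at study end

def slope(y):
    # OLS slope of trait on genotype (the estimated QTL effect).
    X = np.column_stack([np.ones(n), genotype])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(slope(age_at_onset), slope(observed))  # censoring attenuates the effect
```

The true effect of 10 is recovered from the uncensored trait, but carriers are censored far more often than non-carriers, pulling the naive estimate toward zero; this is the bias that grouped regression and the proportional hazards models avoid.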
Quantifying and visualizing variations in sets of images using continuous linear optimal transport
NASA Astrophysics Data System (ADS)
Kolouri, Soheil; Rohde, Gustavo K.
2014-03-01
Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes, of variation in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which cannot be done with existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to normal cells.
Constructivist Approach to Teacher Education: An Integrative Model for Reflective Teaching
ERIC Educational Resources Information Center
Vijaya Kumari, S. N.
2014-01-01
The theory of constructivism states that learning is non-linear, recursive, continuous, complex and relational. Despite the difficulty of deducing constructivist pedagogy from constructivist theories, there are models and common elements to consider in planning new programs. Reflective activities are a common feature of all the programs of…
We compared two regression models, which are based on the Weibull and probit functions, for the analysis of pesticide toxicity data from laboratory studies on Illinois crop and native plant species. Both mathematical models are continuous, differentiable, strictly positive, and...
NASA Astrophysics Data System (ADS)
Grobbelaar-Van Dalsen, Marié
2015-08-01
This article is a continuation of our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) on the polynomial stabilization of a linear model for the magnetoelastic interactions in a two-dimensional electrically conducting Mindlin-Timoshenko plate. We introduce nonlinear damping that is effective only in a small portion of the interior of the plate. It turns out that the model is uniformly exponentially stable when the function representing the locally distributed damping behaves linearly near the origin. However, the use of Mindlin-Timoshenko plate theory in the model enforces a restriction on the region occupied by the plate.
Water pollution and income relationships: A seemingly unrelated partially linear analysis
NASA Astrophysics Data System (ADS)
Pandit, Mahesh; Paudel, Krishna P.
2016-10-01
We used a seemingly unrelated partially linear model (SUPLM) to address potential correlation between pollutants (nitrogen, phosphorus, dissolved oxygen, and mercury) in an environmental Kuznets curve study. Simulation studies show that the SUPLM performs well in addressing potential correlation among pollutants. We find that the relationship between income and pollution follows an inverted U-shaped curve for nitrogen and dissolved oxygen and a cubic-shaped curve for mercury. Model specification tests suggest that the SUPLM is better specified than a parametric model for studying the income-pollution relationship. The results suggest a need to continually reassess the policy effectiveness of pollution reduction as income increases.
Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.
2014-01-01
We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that the HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation. PMID:25184157
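A minimal scalar sketch of finding multiple solutions on a single homotopy trajectory (a toy cubic stands in for a PWL circuit, and simple marching stands in for the modified-spheres algorithm): for the Newton homotopy H(x, t) = f(x) - (1 - t) f(x0), the trajectory satisfies t(x) = 1 - f(x)/f(x0), so every crossing of t = 1 along the traced curve is an operating point.

```python
# Toy homotopy continuation sketch (not the paper's algorithm): trace the
# Newton-homotopy trajectory t(x) = 1 - f(x)/f(x0) and collect every
# crossing of t = 1, i.e. every sign change of f, as an operating point.
def f(x):
    return x**3 - 3*x + 1   # assumed nonlinear element with three solutions

def refine(lo, hi, tol=1e-12):
    # Bisection refinement of a bracketed crossing of t = 1 (f(x) = 0).
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

x0 = -3.0
operating_points = []
x = x0
while x < 3.0:
    nxt = x + 0.01          # march along the trajectory parametrized by x
    if f(x) * f(nxt) <= 0:  # trajectory crosses t = 1 here
        operating_points.append(refine(x, nxt))
    x = nxt
print(operating_points)  # three operating points on one trajectory
```

All three solutions lie on the same connected trajectory, which is the property the HCM exploits; for PWL circuits the trajectory segments are themselves straight lines, making the refinement step exact rather than iterative.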
Raabe, Joshua K.; Gardner, Beth; Hightower, Joseph E.
2013-01-01
We developed a spatial capture–recapture model to evaluate survival and activity centres (i.e., mean locations) of tagged individuals detected along a linear array. Our spatially explicit version of the Cormack–Jolly–Seber model, analyzed using a Bayesian framework, correlates movement between periods and can incorporate environmental or other covariates. We demonstrate the model using 2010 data for anadromous American shad (Alosa sapidissima) tagged with passive integrated transponders (PIT) at a weir near the mouth of a North Carolina river and passively monitored with an upstream array of PIT antennas. The river channel constrained migrations, resulting in linear, one-dimensional encounter histories that included both weir captures and antenna detections. Individual activity centres in a given time period were a function of the individual’s previous estimated location and the river conditions (i.e., gage height). Model results indicate high within-river spawning mortality (mean weekly survival = 0.80) and more extensive movements during elevated river conditions. This model is applicable for any linear array (e.g., rivers, shorelines, and corridors), opening new opportunities to study demographic parameters, movement or migration, and habitat use.
Casellas, J; Bach, R
2012-06-01
Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.
Stochastic Dynamic Mixed-Integer Programming (SD-MIP)
2015-05-05
stochastic linear programming (SLP) problems. By using a combination of ideas from cutting plane theory of deterministic MIP (especially disjunctive...developed to date. b) As part of this project, we have also developed tools for very large scale Stochastic Linear Programming (SLP). There are...several reasons for this. First, SLP models continue to challenge many of the fastest computers to date, and many applications within the DoD (e.g
Formulation of the linear model from the nonlinear simulation for the F18 HARV
NASA Technical Reports Server (NTRS)
Hall, Charles E., Jr.
1991-01-01
The F-18 HARV is a modified F-18 aircraft capable of flying in the post-stall regime in order to achieve superagility. The onset of aerodynamic stall, and flight continuing into the post-stall region, is characterized by nonlinearities in the aerodynamic coefficients. These aerodynamic coefficients are not expressed as analytic functions, but rather in the form of tabular data. The nonlinearities in the aerodynamic coefficients yield a nonlinear model of the aircraft's dynamics. Nonlinear system theory has made many advances, but it is not sufficiently developed to be applied to this problem, since many of its theorems are existence theorems and assume systems composed of analytic functions. Thus, the feedback matrices and the state estimators are obtained using linear system theory techniques. In order to obtain the correct feedback matrices and state estimators, it is important that the linear description of the nonlinear flight dynamics be as accurate as possible. A nonlinear simulation is run under the Advanced Continuous Simulation Language (ACSL). The ACSL simulation uses FORTRAN subroutines to interface with the look-up tables for the aerodynamic data. ACSL has commands to form the linear representation of the system. Other aspects of this investigation are also discussed.
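The linearization step such a tool performs can be sketched numerically: extract the state matrix of a nonlinear simulation by finite-difference perturbation about a trim point (a toy pendulum stands in for the F-18 HARV dynamics, whose tabular aerodynamics would be handled the same way):

```python
import numpy as np

# Numerical linearization of a nonlinear simulation about a trim point.
# Toy pendulum dynamics stand in for the aircraft model; the state is
# x = [theta, omega] and u is a control torque.
def f(x, u):
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) + u])

def jacobian(fun, x, u, eps=1e-6):
    # Central finite differences, one state column at a time.
    A = np.zeros((2, 2))
    for j in range(2):
        dx = np.zeros(2)
        dx[j] = eps
        A[:, j] = (fun(x + dx, u) - fun(x - dx, u)) / (2 * eps)
    return A

x_trim, u_trim = np.array([0.0, 0.0]), 0.0
A = jacobian(f, x_trim, u_trim)
print(A)  # analytic linearization at the origin is [[0, 1], [-9.81, 0]]
```

Because the perturbations only require evaluating the nonlinear model, the same procedure works when the right-hand side comes from look-up tables rather than analytic functions, which is exactly the situation with tabular aerodynamic coefficients.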
On the kinetics of anaerobic power
2012-01-01
Background This study investigated two different mathematical models for the kinetics of anaerobic power. Model 1 assumes that the work power is linear with the work rate, while Model 2 assumes a linear relationship between the alactic anaerobic power and the rate of change of the aerobic power. In order to test these models, a cross-country skier ran with poles on a treadmill at different exercise intensities. The aerobic power, based on the measured oxygen uptake, was used as input to the models, and the simulated blood lactate concentration was compared with experimental results. Thereafter, the metabolic rate from phosphocreatine breakdown was calculated theoretically. Finally, the models were used to compare phosphocreatine breakdown during continuous and interval exercises. Results Good agreement was found between experimental and simulated blood lactate concentrations during steady-state exercise intensities. The measured blood lactate concentrations were lower than simulated for intensities above the lactate threshold, but higher than simulated during recovery after high-intensity exercise when the simulated lactate concentration was averaged over the whole lactate space. This fit was improved when the simulated lactate concentration was separated into two compartments: muscles + internal organs, and blood. Model 2 gave a better behavior of alactic energy than Model 1 when compared against invasive measurements presented in the literature. During continuous exercise, Model 2 showed that the alactic energy storage decreased with time, whereas Model 1 showed a minimum value when steady-state aerobic conditions were achieved. During interval exercise the two models showed similar patterns of alactic energy. Conclusions The current study provides useful insight into the kinetics of anaerobic power.
Overall, our data indicate that blood lactate levels can be accurately modeled during steady state and suggest a linear relationship between the alactic anaerobic power and the rate of change of the aerobic power. PMID:22830586
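Model 2's core assumption, that alactic anaerobic power tracks the rate of change of the aerobic power, can be sketched numerically. The constants `k` and `e0` below are illustrative placeholders, not fitted values from the study:

```python
import numpy as np

def alactic_energy_model2(t, aerobic_power, k=25.0, e0=20.0):
    """Sketch of Model 2: alactic anaerobic power is taken to be
    proportional to the rate of change of the aerobic power.
    t             : time points [s]
    aerobic_power : measured aerobic power at those times [W]
    k, e0         : illustrative proportionality constant and
                    initial alactic energy store [kJ]
    Returns the simulated alactic energy store over time [kJ].
    """
    dp_dt = np.gradient(aerobic_power, t)   # rate of change of aerobic power
    alactic_power = k * dp_dt               # Model 2's linear assumption
    # the store is depleted by the alactic power actually spent
    energy = e0 - np.cumsum(alactic_power * np.gradient(t)) / 1000.0
    return energy
```

With a constant aerobic power the store stays flat, while a rising aerobic power (exercise onset) drains it, which is the qualitative behavior the abstract reports for continuous exercise.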
Francis, Maureen D; Wieland, Mark L; Drake, Sean; Gwisdalla, Keri Lyn; Julian, Katherine A; Nabors, Christopher; Pereira, Anne; Rosenblum, Michael; Smith, Amy; Sweet, David; Thomas, Kris; Varney, Andrew; Warm, Eric; Wininger, David; Francis, Mark L
2015-03-01
Many internal medicine (IM) programs have reorganized their resident continuity clinics to improve trainees' ambulatory experience. Downstream effects on continuity of care and other clinical and educational metrics are unclear. This multi-institutional, cross-sectional study included 713 IM residents from 12 programs. Continuity was measured using the usual provider of care method (UPC) and the continuity for physician method (PHY). Three clinic models (traditional, block, and combination) were compared using analysis of covariance. Multivariable linear regression analysis was used to analyze the effect of practice metrics and clinic model on continuity. UPC, reflecting continuity from the patient perspective, differed significantly across models: it was highest in block-model, midrange in combination-model, and lowest in traditional-model programs. PHY, reflecting continuity from the perspective of the resident provider, was significantly lower in the block model than in combination and traditional programs. Panel size, ambulatory workload, utilization, number of clinics attended in the study period, and clinic model together accounted for 62% of the variation in UPC and 26% of the variation in PHY. Clinic model appeared to have a significant effect on continuity measured from both the patient and resident perspectives. Continuity requires a balance between provider availability and demand for services. Optimizing this balance to maximize resident education and the health of the population served will require consideration of relevant local factors and priorities in addition to the clinic model.
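The UPC metric has a simple operational form: for each patient, the fraction of visits made to that patient's most-seen provider. The sketch below uses one common formulation of UPC; the study's exact operational definition (and the PHY companion metric) may differ in detail:

```python
from collections import Counter, defaultdict

def upc(visits):
    """Usual provider of care (UPC), patient perspective: for each
    patient, the fraction of visits made to that patient's most
    frequently seen provider.

    visits : list of (patient_id, provider_id) pairs, one per visit.
    Returns a dict mapping patient_id -> UPC score in (0, 1].
    This is one common formulation, shown for illustration only.
    """
    by_patient = defaultdict(list)
    for patient, provider in visits:
        by_patient[patient].append(provider)
    scores = {}
    for patient, providers in by_patient.items():
        counts = Counter(providers)
        # visits to the single most-seen provider / total visits
        scores[patient] = counts.most_common(1)[0][1] / len(providers)
    return scores
```

Averaging these per-patient scores over a program's panel yields the program-level UPC that the clinic models are compared on.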
An analysis of hypercritical states in elastic and inelastic systems
NASA Astrophysics Data System (ADS)
Kowalczyk, Maciej
The author raises a wide range of problems whose common characteristic is an analysis of hypercritical states in elastic and inelastic systems. The article consists of two basic parts. The first part primarily discusses problems of modelling hypercritical states, while the second analyzes numerical methods (so-called continuation methods) used to solve non-linear problems. The original approaches for modelling hypercritical states found in this article include the combination of plasticity theory and an energy condition for cracking, accounting for the variability and cyclical nature of the forms of fracture of a brittle material under a die, and the combination of plasticity theory and a simplified description of the phenomenon of localization along a discontinuity line. The author presents analytical solutions of three non-linear problems for systems made of elastic/brittle/plastic and elastic/ideally plastic materials. The author proceeds to discuss the analytical basics of continuation methods and analyzes the significance of the parameterization of non-linear problems, provides a method for selecting control parameters based on an analysis of the rank of a rectangular matrix of a uniform system of increment equations, and also provides a new method for selecting an equilibrium path originating from a bifurcation point. The author provides a general outline of continuation methods based on an analysis of the rank of a matrix of a corrective system of equations. The author supplements his theoretical solutions with numerical solutions of non-linear problems for rod systems and problems of the plastic disintegration of a notched rectangular plastic plate.
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first-order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
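The central idea, replacing the nonlinear eigenvalue constraints by their first-order linearization and solving a linear program at each continuation step, can be sketched generically. Everything here (parameter names, the trust-region bound, the damping-margin framing) is an illustrative reconstruction, not the paper's actual formulation:

```python
import numpy as np
from scipy.optimize import linprog

def sequential_lp_step(cost, sens, current_margin, required_margin,
                       step_bound=0.1):
    """One continuation step of a sequential linear optimization:
    first-order sensitivities of the modal damping margins turn the
    nonlinear requirement into linear inequality constraints solvable
    by an LP.  All names and numbers are illustrative.

    cost            : cost per unit change of each design parameter
    sens            : d(margin)/d(parameter), one row per mode
    current_margin  : damping margin of each mode at the current design
    required_margin : margin each mode must reach
    Returns the parameter change dx, or None if the LP is infeasible.
    """
    n = len(cost)
    # linearized constraint: current + sens @ dx >= required
    #   rewritten for linprog as: -sens @ dx <= current - required
    res = linprog(c=cost,
                  A_ub=-sens,
                  b_ub=current_margin - required_margin,
                  bounds=[(-step_bound, step_bound)] * n,  # trust region
                  method="highs")
    return res.x if res.success else None
```

Repeating this step, re-evaluating the true eigenvalues and sensitivities after each move, is the continuation procedure the abstract describes.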
Nicholas A. Povak; Paul F. Hessburg; Todd C. McDonnell; Keith M. Reynolds; Timothy J. Sullivan; R. Brion Salter; Bernard J. Crosby
2014-01-01
Accurate estimates of soil mineral weathering are required for regional critical load (CL) modeling to identify ecosystems at risk of the deleterious effects from acidification. Within a correlative modeling framework, we used modeled catchment-level base cation weathering (BCw) as the response variable to identify key environmental correlates and predict a continuous...
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting, and model-free methods, but few studies have applied them to LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. A background correction simulation showed that the spline interpolation method achieved the largest signal-to-background ratio (SBR) among spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods, and all four methods achieved larger SBR values than before correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquired large SBR values, whereas polynomial fitting and the model-free method obtained low SBR values. All of the background correction methods improved the quantitative results for Cu relative to those acquired before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting, and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
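The spline-interpolation idea can be sketched as: pick baseline anchor points where the spectrum is free of emission lines, interpolate a smooth background through them, and subtract. Anchoring at the minimum of each window is a simple stand-in for the knot-selection step, which the abstract does not fully specify:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_background(wavelength, intensity, n_segments=10):
    """Sketch of a spline-interpolation background estimate for a
    LIBS-like spectrum: anchor a cubic spline at the local minimum of
    each of `n_segments` windows (emission peaks are positive, so window
    minima approximate the smooth background) and interpolate.

    Returns (corrected_spectrum, estimated_background).
    """
    windows = np.array_split(np.arange(len(wavelength)), n_segments)
    knots_x, knots_y = [], []
    for idx in windows:
        j = idx[np.argmin(intensity[idx])]   # window minimum ~ background
        knots_x.append(wavelength[j])
        knots_y.append(intensity[j])
    background = CubicSpline(knots_x, knots_y)(wavelength)
    corrected = intensity - background
    return corrected, background
```

On a synthetic spectrum (smooth baseline plus a narrow emission line) the estimated background tracks the true baseline and the corrected spectrum recovers the line height, which is what the reported SBR gains reflect.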
Kim, J; Nagano, Y; Furumai, H
2012-01-01
Easy-to-measure surrogate parameters for water quality indicators are needed for real-time monitoring as well as for generating data for model calibration and validation. In this study, a novel linear regression model for estimating total nitrogen (TN) from two surrogate parameters is proposed, based on evaluation of pollutant loads flowing into a eutrophic lake. Based on their runoff characteristics during wet weather, turbidity and electric conductivity (EC) were selected as surrogates for particulate nitrogen (PN) and dissolved nitrogen (DN), respectively. Strong linear relationships were established between PN and turbidity and between DN and EC, and the two models were subsequently combined for estimation of TN. This model was evaluated by comparison of estimated and observed TN runoff loads during rainfall events. This analysis showed that turbidity and EC are viable surrogates for PN and DN, respectively, and that the linear regression model for TN concentration was successful in estimating TN runoff loads during rainfall events and also under dry weather conditions.
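The two-surrogate construction amounts to two ordinary least-squares fits added together. A minimal sketch, with purely illustrative data and site-specific coefficients left to the fit:

```python
import numpy as np

def fit_tn_model(turbidity, ec, pn, dn):
    """Sketch of the two-surrogate TN model: fit PN against turbidity
    and DN against electric conductivity (EC) by ordinary least squares,
    then estimate TN as the sum of the two sub-models.  Coefficients are
    site-specific; the calibration data here are illustrative.
    """
    a, b = np.polyfit(turbidity, pn, 1)   # PN ~ turbidity
    c, d = np.polyfit(ec, dn, 1)          # DN ~ EC

    def predict_tn(turb_new, ec_new):
        # TN estimate = estimated PN + estimated DN
        return (a * turb_new + b) + (c * ec_new + d)

    return predict_tn
```

Because turbidity and EC can be logged continuously, the fitted `predict_tn` supports the real-time TN monitoring the abstract motivates.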
Kinjo, Ken; Uchibe, Eiji; Doya, Kenji
2013-01-01
Linearly solvable Markov decision process (LMDP) is a class of optimal control problem in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space, or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit at a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of the two wheels, while the neck joints are controlled by a visual servo controller. We test linear and bilinear dynamics models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
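The discrete-state LMDP machinery is compact enough to sketch. With the exponential transform z = exp(-v), the Bellman equation becomes a linear eigenproblem, solvable by power iteration; the toy chain below is illustrative and follows the average-cost formulation of Todorov's framework rather than the paper's specific tasks:

```python
import numpy as np

def lmdp_solve(P, q, iters=2000):
    """Sketch of a discrete linearly solvable MDP (Todorov, 2009):
    with z = exp(-v), the Bellman equation becomes the eigenproblem
        lambda * z = diag(exp(-q)) @ P @ z
    where P is the passive dynamics and q the state cost.  Power
    iteration finds the principal eigenvector z; the optimal controlled
    transition is then  u(x'|x) proportional to P[x, x'] * z[x'].
    """
    G = np.diag(np.exp(-q)) @ P
    z = np.ones(len(q))
    for _ in range(iters):          # power iteration -> principal eigenvector
        z = G @ z
        z /= np.linalg.norm(z)
    u = P * z[None, :]              # reweight passive dynamics by desirability
    u /= u.sum(axis=1, keepdims=True)   # normalize rows -> optimal policy
    v = -np.log(z / z.max())        # value function, up to an additive constant
    return v, u
```

The linearity is the point: once P and q are known (or learned, as in the robot experiments), no iteration over policies is needed, only a linear-algebra solve.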
On the form of ROCs constructed from confidence ratings.
Malmberg, Kenneth J
2002-03-01
A classical question for memory researchers is whether memories vary in an all-or-nothing, discrete manner (e.g., stored vs. not stored, recalled vs. not recalled), or whether they vary along a continuous dimension (e.g., strength, similarity, or familiarity). For yes-no classification tasks, continuous- and discrete-state models predict nonlinear and linear receiver operating characteristics (ROCs), respectively (D. M. Green & J. A. Swets, 1966; N. A. Macmillan & C. D. Creelman, 1991). Recently, several authors have assumed that these predictions are generalizable to confidence ratings tasks (J. Qin, C. L. Raye, M. K. Johnson, & K. J. Mitchell, 2001; S. D. Slotnick, S. A. Klein, C. S. Dodson, & A. P. Shimamura, 2000, and A. P. Yonelinas, 1999). This assumption is shown to be unwarranted: discrete-state ratings models can predict both linear and nonlinear ROCs. The critical factor determining the form of the discrete-state ROC is the response strategy adopted by the classifier.
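The point that a discrete-state ratings model can generate curvilinear ROCs is easy to demonstrate. The sketch below is a generic double high-threshold ratings model; the rating distributions (`old_map`, `new_map`, `guess_map`) are illustrative free parameters, and it is exactly their choice, the classifier's response strategy, that shapes the ROC:

```python
import numpy as np

def ratings_roc_2ht(d_old, d_new, old_map, new_map, guess_map):
    """Sketch of a discrete-state (double high-threshold) ratings model.
    An old item is 'detected' with probability d_old and then distributed
    over confidence ratings by old_map; a lure is detected as new with
    probability d_new and distributed by new_map; undetected items guess
    according to guess_map.  Ratings run from 'sure old' to 'sure new'.
    Cumulating response proportions from strictest to most lenient
    criterion traces out the ROC.  All distributions are illustrative.
    """
    old_rates = d_old * np.asarray(old_map) + (1 - d_old) * np.asarray(guess_map)
    new_rates = d_new * np.asarray(new_map) + (1 - d_new) * np.asarray(guess_map)
    hits = np.cumsum(old_rates)   # P("old" at or above criterion | old item)
    fas = np.cumsum(new_rates)    # same for lures (false alarms)
    return fas, hits
```

With uniform guessing the ROC is piecewise linear, but mapping detect states over several ratings bends it, so ROC curvature alone cannot rule out discrete states.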
Gollee, Henrik; Gawthrop, Peter J; Lakie, Martin; Loram, Ian D
2017-11-01
A human controlling an external system is described most easily and conventionally as linearly and continuously translating sensory input to motor output, with the inevitable output remnant, non-linearly related to the input, attributed to sensorimotor noise. Recent experiments show sustained manual tracking involves repeated refractoriness (insensitivity to sensory information for a certain duration), with the temporary 200-500 ms periods of irresponsiveness to sensory input making the control process intrinsically non-linear. This evidence calls for re-examination of the extent to which random sensorimotor noise is required to explain the non-linear remnant. This investigation of manual tracking shows how the full motor output (linear component and remnant) can be explained mechanistically by aperiodic sampling triggered by prediction error thresholds. Whereas broadband physiological noise is general to all processes, aperiodic sampling is associated with sensorimotor decision making within specific frontal, striatal and parietal networks; we conclude that manual tracking utilises such slow serial decision making pathways up to several times per second. The human operator is described adequately by linear translation of sensory input to motor output. Motor output also always includes a non-linear remnant resulting from random sensorimotor noise from multiple sources, and non-linear input transformations, for example thresholds or refractory periods. Recent evidence showed that manual tracking incurs substantial, serial, refractoriness (insensitivity to sensory information of 350 and 550 ms for 1st and 2nd order systems respectively). Our two questions are: (i) What are the comparative merits of explaining the non-linear remnant using noise or non-linear transformations? (ii) Can non-linear transformations represent serial motor decision making within the sensorimotor feedback loop intrinsic to tracking? 
Twelve participants (instructed to act in three prescribed ways) manually controlled two systems (1st and 2nd order) subject to a periodic multi-sine disturbance. Joystick power was analysed using three models: continuous-linear-control (CC), continuous-linear-control with calculated noise spectrum (CCN), and intermittent control with aperiodic sampling triggered by prediction error thresholds (IC). Unlike the linear mechanism, the intermittent control mechanism explained the majority of total power (linear and remnant) (77-87% vs. 8-48%, IC vs. CC). Between conditions, IC used thresholds and distributions of open-loop intervals consistent, respectively, with the instructions and with previously measured, model-independent values, whereas CCN required changes in noise spectrum deviating from broadband, signal-dependent noise. We conclude that manual tracking uses open-loop predictive control with aperiodic sampling. Because aperiodic sampling is inherent to serial decision making within previously identified, specific frontal, striatal and parietal networks, we suggest that these structures are intimately involved in visuo-manual tracking. © 2017 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
Durand, Casey P
2013-01-01
Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
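The study's design, simulate data with a known interaction, test it with OLS, and count rejections, can be reproduced in miniature for the continuous-by-continuous case. Effect sizes, sample sizes, and simulation counts below are arbitrary examples, not the 240 scenarios of the study:

```python
import numpy as np
from scipy import stats

def interaction_power(n=100, beta_int=0.3, alpha=0.05, n_sims=500, seed=0):
    """Monte Carlo sketch of power to detect a continuous-by-continuous
    interaction: simulate y = x1 + x2 + beta_int*x1*x2 + noise, test the
    interaction coefficient with an OLS t-test, and report the rejection
    rate at level alpha.  All settings are illustrative.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        x1, x2 = rng.standard_normal((2, n))
        y = x1 + x2 + beta_int * x1 * x2 + rng.standard_normal(n)
        X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - X.shape[1])           # residual variance
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[3, 3])  # SE of interaction term
        t = beta[3] / se
        p = 2 * stats.t.sf(abs(t), df=n - X.shape[1])
        rejections += p < alpha
    return rejections / n_sims
```

Running this with `beta_int=0` recovers the Type 1 error rate (near alpha), and raising alpha in that null case only inflates spurious-interaction inclusions, which is the study's cautionary point.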
NASA Astrophysics Data System (ADS)
Rahmouni, A.; Beidouri, Z.; Benamar, R.
2013-09-01
The purpose of the present paper was the development of a physically discrete model for geometrically nonlinear free transverse constrained vibrations of beams, which may replace, if sufficient degrees of freedom are used, the previously developed continuous nonlinear beam constrained vibration models. The discrete model proposed is an N-degrees-of-freedom (N-dof) system made of N masses placed at the ends of solid bars connected by torsional springs, representing the beam flexural rigidity. The large transverse displacements of the bar ends induce a variation in their lengths, giving rise to axial forces modelled by longitudinal springs. The calculations made allowed application of the semi-analytical model developed previously for nonlinear structural vibration involving three tensors, namely the mass tensor mij, the linear rigidity tensor kij and the nonlinearity tensor bijkl. By application of Hamilton's principle and spectral analysis, the nonlinear vibration problem is reduced to a nonlinear algebraic system, examined for increasing numbers of dof. The results obtained by the physically discrete model showed a good agreement and a quick convergence to the equivalent continuous beam model, for various fixed boundary conditions, for both the linear frequencies and the nonlinear backbone curves, and also for the corresponding mode shapes. The model, validated here for the simply supported and clamped ends, may be used in further works to represent the flexural linear and nonlinear constrained vibrations of beams with various types of discontinuities in the mass or in the elasticity distributions. The main contributions of the work are: (i) the development of an adequate discrete model including the effect of the axial strains induced by large displacement amplitudes, which is predominant in geometrically nonlinear transverse constrained vibrations of beams [1]; (ii) the investigation of the results such a discrete model may lead to in the case of nonlinear free vibrations; (iii) the development of the analogy between the previously developed models of geometrically nonlinear vibrations of Euler-Bernoulli continuous beams and multi-dof system models made of N masses placed at the ends of elastic bars connected by linear spiral springs, representing the beam flexural rigidity; and (iv) the validation of the new model via the analysis of the convergence conditions of the nonlinear frequencies obtained by the N-dof system, when N increases, towards those obtained in previous works using a continuous description of the beam. In addition to the above points, the models developed in the present work may constitute, in our opinion, a good illustration, from the didactic point of view, of the origin of the geometrical nonlinearity induced by large transverse vibration amplitudes of constrained continuous beams, which may appear as a Pythagorean theorem effect. The first step of the work presented here was the formulation of the problem of nonlinear vibrations of the discrete system shown in Fig. 1 in terms of the semi-analytical method, denoted SAA, developed in the early 1990s by Benamar and coauthors [3], and discussed for example in [6,7]. This method has been applied successfully to various types of geometrically nonlinear problems of structural dynamics [1-3,6-8,10-12], and the objective here was to use it to develop a flexible discrete nonlinear model which may be useful for representing, in further works, geometrically nonlinear vibrations of real beams with discontinuities in the mass, the section, or the stiffness distributions. The purpose in the present work was restricted to developing and validating the model, via comparison of the obtained dependence of the resonance frequencies of such a system on the amplitude of vibration with the results obtained previously by continuous beam nonlinear models.
In the SAA method, the dynamic system under consideration is described by the mass matrix [M], the rigidity matrix [K], and the nonlinear rigidity matrix [B], which depends on the amplitude of vibration and involves a fourth-order nonlinearity tensor bijkl. Details are given below, corresponding to the definition of the tensors mentioned above. The analogy between the classical continuous Euler-Bernoulli model of beams and the present discrete model is developed, leading to the expressions for the equivalent spiral and axial stiffness in terms of the continuous beam geometrical and mechanical characteristics. Some numerical results are also given, showing the dependence of the frequencies on the amplitude of vibration, and compared to the backbone curves obtained previously by the continuous nonlinear classical beam theory, presented for example in [3,5,8,15-22]. A convergence study is performed by increasing the number of masses and bars, showing good convergence to the theoretical values for continuous beams.
Artificial equilibrium points in binary asteroid systems with continuous low-thrust
NASA Astrophysics Data System (ADS)
Bu, Shichao; Li, Shuang; Yang, Hongwei
2017-08-01
The positions and dynamical characteristics of artificial equilibrium points (AEPs) in the vicinity of a binary asteroid with continuous low-thrust are studied. The restricted ellipsoid-ellipsoid model is employed for the binary asteroid system, and the positions of the AEPs are obtained from this model. It is found that the set of points L1 or L2 forms the shape of an ellipsoid, while the set of points L3 forms a shape like a "banana". The effect of the continuous low-thrust on the feasible region of motion is analyzed via zero-velocity curves: with low-thrust, otherwise unreachable regions can become reachable. The linearized equations of motion are derived for the stability analysis, and the stability conditions follow from their characteristic equation. The stable regions of the AEPs are investigated by a parametric analysis, and the effect of the mass ratio and ellipsoid parameters on the stable regions is also discussed. The results show that the influence of the mass ratio on the stable regions is more significant than that of the ellipsoid parameters.
NASA Astrophysics Data System (ADS)
Hsieh, Chang-Yu; Cao, Jianshu
2018-01-01
We use the "generalized hierarchical equation of motion" proposed in Paper I [C.-Y. Hsieh and J. Cao, J. Chem. Phys. 148, 014103 (2018)] to study decoherence in a system coupled to a spin bath. The present methodology allows a systematic incorporation of higher-order anharmonic effects of the bath in dynamical calculations. We investigate the leading order corrections to the linear response approximations for spin bath models. Two kinds of spin-based environments are considered: (1) a bath of spins discretized from a continuous spectral density and (2) a bath of localized nuclear or electron spins. The main difference resides in how the bath frequency and the system-bath coupling parameters are distributed in an environment. When discretized from a continuous spectral density, the system-bath coupling typically scales as ~1/√(N_B), where N_B is the number of bath spins. This scaling suppresses the non-Gaussian characteristics of the spin bath and justifies the linear response approximations in the thermodynamic limit. For the nuclear/electron spin bath models, system-bath couplings are directly deduced from spin-spin interactions and do not necessarily obey the 1/√(N_B) scaling. It is not always possible to justify the linear response approximations in this case. Furthermore, if the spin-spin Hamiltonian is highly symmetrical, there exist additional constraints that generate highly non-Markovian and persistent dynamics that is beyond the linear response treatments.
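The suppression of non-Gaussian bath statistics under the 1/√(N_B) scaling is a central-limit effect that a small numerical experiment makes concrete. The sketch below (a classical caricature with random ±1 spins, not a quantum calculation) tracks the excess kurtosis of the collective bath variable, a leading non-Gaussian signature:

```python
import numpy as np

def collective_coupling_kurtosis(n_bath, n_samples=50000, seed=1):
    """Numerical sketch of why the 1/sqrt(N_B) coupling scaling
    suppresses non-Gaussian bath features: the collective variable
        B = sum_k g_k s_k,  with g_k = 1/sqrt(N_B), s_k = +/-1 random,
    keeps O(1) variance while its excess kurtosis shrinks as N_B grows
    (it equals -2/N_B here).  Classical toy model, for illustration only.
    """
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1.0, 1.0], size=(n_samples, n_bath))
    B = spins.sum(axis=1) / np.sqrt(n_bath)   # 1/sqrt(N_B)-scaled sum
    m2 = np.mean(B**2)
    m4 = np.mean(B**4)
    return m4 / m2**2 - 3.0                   # excess kurtosis; 0 for a Gaussian
```

For a few bath spins the kurtosis is strongly negative (very non-Gaussian), while for large N_B it vanishes, mirroring why linear response is justified in the thermodynamic limit but not for small, strongly coupled nuclear/electron spin baths.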
Wang, Yubo; Veluvolu, Kalyana C
2017-06-14
It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyzing the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to the earlier designs, we first identify the sparsity imposed on the signal model in order to reformulate the model as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy-ratio metric is employed to quantify the spectral performance, and the results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT), and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall 6.22% improvement in reconstruction error.
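The BMFLC signal model is a band-limited bank of sine/cosine pairs, and the sparse reformulation estimates its coefficients with an L1 penalty. The sketch below uses plain ISTA as a simple stand-in for the paper's convex solver; band limits, grid spacing, and the penalty weight are illustrative:

```python
import numpy as np

def sparse_bmflc(signal, fs, f_lo, f_hi, df=0.1, lam=0.1, iters=1000):
    """Sketch of the Sparse-BMFLC idea: model the signal with a
    band-limited dictionary of sine/cosine pairs on a frequency grid
    [f_lo, f_hi] with spacing df, and estimate sparse coefficients by
    L1-regularized least squares via ISTA (a simple stand-in for the
    paper's convex optimization).  Returns (freqs, weights, reconstruction).
    """
    t = np.arange(len(signal)) / fs
    freqs = np.arange(f_lo, f_hi + df, df)
    G = np.hstack([np.sin(2 * np.pi * np.outer(t, freqs)),
                   np.cos(2 * np.pi * np.outer(t, freqs))])
    step = 1.0 / np.linalg.norm(G, 2) ** 2      # 1 / Lipschitz constant
    w = np.zeros(G.shape[1])
    for _ in range(iters):
        grad = G.T @ (G @ w - signal)           # least-squares gradient
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft threshold
    return freqs, w, G @ w
```

On a pure in-band sinusoid the recovered weight vector is sparse, with essentially all energy on the matching grid frequency, which is the behavior the energy-ratio metric quantifies.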
Foster, Guy M.; Graham, Jennifer L.
2016-04-06
The Kansas River is a primary source of drinking water for about 800,000 people in northeastern Kansas. Source-water supplies are treated by a combination of chemical and physical processes to remove contaminants before distribution. Advanced notification of changing water-quality conditions and cyanobacteria and associated toxin and taste-and-odor compounds provides drinking-water treatment facilities time to develop and implement adequate treatment strategies. The U.S. Geological Survey (USGS), in cooperation with the Kansas Water Office (funded in part through the Kansas State Water Plan Fund), and the City of Lawrence, the City of Topeka, the City of Olathe, and Johnson County Water One, began a study in July 2012 to develop statistical models at two Kansas River sites located upstream from drinking-water intakes. Continuous water-quality monitors have been operated and discrete water-quality samples have been collected on the Kansas River at Wamego (USGS site number 06887500) and De Soto (USGS site number 06892350) since July 2012. Continuous and discrete water-quality data collected during July 2012 through June 2015 were used to develop statistical models for constituents of interest at the Wamego and De Soto sites. Logistic models to continuously estimate the probability of occurrence above selected thresholds were developed for cyanobacteria, microcystin, and geosmin. Linear regression models to continuously estimate constituent concentrations were developed for major ions, dissolved solids, alkalinity, nutrients (nitrogen and phosphorus species), suspended sediment, indicator bacteria (Escherichia coli, fecal coliform, and enterococci), and actinomycetes bacteria. These models will be used to provide real-time estimates of the probability that cyanobacteria and associated compounds exceed thresholds and of the concentrations of other water-quality constituents in the Kansas River.
The models documented in this report are useful for characterizing changes in water-quality conditions through time, characterizing potentially harmful cyanobacterial events, and indicating changes in water-quality conditions that may affect drinking-water treatment processes.
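The logistic-model idea, regress a binary exceedance indicator on continuously monitored surrogates and emit a real-time probability, can be sketched in a few lines. The fitting loop is plain gradient descent on the log-loss; variable names, learning rate, and iteration count are illustrative, not the report's calibrated models:

```python
import numpy as np

def fit_exceedance_model(X, exceeds, lr=0.1, iters=3000):
    """Sketch of a logistic exceedance model: regress a binary indicator
    (e.g. microcystin above a treatment-relevant threshold) on
    continuously monitored surrogate predictors, then emit exceedance
    probabilities for new sensor readings.  Predictors should be roughly
    standardized.  All settings are illustrative.

    X       : (n, p) array of continuous predictors
    exceeds : (n,) array of 0/1 threshold-exceedance indicators
    Returns a function mapping new predictor rows to probabilities.
    """
    Xb = np.column_stack([np.ones(len(X)), X])   # add intercept
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))        # current probabilities
        w += lr * Xb.T @ (exceeds - p) / len(X)  # ascent on log-likelihood

    def prob(x_new):
        xb = np.column_stack([np.ones(len(x_new)), x_new])
        return 1.0 / (1.0 + np.exp(-xb @ w))

    return prob
```

Feeding the fitted `prob` with live monitor data is what turns a continuous sensor stream into the advanced-notification probabilities the study targets.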
Abdelnour, A. Farras; Huppert, Theodore
2009-01-01
Near-infrared spectroscopy is a non-invasive neuroimaging method which uses light to measure changes in cerebral blood oxygenation associated with brain activity. In this work, we demonstrate the ability to record and analyze images of brain activity in real time using a 16-channel continuous-wave optical NIRS system. We propose a novel real-time analysis framework using an adaptive Kalman filter and a state-space model based on a canonical general linear model of brain activity. We show that our adaptive model can estimate single-trial brain activity events as we apply this method to track and classify experimental data acquired during an alternating bilateral self-paced finger-tapping task. PMID:19457389
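The adaptive Kalman filter over a GLM state-space model reduces to a standard predict/update cycle on the regression coefficients. The sketch below is a generic reconstruction in that spirit; the random-walk state model, noise levels, and variable names are assumptions, not the paper's tuned filter:

```python
import numpy as np

def kalman_glm_step(beta, P, a_t, y_t, q=1e-4, r=1.0):
    """One update of a Kalman filter over GLM coefficients, in the
    spirit of the paper's real-time framework: the activation amplitudes
    beta follow a random walk (process noise q), and the measurement is
    the canonical-regressor row a_t times beta plus noise of variance r.
    q and r are placeholder values.

    beta : (p,) current coefficient estimate
    P    : (p, p) current estimate covariance
    a_t  : (p,) design-matrix row for this time sample
    y_t  : scalar optical measurement
    """
    # predict: random-walk state, inflate covariance by process noise
    P = P + q * np.eye(len(beta))
    # update with the new sample
    S = a_t @ P @ a_t + r                 # innovation variance
    K = P @ a_t / S                       # Kalman gain
    beta = beta + K * (y_t - a_t @ beta)  # correct the coefficients
    P = P - np.outer(K, a_t @ P)          # covariance update
    return beta, P
```

Iterating this step sample-by-sample is what makes the GLM estimate available continuously, enabling the single-trial, real-time classification described above.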
Meshless analysis of shear deformable shells: the linear model
NASA Astrophysics Data System (ADS)
Costa, Jorge C.; Tiago, Carlos M.; Pimenta, Paulo M.
2013-10-01
This work develops a kinematically linear shell model departing from a consistent nonlinear theory. The initial geometry is mapped from a flat reference configuration by a stress-free finite deformation, after which the actual shell motion takes place. The model maintains the features of a complete stress-resultant theory with Reissner-Mindlin kinematics based on an inextensible director. A hybrid displacement variational formulation is presented, where the domain displacements and kinematic boundary reactions are independently approximated. Resorting to a flat reference configuration allows discretization using 2-D Multiple Fixed Least-Squares (MFLS) on the domain. The consistent definition of stress resultants and the consequent plane-stress assumption lead to a neat formulation for the analysis of shells. The consistent linear approximation, combined with MFLS, makes efficient computations possible with a desired continuity degree, leading to smooth results for the displacement, strain, and stress fields, as shown by several numerical examples.
A linear polarization converter with near unity efficiency in microwave regime
NASA Astrophysics Data System (ADS)
Xu, Peng; Wang, Shen-Yun; Geyi, Wen
2017-04-01
In this paper, we present a linear polarization converter operating in the reflective mode with near-unity conversion efficiency. The converter is designed in an array form on the basis of a pair of orthogonally arranged three-dimensional split-loop resonators sharing a common terminal coaxial port and a continuous metallic ground slab. At resonance, it converts a linearly polarized incident electromagnetic wave into its orthogonal counterpart upon reflection. The conversion mechanism is explained by an equivalent circuit model, and the conversion efficiency can be tuned by changing the impedance of the terminal port. Such a scheme of linear polarization conversion has potential applications in microwave communications, remote sensing, and imaging.
Linear theory for filtering nonlinear multiscale systems with model error
Berry, Tyrus; Harlim, John
2014-01-01
In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. 
In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure, simultaneously produce accurate filtering and equilibrium statistical prediction. In contrast, an offline estimation technique based on linear regression, which fits the parameters to a training dataset without using the filter, yields filter estimates that are worse than the observations or even divergent when the slow variables are not fully observed. This finding does not imply that all offline methods are inherently inferior to the online method for nonlinear estimation problems; it only suggests that an ideal estimation technique should estimate all parameters simultaneously, whether it is online or offline. PMID:25002829
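The best-case setting described above, a linear reduced model for the slow variable with optimally chosen parameters, can be sketched in scalar form: an AR(1) (discretized Ornstein-Uhlenbeck) model filtered with a standard Kalman filter. All parameter values are assumed, and the reduced model is taken as exactly matched to the signal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reduced model for the slow variable: AR(1) stand-in for a discretized OU process.
a, sig, r = 0.95, 0.2, 0.5          # assumed AR coefficient, model noise, obs noise
n = 2000
x = np.zeros(n)                     # true slow variable
y = np.zeros(n)                     # noisy observations
for k in range(1, n):
    x[k] = a * x[k - 1] + sig * rng.standard_normal()
    y[k] = x[k] + r * rng.standard_normal()

# Scalar Kalman filter using the (here, exactly matched) reduced model.
m, P = 0.0, 1.0
est = np.empty(n)
for k in range(n):
    m, P = a * m, a * a * P + sig**2          # predict
    K = P / (P + r**2)                        # gain
    m = m + K * (y[k] - m)                    # update
    P = (1 - K) * P
    est[k] = m

rmse_filter = np.sqrt(np.mean((est[100:] - x[100:]) ** 2))
rmse_obs = np.sqrt(np.mean((y[100:] - x[100:]) ** 2))
```

With a matched model the filtered estimate beats the raw observations; the paper's point is that a mis-specified stochastic parametrization can break exactly this property.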
ERIC Educational Resources Information Center
Hannan, Michael T.
This document is part of a series of chapters described in SO 011 759. Addressing the question of effective models to measure change and the change process, the author suggests that linear structural equation systems may be viewed as steady state outcomes of continuous-change models and have rich sociological grounding. Two interpretations of the…
NASA Astrophysics Data System (ADS)
Mueller, Ulf Philipp; Wienholt, Lukas; Kleinhans, David; Cussmann, Ilka; Bunke, Wolf-Dieter; Pleßmann, Guido; Wendiggensen, Jochen
2018-02-01
There are several power grid modelling approaches suitable for simulations in the field of power grid planning. The restrictive policies of grid operators, regulators and research institutes concerning their original data and models have led to an increased interest in open source approaches to grid models based on open data. By including all voltage levels between 60 kV (high voltage) and 380 kV (extra high voltage), we dissolve the common distinction between transmission and distribution grids in energy system models and utilize a single, integrated model instead. An open data set, primarily for Germany, which can be used for non-linear, linear and linear-optimal power flow methods, was developed. This data set consists of an electrically parameterised grid topology as well as allocated generation and demand characteristics for present and future scenarios at high spatial and temporal resolution. The usability of the grid model was demonstrated by performing exemplary power flow optimizations. Based on a marginal-cost-driven power plant dispatch, subject to grid restrictions, congested power lines were identified. Continuous validation of the model is necessary in order to reliably model storage and grid expansion in ongoing research.
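The linear power flow methods such a data set supports start from the DC power flow approximation. A minimal sketch on an invented 3-bus ring follows; all susceptances and injections are made up for illustration.

```python
import numpy as np

# DC ("linear") power flow on a toy 3-bus ring network.
lines = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]   # (from bus, to bus, susceptance)
n = 3
B = np.zeros((n, n))                               # nodal susceptance matrix
for i, j, b in lines:
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

P = np.array([0.0, 1.0, -1.0])                     # net injections; bus 0 is the slack
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])      # reduced system without the slack row

# Line flows follow linearly from the angle differences.
flows = {(i, j): b * (theta[i] - theta[j]) for i, j, b in lines}
```

Congestion screening then reduces to comparing each entry of `flows` against the line's thermal limit.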
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-04-01
To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval [CI], -0.03 to 0.32 D; p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28 D, p = 0.03). Standard regression for visual field data from both eyes provided biased (generally underestimated) standard errors and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
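The effect of inter-eye correlation on precision can be sketched with simulated paired data: a shared person effect makes the correlation-aware paired standard error of the between-eye difference smaller than the naive two-sample SE that treats the 2n eyes as independent, mirroring the narrower CI reported above. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated two-eye data with a shared person-level effect (made-up variances).
n = 500
person = rng.standard_normal(n)                        # shared between fellow eyes
eye_cnv = person + 0.15 + 0.3 * rng.standard_normal(n)   # "CNV" eyes, true diff 0.15 D
eye_fellow = person + 0.3 * rng.standard_normal(n)       # fellow eyes

diff = eye_cnv - eye_fellow
# Paired (correlation-aware) SE of the mean between-eye difference:
se_paired = diff.std(ddof=1) / np.sqrt(n)
# Naive two-sample SE that wrongly treats all 2n eyes as independent:
se_naive = np.sqrt(eye_cnv.var(ddof=1) / n + eye_fellow.var(ddof=1) / n)
```

The shared person variance cancels in the within-person difference, so `se_paired < se_naive`; a mixed effects or marginal model achieves the same gain for general covariates.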
Li, Liang; Wang, Yiying; Xu, Jiting; Flora, Joseph R V; Hoque, Shamia; Berge, Nicole D
2018-08-01
Hydrothermal carbonization (HTC) is a wet, low temperature thermal conversion process that continues to gain attention for the generation of hydrochar. The importance of specific process conditions and feedstock properties on hydrochar characteristics is not well understood. To evaluate this, linear and non-linear models were developed to describe hydrochar characteristics based on data collected from HTC-related literature. A Sobol analysis was subsequently conducted to identify parameters that most influence hydrochar characteristics. Results from this analysis indicate that for each investigated hydrochar property, the model fit and predictive capability associated with the random forest models is superior to both the linear and regression tree models. Based on results from the Sobol analysis, the feedstock properties and process conditions most influential on hydrochar yield, carbon content, and energy content were identified. In addition, a variational process parameter sensitivity analysis was conducted to determine how feedstock property importance changes with process conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
Modeling Pan Evaporation for Kuwait by Multiple Linear Regression
Almedeij, Jaber
2012-01-01
Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements with substantially continuous coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. A multiple linear regression technique is used with a variable-selection procedure for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data, by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in reasonable agreement with observed values. PMID:23226984
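The fitting strategy, multiple linear regression on power- and exponential-transformed predictors, can be sketched with synthetic data. The transforms, coefficients, and variable ranges below are invented for illustration and are not Kuwait's fitted values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily meteorology (assumed ranges, not measured data).
n = 365
T = rng.uniform(15, 45, n)           # temperature, deg C
RH = rng.uniform(5, 80, n)           # relative humidity, %
U = rng.uniform(0, 10, n)            # wind speed, m/s

# Synthetic evaporation: linear in a power of T and an exponential of RH.
E = 0.02 * T**1.5 + 3.0 * np.exp(-0.02 * RH) + 0.3 * U + 0.2 * rng.standard_normal(n)

# Multiple linear regression on the transformed predictors.
X = np.column_stack([np.ones(n), T**1.5, np.exp(-0.02 * RH), U])
coef, *_ = np.linalg.lstsq(X, E, rcond=None)
```

Because the model is linear in the transformed variables, ordinary least squares recovers the coefficients even though the raw relationships are curvilinear.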
An electron beam linear scanning mode for industrial limited-angle nano-computed tomography.
Wang, Chengxiang; Zeng, Li; Yu, Wei; Zhang, Lingli; Guo, Yumeng; Gong, Changcheng
2018-01-01
Nano-computed tomography (nano-CT), which utilizes X-rays to investigate the inner structure of small objects and has been widely used in biomedical research, electronic technology, geology, material sciences, etc., is a high-spatial-resolution, non-destructive research technique. A traditional nano-CT scanning model requires very high mechanical precision and stability of the object manipulator for high resolution imaging, which is difficult to achieve when the scanned object is continuously rotated. To reduce the scanning time and attain stable, high resolution imaging in industrial non-destructive testing, we study an electron beam linear scanning mode for a nano-CT system that avoids the mechanical vibration and object movement caused by continuous rotation of the object. Furthermore, to further reduce the scanning time and study how small the scanning range can be while maintaining acceptable spatial resolution, an alternating iterative algorithm based on ℓ0 minimization is applied to the limited-angle nano-CT reconstruction problem with the electron beam linear scanning mode. The experimental results confirm the feasibility of the electron beam linear scanning mode of the nano-CT system.
An electron beam linear scanning mode for industrial limited-angle nano-computed tomography
NASA Astrophysics Data System (ADS)
Wang, Chengxiang; Zeng, Li; Yu, Wei; Zhang, Lingli; Guo, Yumeng; Gong, Changcheng
2018-01-01
Nano-computed tomography (nano-CT), which utilizes X-rays to investigate the inner structure of small objects and has been widely used in biomedical research, electronic technology, geology, material sciences, etc., is a high-spatial-resolution, non-destructive research technique. A traditional nano-CT scanning model requires very high mechanical precision and stability of the object manipulator for high resolution imaging, which is difficult to achieve when the scanned object is continuously rotated. To reduce the scanning time and attain stable, high resolution imaging in industrial non-destructive testing, we study an electron beam linear scanning mode for a nano-CT system that avoids the mechanical vibration and object movement caused by continuous rotation of the object. Furthermore, to further reduce the scanning time and study how small the scanning range can be while maintaining acceptable spatial resolution, an alternating iterative algorithm based on ℓ0 minimization is applied to the limited-angle nano-CT reconstruction problem with the electron beam linear scanning mode. The experimental results confirm the feasibility of the electron beam linear scanning mode of the nano-CT system.
Solitary Waves of a $\mathcal{PT}$-Symmetric Nonlinear Dirac Equation
Cuevas-Maraver, Jesus; Kevrekidis, Panayotis G.; Saxena, Avadh; ...
2015-10-06
In our study we consider a prototypical example of a $\mathcal{PT}$-symmetric Dirac model. We discuss the underlying linear limit of the model and identify the threshold of the $\mathcal{PT}$-phase transition in analytical form. We then focus on the examination of the nonlinear model. We consider the continuation, in the $\mathcal{PT}$-symmetric model, of the solutions of the corresponding Hamiltonian model and find that the solutions can be continued robustly as stable ones all the way up to the $\mathcal{PT}$-transition threshold, at which they degenerate into linear waves. We also examine the dynamics of the model. Given the stability of the waveforms in the $\mathcal{PT}$-exact phase, we consider them as initial conditions for parameters outside of that phase. We find that both oscillatory dynamics and exponential growth may arise, depending on the size of the corresponding "quench". The former can be characterized by an interesting form of bifrequency solutions that have been predicted on the basis of the SU symmetry. Finally, we explore some special, analytically tractable, but not $\mathcal{PT}$-symmetric solutions in the massless limit of the model.
International Geomagnetic Reference Field: the third generation.
Peddie, N.W.
1982-01-01
In August 1981 the International Association of Geomagnetism and Aeronomy revised the International Geomagnetic Reference Field (IGRF). It is the second revision since the inception of the IGRF in 1968. The revision extends the earlier series of IGRF models from 1980 to 1985, introduces a new series of definitive models for 1965-1976, and defines a provisional reference field for 1975-1980. The revision consists of: 1) a model of the main geomagnetic field at 1980.0, not continuous with the earlier series of IGRF models, together with a forecast model of the secular variation of the main field during 1980-1985; 2) definitive models of the main field at 1965.0, 1970.0, and 1975.0, with linear interpolation of the model coefficients specified for intervening dates; and 3) a provisional reference field for 1975-1980, defined as the linear interpolation of the 1975 and 1980 main-field models. -from Author
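The interpolation rule in item 2 can be written directly: a coefficient at an intermediate date is the linear blend of the bracketing epoch models. The coefficient values below are invented for illustration.

```python
def interp_coeff(g_start, g_end, t_start, t_end, t):
    """Linearly interpolate a Gauss coefficient between two epoch models."""
    w = (t - t_start) / (t_end - t_start)      # fractional position in the interval
    return (1.0 - w) * g_start + w * g_end

# e.g. a coefficient of 100 nT at epoch 1975.0 and 90 nT at 1980.0,
# evaluated at the interval midpoint:
g_mid = interp_coeff(100.0, 90.0, 1975.0, 1980.0, 1977.5)   # -> 95.0
```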
NASA Astrophysics Data System (ADS)
Bildirici, Melike; Sonustun, Fulya Ozaksoy; Sonustun, Bahri
2018-01-01
In the context of chaos theory, concepts such as complexity, determinism, quantum mechanics, relativity, multiple equilibria, instability, nonlinearity, heterogeneous agents, and irregularity have been widely questioned in economics. Linear models are insufficient for analyzing the unpredictable, irregular, and noncyclical oscillations of economies, and for predicting bubbles, financial crises, and business cycles in financial markets. Economists have therefore attached great importance to appropriate tools for modelling non-linear dynamical structures and chaotic behaviors, especially in macroeconomics and financial economics. In this paper, we aim to model the chaotic structure of exchange rates (USD-TL and EUR-TL). To detect non-linear patterns in the selected time series, daily returns of the exchange rates were tested with the BDS test over the period from January 01, 2002 to May 11, 2017, which covers the era after the 2001 financial crisis. Having established the non-linear structure of the series, we examined their chaotic character over the selected period using Lyapunov exponents. The findings verify the existence of a chaotic structure in the exchange rate returns over the analyzed period.
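The Lyapunov-exponent criterion (positive largest exponent indicates chaos) can be illustrated on the logistic map, a standard textbook stand-in; the exchange-rate series themselves are not reproduced here.

```python
import numpy as np

def lyapunov_logistic(r, n=100_000, x0=0.4):
    """Largest Lyapunov exponent of x -> r*x*(1-x), averaged along the orbit."""
    x, lam = x0, 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        # Accumulate log|f'(x)|; the floor guards against log(0) at x = 0.5.
        lam += np.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
    return lam / n
```

In the chaotic regime (r = 4) the exponent is positive, approaching ln 2; in a periodic regime (e.g. r = 3.2) it is negative.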
Continuous-time discrete-space models for animal movement
Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.
2015-01-01
The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework which maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.
A guidance and navigation system for continuous low-thrust vehicles. M.S. Thesis
NASA Technical Reports Server (NTRS)
Jack-Chingtse, C.
1973-01-01
A midcourse guidance and navigation system for continuous low-thrust vehicles was developed. The equinoctial elements are the state variables. Uncertainties are modelled statistically by random vectors and stochastic processes. The motion of the vehicle and the measurements are described by nonlinear stochastic differential and difference equations, respectively. A minimum-time trajectory is defined; the equations of motion and measurements are linearized about this trajectory. An exponential cost criterion is constructed and a linear feedback guidance law is derived. An extended Kalman filter is used for state estimation. A short mission using this system is simulated. The results indicate that this system is efficient for short missions, but longer missions require accurate trajectory and ground-based measurements.
A continuum theory for multicomponent chromatography modeling.
Pfister, David; Morbidelli, Massimo; Nicoud, Roger-Marc
2016-05-13
A continuum theory is proposed for modeling multicomponent chromatographic systems under linear conditions. The model is based on the description of complex mixtures, possibly involving tens or hundreds of solutes, by a continuum. The present approach is shown to be very efficient when dealing with a large number of similar components that present close elution behaviors and whose individual analytical characterization is impossible. Moreover, approximating complex mixtures by continuous distributions of solutes reduces the required number of model parameters to the few specific to the characterization of the selected continuous distributions. Therefore, within the framework of the continuum theory, the simulation of large multicomponent systems is simplified and the computational effectiveness of the chromatographic model is dramatically improved. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Made Tirta, I.; Anggraeni, Dian
2018-04-01
Statistical models have developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measure, or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated, so statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models, and various mixed-effect models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open source software R, but they can only be accessed through the command line interface (using scripts). On the other hand, most practical researchers rely heavily on menu-based or Graphical User Interfaces (GUIs). Using the Shiny framework, we develop a standard pull-down-menu Web GUI that unifies most models for correlated responses. The Web GUI accommodates almost all needed features. It enables users to fit and compare various models for repeated-measure data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web GUI and illustrates their use. In general, we find that GEE, GLMM, and HGLM gave very close results.
The morphing of geographical features by Fourier transformation.
Li, Jingzhong; Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model for vector geographical data based on the Fourier transformation. The model involves three main steps: conversion from vector data to a Fourier series, generation of an intermediate function by combining the two Fourier series corresponding to a large scale and a small scale, and reverse conversion from the combined function to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of the model is linearly related to the number of points on the shape boundary and the truncation value n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable.
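The three-step scheme can be sketched for a closed outline represented as a complex coordinate sequence: forward FFT, blend of the two spectra, inverse FFT. The shapes (a circle and a square) and the low-pass truncation below are illustrative choices, not the paper's data.

```python
import numpy as np

n = 256
t = np.linspace(0, 2 * np.pi, n, endpoint=False)

# Two closed contours as complex sequences x + iy.
circle = np.exp(1j * t)                                        # unit circle
square_r = 1.0 / np.maximum(np.abs(np.cos(t)), np.abs(np.sin(t)))
square = square_r * np.exp(1j * t)                             # unit square outline

# Step 1: convert both shapes to Fourier coefficients.
F_small, F_large = np.fft.fft(circle), np.fft.fft(square)

def morph(alpha, keep=16):
    """Step 2: blend spectra (alpha=0 -> circle, 1 -> square), keeping low
    frequencies only; Step 3: invert back to a coordinate sequence."""
    F = (1 - alpha) * F_small + alpha * F_large
    F[keep:-keep] = 0.0                    # low-pass: smooth intermediate shape
    return np.fft.ifft(F)

mid = morph(0.5)                           # intermediate-scale outline
```

Sweeping `alpha` from 0 to 1 yields the continuous scale transformation between the two representations.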
Integrated Modeling Activities for the James Webb Space Telescope: Optical Jitter Analysis
NASA Technical Reports Server (NTRS)
Hyde, T. Tupper; Ha, Kong Q.; Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.
2004-01-01
This is a continuation of a series of papers on the integrated modeling activities for the James Webb Space Telescope (JWST). Starting with the linear optical model discussed in part one, and using the optical sensitivities developed in part two, we now assess the optical image motion and wavefront errors arising from the structural dynamics. This is often referred to as "jitter" analysis. The optical model is combined with the structural model and the control models to create a linear structural/optical/control model. The largest jitter is due to spacecraft reaction wheel assembly disturbances, which are harmonic in nature and excite spacecraft and telescope structural modes. The structural/optical response causes image quality degradation due to image motion (centroid error) as well as dynamic wavefront error. Jitter analysis results are used to predict imaging performance, improve the structural design, and evaluate the operational impact of the disturbance sources.
Polar versus Cartesian velocity models for maneuvering target tracking with IMM
NASA Astrophysics Data System (ADS)
Laneuville, Dann
This paper compares various model sets in different IMM filters for the maneuvering target tracking problem. The aim is to see whether we can improve the tracking performance of what is certainly the most widely used model set in the literature for the maneuvering target tracking problem: a Nearly Constant Velocity model and a Nearly Coordinated Turn model. Our new challenger set consists of a mixed Cartesian position and polar velocity state vector to describe the uniform motion segments, augmented with the turn rate to obtain the second model for the maneuvering segments. This paper also gives a general procedure to discretize, up to second order, any non-linear continuous-time model with linear diffusion. Comparative simulations on an air defence scenario with a 2D radar show that this new approach significantly improves the tracking performance in this case.
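The discretization procedure can be illustrated in its simplest special case, the linear Nearly Constant Velocity model: expand the transition matrix to second order and pair it with the matched process-noise covariance. The sample time and noise intensity below are assumed values.

```python
import numpy as np

# Continuous-time NCV model: xdot = A x + w, with white-noise acceleration.
dt = 0.5                                       # assumed sample time, s
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])                     # position-velocity dynamics
q = 0.1                                        # assumed acceleration noise intensity

# Second-order discretization: F = I + A*dt + (A*dt)^2/2 (exact here, A^2 = 0).
F = np.eye(2) + A * dt + (A @ A) * dt**2 / 2

# Matched discrete process-noise covariance for the NCV model.
Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                  [dt**2 / 2, dt]])
```

For a nonlinear drift the same expansion is applied to the Jacobian along the reference state, which is the general procedure the paper describes.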
Chaotic Motions in the Real Fuzzy Electronic Circuits
2012-12-30
In the research field of secure communications, the original source should be blended with other complex signals. Chaotic signals are one of the good sources to be... Takagi-Sugeno (T-S) fuzzy chaotic systems on electronic circuit... model. The overall fuzzy model of the system is achieved by fuzzy blending of the linear system models. Consider a continuous-time nonlinear dynamic
NASA Astrophysics Data System (ADS)
Li, Chengcheng; Li, Yuefeng; Wang, Guanglin
2017-07-01
The work presented in this paper addresses the tracking problem for uncertain continuous nonlinear systems with external disturbances. The objective is to obtain a model that uses a reference-based output feedback tracking control law. The control scheme is based on neural networks and a linear difference inclusion (LDI) model, and a PDC structure and an H∞ performance criterion are used to attenuate external disturbances. The stability of the whole closed-loop model is investigated using the well-known quadratic Lyapunov function. The key principles of the proposed approach are as follows: neural networks are first used to approximate the nonlinearities, so that the nonlinear system can be represented as a linearised LDI model. An LMI (linear matrix inequality) formulation is then obtained for uncertain and disturbed linear systems. This formulation enables a solution to be obtained through an interior point optimisation method for some nonlinear output tracking control problems. Finally, simulations and comparisons are provided on two practical examples to illustrate the validity and effectiveness of the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hacke, Peter; Spataru, Sergiu; Terwilliger, Kent
2015-06-14
An acceleration model based on the Peck equation was applied to power performance of crystalline silicon cell modules as a function of time and of temperature and humidity, the two main environmental stress factors that promote potential-induced degradation. This model was derived from module power degradation data obtained semi-continuously and statistically by in-situ dark current-voltage measurements in an environmental chamber. The modeling enables prediction of degradation rates and times as functions of temperature and humidity. Power degradation could be modeled linearly as a function of time to the second power; additionally, we found that the coulombs transferred from the active cell circuit to ground during the stress test are approximately linear with time. Therefore, the power loss could be linearized as a function of coulombs squared. With this result, we observed that when the module face was completely grounded with a condensed-phase conductor, leakage current exceeded the anticipated corresponding degradation rate relative to the other tests performed in damp heat.
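A Peck-type acceleration factor between a stress condition and a use condition has the standard humidity-power, Arrhenius-temperature form sketched below. The humidity exponent n and activation energy Ea are generic assumed values, not the parameters fitted in this study.

```python
import numpy as np

K_B = 8.617e-5                                  # Boltzmann constant, eV/K

def peck_af(rh_stress, rh_use, t_stress, t_use, n=2.0, ea=0.7):
    """Peck acceleration factor:
    AF = (RH_s / RH_u)^n * exp(Ea/k * (1/T_u - 1/T_s)), temperatures in kelvin.
    n and ea are assumed illustrative values."""
    return (rh_stress / rh_use) ** n * np.exp(ea / K_B * (1.0 / t_use - 1.0 / t_stress))

# Example: 85 C / 85% RH chamber stress vs. a 25 C / 50% RH use condition.
af = peck_af(85.0, 50.0, 358.15, 298.15)
```

The fitted degradation-versus-time model is then compressed or stretched by this factor when mapping chamber hours to field exposure.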
Design Methodology of a Dual-Halbach Array Linear Actuator with Thermal-Electromagnetic Coupling
Eckert, Paulo Roberto; Flores Filho, Aly Ferreira; Perondi, Eduardo; Ferri, Jeferson; Goltz, Evandro
2016-01-01
This paper proposes a design methodology for linear actuators, considering thermal and electromagnetic coupling with geometrical and temperature constraints, that maximizes force density and minimizes force ripple. The method allows defining an actuator for given specifications in a step-by-step way so that requirements are met and the temperature within the device is maintained under or equal to its maximum allowed for continuous operation. According to the proposed method, the electromagnetic and thermal models are built with quasi-static parametric finite element models. The methodology was successfully applied to the design of a linear cylindrical actuator with a dual quasi-Halbach array of permanent magnets and a moving-coil. The actuator can produce an axial force of 120 N and a stroke of 80 mm. The paper also presents a comparative analysis between results obtained considering only an electromagnetic model and the thermal-electromagnetic coupled model. This comparison shows that the final designs for both cases differ significantly, especially regarding its active volume and its electrical and magnetic loading. Although in this paper the methodology was employed to design a specific actuator, its structure can be used to design a wide range of linear devices if the parametric models are adjusted for each particular actuator. PMID:26978370
Design Methodology of a Dual-Halbach Array Linear Actuator with Thermal-Electromagnetic Coupling.
Eckert, Paulo Roberto; Flores Filho, Aly Ferreira; Perondi, Eduardo; Ferri, Jeferson; Goltz, Evandro
2016-03-11
This paper proposes a design methodology for linear actuators, considering thermal and electromagnetic coupling with geometrical and temperature constraints, that maximizes force density and minimizes force ripple. The method allows defining an actuator for given specifications in a step-by-step way so that requirements are met and the temperature within the device is maintained under or equal to its maximum allowed for continuous operation. According to the proposed method, the electromagnetic and thermal models are built with quasi-static parametric finite element models. The methodology was successfully applied to the design of a linear cylindrical actuator with a dual quasi-Halbach array of permanent magnets and a moving-coil. The actuator can produce an axial force of 120 N and a stroke of 80 mm. The paper also presents a comparative analysis between results obtained considering only an electromagnetic model and the thermal-electromagnetic coupled model. This comparison shows that the final designs for both cases differ significantly, especially regarding its active volume and its electrical and magnetic loading. Although in this paper the methodology was employed to design a specific actuator, its structure can be used to design a wide range of linear devices if the parametric models are adjusted for each particular actuator.
A necessary condition for dispersal driven growth of populations with discrete patch dynamics.
Guiver, Chris; Packman, David; Townley, Stuart
2017-07-07
We revisit the question of when dispersal-induced coupling between discrete sink populations can cause overall population growth. Such a phenomenon is called dispersal driven growth and provides a simple explanation of how dispersal can allow populations to persist across discrete, spatially heterogeneous environments even when individual patches are adverse or unfavourable. For two classes of mathematical models, one linear and one non-linear, we provide necessary conditions for dispersal driven growth in terms of the non-existence of a common linear Lyapunov function, which we describe. Our approach draws heavily upon the underlying positive dynamical systems structure. Our results apply to both discrete- and continuous-time models. The theory is illustrated with examples, and both biological and mathematical conclusions are drawn. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
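A toy numerical check of the phenomenon in the discrete-time linear setting: two stage-structured patches, each a sink in isolation (spectral radius below 1), become a growing system under full mixing. The patch matrices are invented for illustration and are not the paper's examples.

```python
import numpy as np

# Two stage-structured sink patches with complementary life histories.
A1 = np.array([[0.0, 2.0],
               [0.1, 0.0]])          # patch 1: high fecundity, low juvenile survival
A2 = np.array([[0.0, 0.1],
               [2.0, 0.0]])          # patch 2: the reverse

def rho(M):
    """Spectral radius (dominant growth rate of the linear model)."""
    return np.max(np.abs(np.linalg.eigvals(M)))

# Full mixing: after local demography, individuals split equally between patches.
d = 0.5
D = np.block([[(1 - d) * np.eye(2), d * np.eye(2)],
              [d * np.eye(2), (1 - d) * np.eye(2)]])
local = np.block([[A1, np.zeros((2, 2))],
                  [np.zeros((2, 2)), A2]])
M = D @ local                        # coupled patch-dispersal model
```

Each patch alone decays (spectral radius sqrt(0.2) ≈ 0.447), yet the coupled system grows (spectral radius 1.05), which is exactly the dispersal driven growth the necessary condition above concerns.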
Non-linear controls influence functions in an aircraft dynamics simulator
NASA Technical Reports Server (NTRS)
Guerreiro, Nelson M.; Hubbard, James E., Jr.; Motter, Mark A.
2006-01-01
In the development and testing of novel structural and controls concepts, such as morphing aircraft wings, appropriate models are needed for proper system characterization. In most instances, available system models do not provide the required additional degrees of freedom for morphing structures but may be modified to some extent to achieve a compatible system. The objective of this study is to apply wind tunnel data collected for an Unmanned Air Vehicle (UAV) that implements trailing edge morphing to create a non-linear dynamics simulator, using well defined rigid body equations of motion, in which the aircraft stability derivatives change with control deflection. An analysis of this wind tunnel data, using data extraction algorithms, was performed to determine the reference aerodynamic force and moment coefficients for the aircraft. Further, non-linear influence functions were obtained for each of the aircraft's control surfaces, including the sixteen trailing edge flap segments. These non-linear controls influence functions are applied to the aircraft dynamics to produce deflection-dependent aircraft stability derivatives in a non-linear dynamics simulator. Time domain analysis of the aircraft motion, trajectory, and state histories can be performed using these nonlinear dynamics and may be visualized using a 3-dimensional aircraft model. Linear system models can be extracted to facilitate frequency domain analysis of the system and for control law development. The results of this study are useful in similar projects where trailing edge morphing is employed and will be instrumental in the University of Maryland's continuing study of active wing load control.
NASA Astrophysics Data System (ADS)
Zharinov, V. V.
2013-02-01
We propose a formal construction generalizing the classic de Rham complex to a wide class of models in mathematical physics and analysis. The presentation is divided into a sequence of definitions and elementary, easily verified statements; proofs are therefore given only in the key case. Linear operations are everywhere performed over a fixed number field F = R or C. All linear spaces, algebras, and modules, although not stipulated explicitly, are by definition or by construction endowed with natural locally convex topologies, and their morphisms are continuous.
Røislien, Jo; Lossius, Hans Morten; Kristiansen, Thomas
2015-01-01
Background Trauma is a leading global cause of death. Trauma mortality rates are higher in rural areas, constituting a challenge for quality and equality in trauma care. The aim of the study was to explore population density and transport time to hospital care as possible predictors of geographical differences in mortality rates, and to what extent choice of statistical method might affect the analytical results and accompanying clinical conclusions. Methods Using data from the Norwegian Cause of Death registry, deaths from external causes 1998–2007 were analysed. Norway consists of 434 municipalities, and municipality population density and travel time to hospital care were entered as predictors of municipality mortality rates in univariate and multiple regression models of increasing model complexity. We fitted linear regression models with continuous and categorised predictors, as well as piecewise linear and generalised additive models (GAMs). Models were compared using Akaike's information criterion (AIC). Results Population density was an independent predictor of trauma mortality rates, while the contribution of transport time to hospital care was highly dependent on choice of statistical model. A multiple GAM or piecewise linear model was superior, and similar, in terms of AIC. However, while transport time was statistically significant in multiple models with piecewise linear or categorised predictors, it was not in GAM or standard linear regression. Conclusions Population density is an independent predictor of trauma mortality rates. The added explanatory value of transport time to hospital care is marginal and model-dependent, highlighting the importance of exploring several statistical models when studying complex associations in observational data. PMID:25972600
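The abstract's model comparison can be sketched in miniature. The data, knot location, and parameter counts below are invented for illustration (the study used Norwegian municipality data); the point is only the AIC mechanics of comparing a single straight line against a piecewise-linear fit with one knot.

```python
import math

def ols(xs, ys):
    """Closed-form simple linear regression; returns (intercept, slope, rss)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, rss

def aic(rss, n, k):
    """Gaussian-likelihood AIC up to a constant: n*ln(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

# Synthetic rate-like data with a genuine change in slope at x = 5,
# plus small deterministic "noise".
xs = [i / 2 for i in range(1, 21)]
ys = [2.0 - 0.3 * x if x < 5 else 0.5 for x in xs]
ys = [y + 0.05 * (((i * 37) % 11) - 5) / 5 for i, y in enumerate(ys)]

# Model 1: single straight line (k = 3: intercept, slope, error variance).
_, _, rss1 = ols(xs, ys)

# Model 2: piecewise linear with a known knot at x = 5, fitted here as two
# separate segments for simplicity (k = 5); a spline basis with a shared
# intercept would be the usual formulation.
lo = [(x, y) for x, y in zip(xs, ys) if x < 5]
hi = [(x, y) for x, y in zip(xs, ys) if x >= 5]
rss2 = ols(*zip(*lo))[2] + ols(*zip(*hi))[2]

aic1 = aic(rss1, len(xs), 3)
aic2 = aic(rss2, len(xs), 5)
print(f"AIC linear: {aic1:.1f}  AIC piecewise: {aic2:.1f}")
```

When the underlying association really bends, the piecewise model's lower AIC wins despite its extra parameters, mirroring the study's finding that the piecewise and GAM fits were superior to standard linear regression.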
Using Confidence as Feedback in Multi-Sized Learning Environments
ERIC Educational Resources Information Center
Hench, Thomas L.
2014-01-01
This paper describes the use of existing confidence and performance data to provide feedback by first demonstrating the data's fit to a simple linear model. The paper continues by showing how the model's use as a benchmark provides feedback to allow current or future students to infer either the difficulty or the degree of under or over…
Chen, Xinguang; Lunn, Sonja; Harris, Carole; Li, Xiaoming; Deveaux, Lynette; Marshall, Sharon; Cottrell, Leslie; Stanton, Bonita
2010-10-01
Behavioral research and prevention intervention science efforts have largely been based on hypotheses of linear or rational behavior change. Additional advances in the field may result from the integration of quantum behavior change and catastrophe models. Longitudinal data from a randomized trial for 1241 pre-adolescents 9-12 years old who self-described as virgins were analyzed. Data for 469 virgins in the control group were included for linear and cusp catastrophe models to describe sexual initiation; data for the rest in the intervention group were added for program effect assessment. Self-reported likelihood to have sex was positively associated with actual initiation of sex (OR = 1.72, 95% CI: 1.43-2.06, R² = 0.13). Receipt of a behavioral prevention intervention based on a cognitive model prevented 15.6% (33.0% vs. 48.6%, OR = 0.52, 95% CI: 0.24-1.11) of the participants from initiating sex among only those who reported 'very likely to have sex.' The beta coefficients for the cubic term of the cusp assessing three bifurcating variables (planning to have sex, intrinsic rewards from sex and self-efficacy for abstinence) were 0.0726, 0.1116 and 0.1069 respectively; R² varied from 0.49 to 0.54 (p < 0.001 for all). Although an intervention based on a model of continuous behavior change did produce a modest impact on sexual initiation, quantum change has contributed more than continuous change in describing sexual initiation among young adolescents, suggesting the need for quantum change and chaotic models to advance behavioral prevention of HIV/AIDS.
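The discontinuous "quantum" change captured by a cusp model can be illustrated with a minimal deterministic sketch. The dynamics, parameter values, and variable interpretations below are assumptions for illustration only, not the fitted model from the trial: a state x (e.g. proximity to initiation) evolves under a cusp potential, and slowly sweeping the asymmetry driver a produces a sudden jump that no linear model reproduces.

```python
# Illustrative cusp-catastrophe dynamics dx/dt = a + b*x - x**3 (assumed form).
# a plays the role of an asymmetry variable, b a bifurcation variable.

def equilibrium(a, b, x0, dt=0.02, steps=4000):
    """Relax the state to its (branch-dependent) stable equilibrium."""
    x = x0
    for _ in range(steps):
        x += dt * (a + b * x - x ** 3)
    return x

b = 1.5          # inside the cusp region, two stable branches coexist
states = []
x = -1.2         # start on the lower branch
for i in range(81):
    a = -1.0 + 2.0 * i / 80      # sweep the asymmetry driver slowly upward
    x = equilibrium(a, b, x)     # track the current branch continuously
    states.append(x)

# The largest one-step change marks the catastrophic jump between branches.
jump = max(abs(s2 - s1) for s1, s2 in zip(states, states[1:]))
print(f"largest one-step change in equilibrium: {jump:.2f}")
```

Smoothly varying a produces a mostly smooth equilibrium path, then an abrupt jump of about two units when the lower branch folds away, which is the qualitative signature the cusp model tests for.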
Asymptotic aspect of derivations in Banach algebras.
Roh, Jaiok; Chang, Ick-Soon
2017-01-01
We prove that every approximate linear left derivation on a semisimple Banach algebra is continuous. We also consider linear derivations on Banach algebras, first studying conditions under which a linear derivation on a Banach algebra is continuous. We then examine functional inequalities related to a linear derivation and their stability. Finally, we treat central linear derivations with radical ranges on semiprime Banach algebras and continuous linear generalized left derivations on semisimple Banach algebras.
NASA Technical Reports Server (NTRS)
Childs, D. W.; Moyer, D. S.
1984-01-01
Attention is given to rotor dynamic problems that have been encountered and eliminated in the course of Space Shuttle Main Engine (SSME) development, as well as continuing, subsynchronous problems which are being encountered in the development of a 109-percent power level engine. The basic model for the SSME's High Pressure Oxygen Turbopump (HPOTP) encompasses a structural dynamic model for the rotor and housing, and component models for the liquid and gas seals, turbine clearance excitation forces, and impeller diffuser forces. Linear model results are used to examine the synchronous response and stability characteristics of the HPOTP, with attention to bearing load and stability problems associated with the second critical speed. Differences between linear and nonlinear model results are discussed and explained in terms of simple models. Simulation results indicate that while synchronous bearing loads can be reduced, subsynchronous motion is not eliminated by seal modifications.
Multi-water-bag models of ion temperature gradient instability in cylindrical geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coulette, David; Besse, Nicolas
2013-05-15
Ion temperature gradient instabilities play a major role in the understanding of anomalous transport in core fusion plasmas. In the considered cylindrical geometry, ion dynamics is described using a drift-kinetic multi-water-bag model for the parallel velocity dependency of the ion distribution function. In a first stage, global linear stability analysis is performed. From the obtained normal modes, parametric dependencies of the main spectral characteristics of the instability are then examined. Comparison of the multi-water-bag results with a reference continuous Maxwellian case allows us to evaluate the effects of discrete parallel velocity sampling induced by the multi-water-bag model. Differences between the global model and local models considered in previous works are discussed. Using results from linear, quasilinear, and nonlinear numerical simulations, an analysis of the first stage saturation dynamics of the instability is proposed, where the divergence between the three models is examined.
The Hindmarsh-Rose neuron model: bifurcation analysis and piecewise-linear approximations.
Storace, Marco; Linaro, Daniele; de Lange, Enno
2008-09-01
This paper provides a global picture of the bifurcation scenario of the Hindmarsh-Rose model. A combination between simulations and numerical continuations is used to unfold the complex bifurcation structure. The bifurcation analysis is carried out by varying two bifurcation parameters and evidence is given that the structure that is found is universal and appears for all combinations of bifurcation parameters. The information about the organizing principles and bifurcation diagrams are then used to compare the dynamics of the model with that of a piecewise-linear approximation, customized for circuit implementation. A good match between the dynamical behaviors of the models is found. These results can be used both to design a circuit implementation of the Hindmarsh-Rose model mimicking the diversity of neural response and as guidelines to predict the behavior of the model as well as its circuit implementation as a function of parameters. (c) 2008 American Institute of Physics.
Chen, Rui; Hyrien, Ollivier
2011-01-01
This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356
Optimisation of substrate blends in anaerobic co-digestion using adaptive linear programming.
García-Gen, Santiago; Rodríguez, Jorge; Lema, Juan M
2014-12-01
Anaerobic co-digestion of multiple substrates has the potential to enhance biogas productivity by making use of the complementary characteristics of different substrates. A blending strategy based on a linear programming optimisation method is proposed aiming at maximising COD conversion into methane, but simultaneously maintaining a digestate and biogas quality. The method incorporates experimental and heuristic information to define the objective function and the linear restrictions. The active constraints are continuously adapted (by relaxing the restriction boundaries) such that further optimisations in terms of methane productivity can be achieved. The feasibility of the blends calculated with this methodology was previously tested and accurately predicted with an ADM1-based co-digestion model. This was validated in a continuously operated pilot plant, treating for several months different mixtures of glycerine, gelatine and pig manure at organic loading rates from 1.50 to 4.93 gCOD/Ld and hydraulic retention times between 32 and 40 days at mesophilic conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
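The blending idea can be sketched as follows. The paper formulates and solves a linear programme; as a self-contained stand-in, this sketch performs an exhaustive grid search over blend fractions of the three substrates, maximising an assumed methane yield subject to assumed linear restrictions. All numbers (yields, nitrogen proxies, constraint bounds) are invented for illustration, not the study's data.

```python
from itertools import product

# Assumed per-substrate characteristics (illustrative only).
methane  = {"glycerine": 0.42, "gelatine": 0.35, "manure": 0.20}  # CH4 per gCOD
nitrogen = {"glycerine": 0.00, "gelatine": 0.09, "manure": 0.05}  # N-load proxy

best, best_yield = None, -1.0
grid = [i / 20 for i in range(21)]  # blend fractions in steps of 0.05
for fg, fgel in product(grid, grid):
    fm = 1.0 - fg - fgel            # fractions must sum to one
    if fm < 0:
        continue
    frac = {"glycerine": fg, "gelatine": fgel, "manure": fm}
    n_load = sum(frac[s] * nitrogen[s] for s in frac)
    # Linear restrictions: keep some manure for buffering, cap the N load.
    if fm < 0.30 or n_load > 0.05:
        continue
    y = sum(frac[s] * methane[s] for s in frac)
    if y > best_yield:
        best, best_yield = frac, y

print(best, round(best_yield, 3))
```

A linear-programming solver reaches the same corner solution directly; the abstract's "adaptive" element corresponds to relaxing the active constraint bounds (here, the manure floor or the nitrogen cap) and re-optimising.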
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data
Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-01-01
Purpose To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI −0.03 to 0.32D, P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, P=0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision. PMID:28102741
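The effect of ignoring inter-eye correlation can be seen in a toy calculation. The refractive-error values below are invented, not the CNV study's data; the contrast between a naive standard error that treats the 2n eyes as independent and a paired, person-level standard error mirrors why the mixed and marginal models gave the same estimate with a narrower confidence interval.

```python
import math
import statistics

# Hypothetical refractive error (diopters) for affected and fellow eyes of
# the same eight people; values within a person are strongly correlated.
affected = [1.2, -0.5, 0.8, 2.1, 0.3, 1.7, -0.2, 0.9]
fellow   = [1.0, -0.8, 0.6, 1.8, 0.2, 1.4, -0.4, 0.7]
n = len(affected)

diff = [a - f for a, f in zip(affected, fellow)]
mean_diff = statistics.mean(diff)

# Naive SE: pretend the two eye samples come from unrelated people.
se_naive = math.sqrt(statistics.variance(affected) / n
                     + statistics.variance(fellow) / n)

# Paired SE: based on within-person differences, honouring the correlation
# (the simplest analogue of what a mixed-effects or marginal model does).
se_paired = statistics.stdev(diff) / math.sqrt(n)

print(f"mean difference {mean_diff:.2f} D, "
      f"naive SE {se_naive:.3f}, paired SE {se_paired:.3f}")
```

With positive inter-eye correlation the paired SE is much smaller, so the naive analysis here is conservative for a between-eye contrast; for other designs ignoring the correlation can instead understate the SE, which is why a model that represents the correlation explicitly is the safe default.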
Multiscale functions, scale dynamics, and applications to partial differential equations
NASA Astrophysics Data System (ADS)
Cresson, Jacky; Pierret, Frédéric
2016-05-01
Modeling phenomena from experimental data always begins with a choice of hypothesis on the observed dynamics such as determinism, randomness, and differentiability. Depending on these choices, different behaviors can be observed. The natural question associated with the modeling problem is the following: "With a finite set of data concerning a phenomenon, can we recover its underlying nature?" From this problem, we introduce in this paper the definition of multi-scale functions, scale calculus, and scale dynamics based on the time scale calculus [see Bohner, M. and Peterson, A., Dynamic Equations on Time Scales: An Introduction with Applications (Springer Science & Business Media, 2001)], which is used to introduce the notion of scale equations. These definitions will be illustrated on the multi-scale Okamoto's functions. Scale equations are analysed using scale regimes and the notion of asymptotic model for a scale equation under a particular scale regime. The introduced formalism explains why a single scale equation can produce distinct continuous models even if the equation is scale invariant. Typical examples of such equations are given by the scale Euler-Lagrange equation. We illustrate our results using the scale Newton's equation, which gives rise to a non-linear diffusion equation or a non-linear Schrödinger equation as asymptotic continuous models, depending on the particular fractional scale regime which is considered.
Formal methods for modeling and analysis of hybrid systems
NASA Technical Reports Server (NTRS)
Tiwari, Ashish (Inventor); Lincoln, Patrick D. (Inventor)
2009-01-01
A technique based on the use of a quantifier elimination decision procedure for real closed fields and simple theorem proving to construct a series of successively finer qualitative abstractions of hybrid automata is taught. The resulting abstractions are always discrete transition systems which can then be used by any traditional analysis tool. The constructed abstractions are conservative and can be used to establish safety properties of the original system. The technique works on linear and non-linear polynomial hybrid systems: the guards on discrete transitions and the continuous flows in all modes can be specified using arbitrary polynomial expressions over the continuous variables. An exemplar tool in the SAL environment built over the theorem prover PVS is detailed. The technique scales well to large and complex hybrid systems.
Small area estimation for semicontinuous data.
Chandra, Hukum; Chambers, Ray
2016-03-01
Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
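The two-part construction can be sketched numerically. The fitted values below are assumptions for illustration, not from the paper; the point is how the zero part (modelled on the logit scale) and the positive part (modelled on the log scale) combine into a small-area mean under lognormality: E[y] = Pr(y > 0) · exp(μ + σ²/2).

```python
import math

def expit(z):
    """Inverse logit."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fitted quantities for one small area:
logit_p = 0.4         # linear predictor of the zero part (incl. random effect)
mu, sigma = 2.1, 0.8  # log-scale mean and SD of the strictly positive part

p_positive = expit(logit_p)
mean_positive = math.exp(mu + sigma ** 2 / 2)  # lognormal back-transformation
area_mean = p_positive * mean_positive
print(f"Pr(y>0) = {p_positive:.3f}, E[y | y>0] = {mean_positive:.2f}, "
      f"E[y] = {area_mean:.2f}")
```

The back-transformation term exp(σ²/2) matters: exponentiating μ alone gives the median, not the mean, of the positive part, and omitting it biases the small-area estimate downward.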
A New SEYHAN's Approach in Case of Heterogeneity of Regression Slopes in ANCOVA.
Ankarali, Handan; Cangur, Sengul; Ankarali, Seyit
2018-06-01
In this study, we propose a new approach, named SEYHAN, that allows conventional ANCOVA to be used in place of robust or nonlinear ANCOVA when the assumptions of linearity and homogeneity of regression slopes are not met. SEYHAN's approach transforms a continuous covariate into a categorical one when the relationship between the covariate and the dependent variable is nonlinear and the regression slopes are not homogeneous. A simulated data set is used to illustrate the approach. After the MARS method is used to categorise the covariate, we either perform conventional ANCOVA within each subgroup defined by the knot values, or fit a two-factor analysis of variance. The first model is simpler than the second, which includes an interaction term. Because the model with the interaction effect uses more subjects, the power of the test increases and existing significant differences are revealed more clearly. With this approach, non-linearity and heterogeneity of regression slopes are no longer obstacles to analysing data with the conventional linear ANCOVA model, and it can be used quickly and efficiently in the presence of one or more covariates.
The morphing of geographical features by Fourier transformation
Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model of vector geographical data based on Fourier transformation. This model involves three main steps. They are conversion from vector data to Fourier series, generation of intermediate function by combination of the two Fourier series concerning a large scale and a small scale, and reverse conversion from combination function to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and it can be used for vector map features’ continuous scale transformation. The efficiency of this model is linearly related to the point number of shape boundary and the interceptive value n of Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable. PMID:29351344
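The core of the morphing model can be sketched with a discrete Fourier transform over boundary points. The shapes and the blending rule below are simplified illustrations (the paper's method adds mirror processing for linear features and scale-driven handling of the two representations): encode each closed boundary as complex points, take the DFT, linearly blend the two coefficient sets, and invert to obtain an intermediate shape.

```python
import cmath

def dft(pts):
    """Naive O(n^2) discrete Fourier transform of complex boundary points."""
    n = len(pts)
    return [sum(p * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, p in enumerate(pts)) / n
            for k in range(n)]

def idft(coeffs):
    """Inverse transform back to boundary points."""
    n = len(coeffs)
    return [sum(c * cmath.exp(2j * cmath.pi * k * m / n)
                for k, c in enumerate(coeffs))
            for m in range(n)]

# Two toy boundaries sampled with the same number of corresponding points:
# a detailed (large-scale) outline and its generalised (small-scale) version.
shape_a = [complex(1, 0), complex(0.7, 0.7), complex(0, 1), complex(-0.7, 0.7),
           complex(-1, 0), complex(-0.7, -0.7), complex(0, -1), complex(0.7, -0.7)]
shape_b = [0.5 * p for p in shape_a]  # same outline at a smaller scale

ca, cb = dft(shape_a), dft(shape_b)
t = 0.5  # morphing parameter between the two scale representations
blend = [(1 - t) * a + t * b for a, b in zip(ca, cb)]
mid = idft(blend)
print([f"{p.real:+.2f}{p.imag:+.2f}j" for p in mid[:4]])
```

Because the DFT is linear, blending coefficients is equivalent to blending the shapes pointwise here; the method becomes genuinely scale-sensitive when only a truncated, scale-dependent number of coefficients from each series is retained, which is the lever the paper's intermediate function uses.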
NASA Astrophysics Data System (ADS)
Beardsell, Alec; Collier, William; Han, Tao
2016-09-01
There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise-torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.
A Spatially Continuous Model of Carbohydrate Digestion and Transport Processes in the Colon
Moorthy, Arun S.; Brooks, Stephen P. J.; Kalmokoff, Martin; Eberl, Hermann J.
2015-01-01
A spatially continuous mathematical model of transport processes, anaerobic digestion and microbial complexity as would be expected in the human colon is presented. The model is a system of first-order partial differential equations with context determined number of dependent variables, and stiff, non-linear source terms. Numerical simulation of the model is used to elucidate information about the colon-microbiota complex. It is found that the composition of materials on outflow of the model does not well-describe the composition of material in other model locations, and inferences using outflow data varies according to model reactor representation. Additionally, increased microbial complexity allows the total microbial community to withstand major system perturbations in diet and community structure. However, distribution of strains and functional groups within the microbial community can be modified depending on perturbation length and microbial kinetic parameters. Preliminary model extensions and potential investigative opportunities using the computational model are discussed. PMID:26680208
ERIC Educational Resources Information Center
Molenaar, Dylan; Dolan, Conor V.; de Boeck, Paul
2012-01-01
The Graded Response Model (GRM; Samejima, "Estimation of ability using a response pattern of graded scores," Psychometric Monograph No. 17, Richmond, VA: The Psychometric Society, 1969) can be derived by assuming a linear regression of a continuous variable, Z, on the trait, [theta], to underlie the ordinal item scores (Takane & de Leeuw in…
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
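The slope correction itself is short. The repeated measurements and observed slope below are invented, not the insulin-clamp data; the mechanics follow the standard reliability-ratio correction, corrected slope = observed slope / λ, with λ estimated from a reliability study with two measurements per person.

```python
import statistics

# Hypothetical reliability study: two repeated fasting-insulin measurements
# per person (same units, same conditions).
m1 = [8.0, 12.5, 10.1, 15.3, 9.2, 11.8, 14.0, 7.5]
m2 = [9.1, 11.8, 11.0, 14.2, 8.5, 12.6, 13.1, 8.4]
n = len(m1)

means = [(a + b) / 2 for a, b in zip(m1, m2)]
# E[(m1 - m2)^2] = 2 * sigma_e^2, so the within-person (error) variance is:
within_var = sum((a - b) ** 2 for a, b in zip(m1, m2)) / (2 * n)
# var(means) = sigma_T^2 + sigma_e^2 / 2, so the between-person variance is:
between_var = statistics.variance(means) - within_var / 2

lam = between_var / (between_var + within_var)  # reliability (attenuation) ratio
observed_slope = -0.042                          # hypothetical diluted estimate
corrected_slope = observed_slope / lam
print(f"lambda = {lam:.3f}, corrected slope = {corrected_slope:.4f}")
```

Since 0 < λ < 1, the correction always moves the slope away from zero, undoing the downward (toward-null) bias; with very noisy measurements λ is small and the correction, and its uncertainty, are correspondingly large.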
Multi-objective possibilistic model for portfolio selection with transaction cost
NASA Astrophysics Data System (ADS)
Jana, P.; Roy, T. K.; Mazumder, S. K.
2009-06-01
In this paper, we introduce the possibilistic mean value and variance of continuous possibility distributions, rather than probability distributions. We propose a multi-objective portfolio-based model and add an entropy objective function to generate a well-diversified asset portfolio within optimal asset allocation. To quantify potential return and risk, portfolio liquidity is taken into account, and a multi-objective non-linear programming model for portfolio rebalancing with transaction cost is proposed. The models are illustrated with numerical examples.
Traveling-wave piezoelectric linear motor part II: experiment and performance evaluation.
Ting, Yung; Li, Chun-Chung; Chen, Liang-Chiang; Yang, Chieh-Min
2007-04-01
This article continues the discussion of a traveling-wave piezoelectric linear motor. Part I of this article dealt with the design and analysis of the stator of a traveling-wave piezoelectric linear motor. In this part, the discussion focuses on the structure and modeling of the contact layer and the carriage. In addition, the performance analysis and evaluation of the linear motor are also dealt with in this study. The traveling wave is created by the stator, which is constructed from a series of bimorph actuators arranged in a line and connected to form a meander-line structure. Analytical and experimental results of the performance are presented and shown to be almost in agreement. Power losses due to friction and transmission are studied and found to be significant. Compared with other types of linear motors, the motor in this study is capable of supporting heavier loads and provides a larger thrust force.
Certification of a hybrid parameter model of the fully flexible Shuttle Remote Manipulator System
NASA Technical Reports Server (NTRS)
Barhorst, Alan A.
1995-01-01
The development of high fidelity models of mechanical systems with flexible components is in flux. Many working models of these devices assume the elastic motion is small and can be superimposed on the overall rigid body motion. A drawback associated with this type of modeling technique is that it is required to regenerate the linear modal model of the device if the elastic motion is sufficiently far from the base rigid motion. An advantage to this type of modeling is that it uses NASTRAN modal data which is the NASA standard means of modal information exchange. A disadvantage to the linear modeling is that it fails to accurately represent large motion of the system, unless constant modal updates are performed. In this study, which is a continuation of a project started last year, the drawback of the currently used modal snapshot modeling technique is addressed in a rigorous fashion by novel and easily applied means.
Gagnon, B; Abrahamowicz, M; Xiao, Y; Beauchamp, M-E; MacDonald, N; Kasymjanova, G; Kreisman, H; Small, D
2010-03-30
C-reactive protein (CRP) is gaining credibility as a prognostic factor in different cancers. Cox's proportional hazard (PH) model is usually used to assess prognostic factors. However, this model imposes a priori assumptions, which are rarely tested, that (1) the hazard ratio associated with each prognostic factor remains constant across the follow-up (PH assumption) and (2) the relationship between a continuous predictor and the logarithm of the mortality hazard is linear (linearity assumption). We tested these two assumptions of the Cox's PH model for CRP, using a flexible statistical model, while adjusting for other known prognostic factors, in a cohort of 269 patients newly diagnosed with non-small cell lung cancer (NSCLC). In the Cox's PH model, high CRP increased the risk of death (HR=1.11 per each doubling of CRP value, 95% CI: 1.03-1.20, P=0.008). However, both the PH assumption (P=0.033) and the linearity assumption (P=0.015) were rejected for CRP, measured at the initiation of chemotherapy, which kept its prognostic value for approximately 18 months. Our analysis shows that flexible modeling provides new insights regarding the value of CRP as a prognostic factor in NSCLC and that Cox's PH model underestimates early risks associated with high CRP.
Inferring Action Structure and Causal Relationships in Continuous Sequences of Human Action
2014-01-01
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.
The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software package developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and conceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall-runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) Model. Comprised of the above suite of models, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts using the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective function evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.
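The consensus-forecasting step can be sketched as follows. The flows and model forecasts are invented, and the WAM weights are computed here by a simple inverse-MSE rule as a stand-in for GFFS's regression-based weighting (the NNM variant would replace this with a neural network); only the SAM/WAM contrast is the point.

```python
observed = [10.0, 12.0, 9.0, 14.0, 11.0]        # hypothetical discharges
forecasts = {                                    # hypothetical model outputs
    "SLM":  [9.0, 11.5, 8.0, 13.0, 10.5],
    "LPM":  [10.5, 12.8, 9.6, 14.9, 11.2],
    "SMAR": [11.0, 13.0, 10.0, 15.5, 12.0],
}

# SAM: simple (unweighted) average of the model forecasts at each time step.
sam = [sum(f[t] for f in forecasts.values()) / len(forecasts)
       for t in range(len(observed))]

# WAM (simplified): weight each model by the inverse of its mean squared
# error on past flows, normalised to sum to one.
inv_mse = {m: 1.0 / (sum((o - f) ** 2 for o, f in zip(observed, fc))
                     / len(observed))
           for m, fc in forecasts.items()}
total = sum(inv_mse.values())
weights = {m: w / total for m, w in inv_mse.items()}
wam = [sum(weights[m] * forecasts[m][t] for m in forecasts)
       for t in range(len(observed))]

mse = lambda pred: sum((o - p) ** 2 for o, p in zip(observed, pred)) / len(observed)
print({"SAM": round(mse(sam), 3), "WAM": round(mse(wam), 3)})
```

With models of unequal skill, the weighted combination typically beats the plain average; when the member models are similarly skilful, SAM is hard to improve upon, which is why GFFS offers both alongside the neural-network combiner.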
Stochastic and deterministic model of microbial heat inactivation.
Corradini, Maria G; Normand, Mark D; Peleg, Micha
2010-03-01
Microbial inactivation is described by a model based on the changing survival probabilities of individual cells or spores. It is presented in a stochastic and discrete form for small groups, and as a continuous deterministic model for larger populations. If the underlying mortality probability function remains constant throughout the treatment, the model generates first-order ("log-linear") inactivation kinetics. Otherwise, it produces survival patterns that include Weibullian ("power-law") with upward or downward concavity, tailing with a residual survival level, complete elimination, flat "shoulder" with linear or curvilinear continuation, and sigmoid curves. In both forms, the same algorithm or model equation applies to isothermal and dynamic heat treatments alike. Constructing the model does not require assuming a kinetic order or knowledge of the inactivation mechanism. The general features of its underlying mortality probability function can be deduced from the experimental survival curve's shape. Once identified, the function's coefficients, the survival parameters, can be estimated directly from the experimental survival ratios by regression. The model is testable in principle but matching the estimated mortality or inactivation probabilities with those of the actual cells or spores can be a technical challenge. The model is not intended to replace current models to calculate sterility. Its main value, apart from connecting the various inactivation patterns to underlying probabilities at the cellular level, might be in simulating the irregular survival patterns of small groups of cells and spores. In principle, it can also be used for nonthermal methods of microbial inactivation and their combination with heat.
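The abstract's two model forms, stochastic for small groups and deterministic for large populations, can be sketched together. The Weibullian coefficients and step size below are assumed for illustration; each cell survives a time step with the conditional probability implied by the same cumulative survival curve, so a small group shows ragged, irregular survival while a large population tracks the smooth deterministic curve.

```python
import math
import random

b, n_shape = 0.01, 1.5   # Weibullian coefficients: log10 S(t) = -b * t**n (assumed)
dt, t_end = 0.5, 20.0

def surv_frac(t):
    """Deterministic Weibullian survival ratio S(t)."""
    return 10 ** (-b * t ** n_shape)

def simulate(n_cells, seed=1):
    """Discrete stochastic version: per-cell survival trials each step."""
    rng = random.Random(seed)
    alive, t, history = n_cells, 0.0, []
    while t < t_end:
        # Conditional one-step survival probability from the same curve.
        p_step = surv_frac(t + dt) / surv_frac(t)
        alive = sum(1 for _ in range(alive) if rng.random() < p_step)
        t += dt
        history.append(alive / n_cells)
    return history

small = simulate(20)       # a small group: irregular, step-like decline
large = simulate(20000)    # a large population: approaches the smooth curve
expected = surv_frac(t_end)
print(f"deterministic S(20) = {expected:.4f}, "
      f"small group = {small[-1]:.4f}, large population = {large[-1]:.4f}")
```

Because the step probabilities are derived from the cumulative curve, their product telescopes to S(t), so the large-population simulation converges on the deterministic model while the 20-cell group can deviate wildly, which is exactly the regime the stochastic form is meant to capture.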
Remontet, Laurent; Uhry, Zoé; Bossard, Nadine; Iwaz, Jean; Belot, Aurélien; Danieli, Coraline; Charvat, Hadrien; Roche, Laurent
2018-01-01
Cancer survival trend analyses are essential to describe accurately the way medical practices impact patients' survival according to the year of diagnosis. To this end, survival models should be able to account simultaneously for non-linear and non-proportional effects and for complex interactions between continuous variables. However, in the statistical literature, there is no consensus yet on how to build such models that should be flexible but still provide smooth estimates of survival. In this article, we tackle this challenge by smoothing the complex hypersurface (time since diagnosis, age at diagnosis, year of diagnosis, and mortality hazard) using a multidimensional penalized spline built from the tensor product of the marginal bases of time, age, and year. Considering this penalized survival model as a Poisson model, we assess the performance of this approach in estimating the net survival with a comprehensive simulation study that reflects simple and complex realistic survival trends. The bias was generally small and the root mean squared error was good and often similar to that of the true model that generated the data. This parametric approach offers many advantages and interesting prospects (such as forecasting) that make it an attractive and efficient tool for survival trend analyses.
FINITE ELEMENT MODEL FOR TIDAL AND RESIDUAL CIRCULATION.
Walters, Roy A.
1986-01-01
Harmonic decomposition is applied to the shallow water equations, thereby creating a system of equations for the amplitude of the various tidal constituents and for the residual motions. The resulting equations are elliptic in nature, are well posed and in practice are shown to be numerically well-behaved. There are a number of strategies for choosing elements: the two extremes are to use a few high-order elements with continuous derivatives, or to use a large number of simpler linear elements. In this paper simple linear elements are used and prove effective.
NASA Astrophysics Data System (ADS)
Jonrinaldi; Rahman, T.; Henmaidi; Wirdianto, E.; Zhang, D. Z.
2018-03-01
This paper proposes a mathematical model for multiple-item Economic Production and Order Quantity (EPQ/EOQ) that considers continuous and discrete demand simultaneously in a system consisting of a vendor and multiple buyers. The model is used to investigate the optimal production lot size of the vendor and the number-of-shipments policy for orders to multiple buyers. It accounts for the multiple buyers' holding costs as well as transportation costs, and minimizes the total production and inventory costs of the system. Continuous demand from other customers can be fulfilled at any time by the vendor, while discrete demand from the multiple buyers is fulfilled using a multiple-delivery policy with a number of shipments of items within the production cycle time. A mathematical model is developed to describe the system based on the EPQ and EOQ models. Solution procedures are proposed to solve the model using Mixed Integer Non-Linear Programming (MINLP) and algorithmic methods. A numerical example is then provided to illustrate the system, and the results are discussed.
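The paper's joint vendor-buyer model is a MINLP; as background, the classic single-item EOQ and EPQ lot-size formulas on which such models build can be sketched as follows (a textbook illustration with made-up numbers, not the authors' model):

```python
from math import sqrt

def eoq(demand_rate, order_cost, holding_cost):
    """Classic economic order quantity: Q* = sqrt(2*D*K / h)."""
    return sqrt(2.0 * demand_rate * order_cost / holding_cost)

def epq(demand_rate, setup_cost, holding_cost, production_rate):
    """Economic production quantity; reduces to EOQ as production_rate -> inf."""
    rho = demand_rate / production_rate
    return sqrt(2.0 * demand_rate * setup_cost / (holding_cost * (1.0 - rho)))

# Example: D = 1000 units/yr, K = $50/order, h = $4/unit/yr
q_star = eoq(1000, 50, 4)   # ~158.1 units per order
```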
A Comparison of Multivariable Control Design Techniques for a Turbofan Engine Control
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Watts, Stephen R.
1995-01-01
This paper compares two previously published design procedures for two different multivariable control design techniques for application to a linear engine model of a jet engine. The two multivariable control design techniques compared were the Linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR) and the H-Infinity synthesis. The two control design techniques were used with specific previously published design procedures to synthesize controls which would provide equivalent closed loop frequency response for the primary control loops while assuring adequate loop decoupling. The resulting controllers were then reduced in order to minimize the programming and data storage requirements for a typical implementation. The reduced order linear controllers designed by each method were combined with the linear model of an advanced turbofan engine and the system performance was evaluated for the continuous linear system. Included in the performance analysis are the resulting frequency and transient responses as well as actuator usage and rate capability for each design method. The controls were also analyzed for robustness with respect to structured uncertainties in the unmodeled system dynamics. The two controls were then compared for performance capability and hardware implementation issues.
Heating and Acceleration of Charged Particles by Weakly Compressible Magnetohydrodynamic Turbulence
NASA Astrophysics Data System (ADS)
Lynn, Jacob William
We investigate the interaction between low-frequency magnetohydrodynamic (MHD) turbulence and a distribution of charged particles. Understanding this physics is central to understanding the heating of the solar wind, as well as the heating and acceleration of other collisionless plasmas. Our central method is to simulate weakly compressible MHD turbulence using the Athena code, along with a distribution of test particles which feel the electromagnetic fields of the turbulence. We also construct analytic models of transit-time damping (TTD), which results from the mirror force caused by compressible (fast or slow) MHD waves. Standard linear-theory models in the literature require an exact resonance between particle and wave velocities to accelerate particles. The models developed in this thesis go beyond standard linear theory to account for the fact that wave-particle interactions decorrelate over a short time, which allows particles with velocities off resonance to undergo acceleration and velocity diffusion. We use the test particle simulation results to calibrate and distinguish between different models for this velocity diffusion. Test particle heating is larger than the linear theory prediction, due to continued acceleration of particles with velocities off-resonance. We also include an artificial pitch-angle scattering to the test particle motion, representing the effect of high-frequency waves or velocity-space instabilities. For low scattering rates, we find that the scattering enforces isotropy and enhances heating by a modest factor. For much higher scattering rates, the acceleration is instead due to a non-resonant effect, as particles "frozen" into the fluid adiabatically gain and lose energy as eddies expand and contract. Lastly, we generalize our calculations to allow for relativistic test particles. 
Linear theory predicts that relativistic particles with velocities much higher than the speed of waves comprising the turbulence would undergo no acceleration; resonance-broadening modifies this conclusion and allows for a continued Fermi-like acceleration process. This may affect the observed spectra of black hole accretion disks by accelerating relativistic particles into a quasi-powerlaw tail.
Design and experimental validation of linear and nonlinear vehicle steering control strategies
NASA Astrophysics Data System (ADS)
Menhour, Lghani; Lechner, Daniel; Charara, Ali
2012-06-01
This paper proposes the design of three control laws dedicated to vehicle steering control, two based on robust linear control strategies and one based on a nonlinear control strategy, and presents a comparison between them. The two robust linear control laws (indirect and direct methods) are built around M linear bicycle models; each of these control laws is composed of two M proportional integral derivative (PID) controllers: one M PID controller to control the lateral deviation and the other M PID controller to control the vehicle yaw angle. The indirect control law is designed using an oscillation method and a nonlinear optimisation subject to an H ∞ constraint. The direct control law is designed using a linear matrix inequality optimisation in order to achieve H ∞ performance. The nonlinear control method used for the correction of the lateral deviation is based on a continuous first-order sliding-mode controller. The different methods are designed using a linear bicycle vehicle model with varying parameters, but the aim is to simulate the nonlinear vehicle behaviour under high dynamic demands with a four-wheel vehicle model. These steering vehicle controls are validated experimentally using data acquired with a laboratory vehicle, a Peugeot 307, developed by the National Institute for Transport and Safety Research's Department of Accident Mechanism Analysis (INRETS-MA), and their performance results are compared. Moreover, an unknown-input sliding-mode observer is introduced to estimate the road bank angle.
NASA Technical Reports Server (NTRS)
Nobbs, Steven G.
1995-01-01
An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and the engine control interface are described. The PSC algorithm receives input from various computers on the aircraft, including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to changes in the control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined by the selected PSC mode of operation. The resulting trims are used to compute a new operating point, about which the optimization process is repeated. This process continues until an overall (global) optimum is reached, before the trims are applied to the controllers.
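A linear-programming trim optimization of the kind described can be sketched with SciPy; the sensitivities, constraint, and bounds below are hypothetical placeholders for illustration only, not PSC values:

```python
from scipy.optimize import linprog

# Hypothetical linearized sensitivities: d(thrust)/d(trim_i) for two trims,
# maximizing thrust subject to a fan-stall-margin budget and trim limits.
c = [-1.2, -0.8]                    # negated to maximize 1.2*x1 + 0.8*x2
A_ub = [[0.5, 0.9]]                 # stall-margin usage per unit trim
b_ub = [1.0]                        # available stall margin
bounds = [(0.0, 1.5), (0.0, 1.5)]   # actuator trim limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
trims = res.x                       # optimal trims about the operating point
```

In the PSC scheme this solve would be repeated about each new operating point until the trims converge to a global optimum.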
Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions
NASA Astrophysics Data System (ADS)
Wrench, Alan A.
Available from UMI in association with The British Library. Requires signed TDF. The Linear Prediction model was first applied to speech two and a half decades ago. Since then it has been the subject of intense research and continues to be one of the principal tools in the analysis of speech. Its mathematical tractability makes it a suitable subject for study and its proven success in practical applications makes the study worthwhile. The model is known to be unsuited to speech corrupted by background noise. This has led many researchers to investigate ways of enhancing the speech signal prior to Linear Predictive analysis. In this thesis this body of work is extended. The chosen application is low bit-rate (2.4 kbits/sec) speech coding. For this task the performance of the Linear Prediction algorithm is crucial because there is insufficient bandwidth to encode the error between the modelled speech and the original input. A review of the fundamentals of Linear Prediction and an independent assessment of the relative performance of methods of Linear Prediction modelling are presented. A new method is proposed which is fast and facilitates stability checking; however, its stability is shown to be unacceptably poor compared with existing methods. A novel supposition governing the positioning of the analysis frame relative to a voiced speech signal is proposed and supported by observation. The problem of coding noisy speech is examined. Four frequency domain speech processing techniques are developed and tested. These are: (i) Combined Order Linear Prediction Spectral Estimation; (ii) Frequency Scaling According to an Aural Model; (iii) Amplitude Weighting Based on Perceived Loudness; (iv) Power Spectrum Squaring. These methods are compared with the Recursive Linearised Maximum a Posteriori method. Following on from work done in the frequency domain, a time domain implementation of spectrum squaring is developed.
In addition, a new method of power spectrum estimation is developed based on the Minimum Variance approach. This new algorithm is shown to be closely related to Linear Prediction but produces slightly broader spectral peaks. Spectrum squaring is applied to both the new algorithm and standard Linear Prediction and their relative performance is assessed. (Abstract shortened by UMI.).
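The autocorrelation method of Linear Prediction discussed in the thesis can be sketched in a few lines; this is a generic textbook implementation (our own, not the author's proposed method), verified here on a synthetic AR(1) signal:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(sig, order):
    """Autocorrelation-method linear prediction coefficients.

    Returns a such that s[n] ~= sum_k a[k] * s[n-1-k], by solving the
    Toeplitz normal equations R a = r (Yule-Walker form).
    """
    s = np.asarray(sig, dtype=float)
    r = np.correlate(s, s, mode="full")[len(s) - 1 : len(s) + order]
    return solve_toeplitz(r[:order], r[1 : order + 1])

# A pure AR(1) process s[n] = 0.9*s[n-1] + e[n] should give a[0] near 0.9
rng = np.random.default_rng(0)
s = np.zeros(4000)
for n in range(1, len(s)):
    s[n] = 0.9 * s[n - 1] + rng.standard_normal()
a = lpc(s, order=1)
```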
Gutman, Boris; Leonardo, Cassandra; Jahanshad, Neda; Hibar, Derrek; Eschenburg, Kristian; Nir, Talia; Villalon, Julio; Thompson, Paul
2014-01-01
We present a framework for registering cortical surfaces based on tractography-informed structural connectivity. We define connectivity as a continuous kernel on the product space of the cortex, and develop a method for estimating this kernel from tractography fiber models. Next, we formulate the kernel registration problem, and present a means to non-linearly register two brains’ continuous connectivity profiles. We apply theoretical results from operator theory to develop an algorithm for decomposing the connectome into its shared and individual components. Lastly, we extend two discrete connectivity measures to the continuous case, and apply our framework to 98 Alzheimer’s patients and controls. Our measures show significant differences between the two groups. PMID:25320795
Chemical reactions simulated by ground-water-quality models
Grove, David B.; Stollenwerk, Kenneth G.
1987-01-01
Recent literature concerning the modeling of chemical reactions during transport in ground water is examined with emphasis on sorption reactions. The theory of transport and reactions in porous media has been well documented. Numerous equations have been developed from this theory, to provide both continuous and sequential or multistep models, with the water phase considered for both mobile and immobile phases. Chemical reactions can be either equilibrium or non-equilibrium, and can be quantified in linear or non-linear mathematical forms. Non-equilibrium reactions can be separated into kinetic and diffusional rate-limiting mechanisms. Solutions to the equations are available by either analytical expressions or numerical techniques. Saturated and unsaturated batch, column, and field studies are discussed with one-dimensional, laboratory-column experiments predominating. A summary table is presented that references the various kinds of models studied and their applications in predicting chemical concentrations in ground waters.
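For the linear equilibrium sorption case reviewed above, the standard retardation-factor relation can be illustrated directly (the parameter values below are hypothetical, chosen only to show the calculation):

```python
def retardation_factor(bulk_density, porosity, kd):
    """Retardation factor for linear equilibrium sorption:
    R = 1 + (rho_b / theta) * Kd."""
    return 1.0 + (bulk_density / porosity) * kd

# Hypothetical sandy aquifer: rho_b = 1.6 g/cm^3, theta = 0.35, Kd = 0.5 mL/g
R = retardation_factor(1.6, 0.35, 0.5)
v_water = 0.3            # groundwater velocity, m/day
v_solute = v_water / R   # retarded velocity of the sorbing solute
```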
Pretest Predictions for Ventilation Tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. Sun; H. Yang; H.N. Kalia
The objective of this calculation is to predict the temperatures of the ventilating air, waste package surface, concrete pipe walls, and insulation that will be developed during the ventilation tests involving various test conditions. The results will be used as input to the following three areas: (1) Decisions regarding testing set-up and performance. (2) Assessing how best to scale the test phenomena measured. (3) Validating the numerical approach for modeling continuous ventilation. The scope of the calculation is to identify the physical mechanisms and parameters related to thermal response in the ventilation tests, and to develop and describe numerical methods that can be used to calculate the effects of continuous ventilation. Sensitivity studies to assess the impact of variation of linear power densities (linear heat loads) and ventilation air flow rates are included. The calculation is limited to thermal effects only.
Time and frequency domain analysis of sampled data controllers via mixed operation equations
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1981-01-01
Specification of the mathematical equations required to define the dynamic response of a linear continuous plant, subject to sampled data control, is complicated by the fact that the digital components of the control system cannot be modeled via linear ordinary differential equations. This complication can be overcome by introducing two new mathematical operations, namely, the operations of zero-order hold and digital delay. It is shown that, by direct utilization of these operations, a set of linear mixed operation equations can be written and used to define the dynamic response characteristics of the controlled system. It is also shown how these linear mixed operation equations lead, in an automatable manner, directly to a set of finite difference equations which are in a format compatible with follow-on time and frequency domain analysis methods.
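The zero-order-hold operation central to this formulation is what standard discretization routines implement: the continuous plant is mapped to the finite-difference form x[k+1] = Ad x[k] + Bd u[k]. A minimal sketch using SciPy (the double-integrator plant is our own example):

```python
import numpy as np
from scipy.signal import cont2discrete

# Continuous double-integrator plant xdot = A x + B u, sampled with ZOH at T = 0.1 s
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)
D = np.zeros((2, 1))

T = 0.1
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), T, method="zoh")
# For this plant the ZOH result is exact:
#   Ad = [[1, T], [0, 1]],  Bd = [[T**2/2], [T]]
```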
NASA Astrophysics Data System (ADS)
Hapugoda, J. C.; Sooriyarachchi, M. R.
2017-09-01
Survival time of patients with a disease and the incidence of that particular disease (count) are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated in such a way that diseases which occur rarely could have shorter survival times, or vice versa. Due to this fact, joint modelling of these two variables will provide more informative and improved results than modelling them separately. The authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the Discrete Time Hazard model with the Poisson Regression model, to jointly model survival and count. As the Artificial Neural Network (ANN) has become a powerful computational tool for modelling complex non-linear systems, it was proposed to develop a new joint model of survival and count of Dengue patients in Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare the model fits, measures such as root mean square error (RMSE), absolute mean error (AME) and the correlation coefficient (R) were used. The measures indicate that the GRNN model fits the data better than the GLMM model.
NASA Astrophysics Data System (ADS)
Marras, Simone; Kopera, Michal A.; Constantinescu, Emil M.; Suckale, Jenny; Giraldo, Francis X.
2018-04-01
The high-order numerical solution of the non-linear shallow water equations is susceptible to Gibbs oscillations in the proximity of strong gradients. In this paper, we tackle this issue by presenting a shock capturing model based on the numerical residual of the solution. Via numerical tests, we demonstrate that the model removes the spurious oscillations in the proximity of strong wave fronts while preserving their strength. Furthermore, for coarse grids, it prevents energy from building up at small wave-numbers. When applied to the continuity equation to stabilize the water surface, the addition of the shock capturing scheme does not affect mass conservation. We found that our model improves the continuous and discontinuous Galerkin solutions alike in the proximity of sharp fronts propagating on wet surfaces. In the presence of wet/dry interfaces, however, the model needs to be enhanced with the addition of an inundation scheme, which we do not address in this paper.
NASA Astrophysics Data System (ADS)
Hirakawa, E. T.; Ezzedine, S. M.; Petersson, A.; Sjogreen, B.; Vorobiev, O.; Pitarka, A.; Antoun, T.; Walter, W. R.
2016-12-01
Motions from underground explosions are governed by non-linear hydrodynamic response of material. However, the numerical calculation of this non-linear constitutive behavior is computationally intensive in contrast to the elastic and acoustic linear wave propagation solvers. Here, we develop a hybrid modeling approach with one-way hydrodynamic-to-elastic coupling in three dimensions in order to propagate explosion generated ground motions from the non-linear near-source region to the far-field. Near source motions are computed using GEODYN-L, a Lagrangian hydrodynamics code for high-energy loading of earth materials. Motions on a dense grid of points sampled on two nested shells located beyond the non-linear damaged zone are saved, and then passed to SW4, an anelastic anisotropic fourth order finite difference code for seismic wave modeling. Our coupling strategy is based on the decomposition and uniqueness theorems where motions are introduced into SW4 as a boundary source and continue to propagate as elastic waves at a much lower computational cost than by using GEODYN-L to cover the entire near- and the far-field domain. The accuracy of the numerical calculations and the coupling strategy is demonstrated in cases with a purely elastic medium as well as non-linear medium. Our hybrid modeling approach is applied to SPE-4' and SPE-5 which are the most recent underground chemical explosions conducted at the Nevada National Security Site (NNSS) where the Source Physics Experiments (SPE) are performed. Our strategy by design is capable of incorporating complex non-linear effects near the source as well as volumetric and topographic material heterogeneity along the propagation path to receiver, and provides new prospects for modeling and understanding explosion generated seismic waveforms. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-698608.
Implementation of Dryden Continuous Turbulence Model into Simulink for LSA-02 Flight Test Simulation
NASA Astrophysics Data System (ADS)
Ichwanul Hakim, Teuku Mohd; Arifianto, Ony
2018-04-01
Turbulence is a small-scale movement of air in the atmosphere caused by instabilities in the pressure and temperature distribution. A turbulence model is integrated into the flight mechanical model as an atmospheric disturbance. Common turbulence models used in flight mechanical models are the Dryden and Von Karman models. In this minor research, only the Dryden continuous turbulence model was implemented, following the military specification MIL-HDBK-1797. The model was implemented in Matlab Simulink and will be integrated with the flight mechanical model to observe the response of the aircraft when it flies through a turbulence field. The turbulence model is characterized by multiplying filters generated from the power spectral densities with a band-limited Gaussian white noise input. To ensure that the model provides good results, model verification was performed by comparing the implemented model with the similar model provided in the Aerospace Blockset. The results show some differences for the two linear velocities (vg and wg) and the three angular rates (pg, qg and rg). The difference is caused by a different determination of the turbulence scale length in the Aerospace Blockset. With an adjustment of the turbulence scale length in the implemented model, both models produce similar outputs.
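A minimal sketch of the u-component Dryden filter driven by band-limited white noise, in the spirit of the implementation described (airspeed, scale length and intensity below are illustrative values, not from the paper):

```python
import numpy as np
from scipy import signal

# u-component Dryden shaping filter (MIL-HDBK-1797 form):
#   H_u(s) = sigma_u * sqrt(2*L_u/(pi*V)) / (1 + (L_u/V)*s)
V, L_u, sigma_u = 50.0, 200.0, 1.5          # airspeed m/s, scale m, intensity m/s
tau = L_u / V
gain = sigma_u * np.sqrt(2.0 * L_u / (np.pi * V))
H_u = signal.TransferFunction([gain], [tau, 1.0])

dt = 0.01
t = np.arange(0.0, 60.0, dt)
rng = np.random.default_rng(1)
white = rng.standard_normal(len(t)) / np.sqrt(dt)   # band-limited white noise
_, u_gust, _ = signal.lsim(H_u, white, t)           # longitudinal gust velocity
```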
Modeling of testosterone regulation by pulse-modulated feedback: An experimental data study
NASA Astrophysics Data System (ADS)
Mattsson, Per; Medvedev, Alexander
2013-10-01
The continuous part of a hybrid (pulse-modulated) model of testosterone feedback regulation is extended with infinite-dimensional and nonlinear dynamics, to better explain the testosterone concentration profiles observed in clinical data. A linear least-squares based optimization algorithm is developed for the purpose of detecting impulses of gonadotropin-releasing hormone from measured concentrations of luteinizing hormone. The parameters in the model are estimated from hormone concentrations measured in human males, and simulation results from the full closed-loop system are provided.
NASA Astrophysics Data System (ADS)
Stankiewicz, Witold; Morzyński, Marek; Kotecki, Krzysztof; Noack, Bernd R.
2017-04-01
We present a low-dimensional Galerkin model with state-dependent modes capturing linear and nonlinear dynamics. The departure point is a direct numerical simulation of the three-dimensional incompressible flow around a sphere at a Reynolds number of 400. This solution starts near the unstable steady Navier-Stokes solution and converges to a periodic limit cycle. The investigated Galerkin models are based on the dynamic mode decomposition (DMD) and derive the dynamical system from first principles, the Navier-Stokes equations. A DMD model with training data from the initial linear transient fails to predict the limit cycle. Conversely, a model from limit-cycle data underpredicts the initial growth rate roughly by a factor of 5. Key enablers for uniform accuracy throughout the transient are a continuous mode interpolation between both oscillatory fluctuations and the addition of a shift mode. This interpolated model is shown to capture both the transient growth of the oscillation and the limit cycle.
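The DMD step underlying these Galerkin models can be sketched generically; this is the standard exact-DMD algorithm on a toy linear system (our own illustration, not the authors' interpolated model):

```python
import numpy as np

def dmd(X, r):
    """Exact dynamic mode decomposition of snapshot matrix X (states x time).

    Fits the best linear map X2 ~= A X1 on a rank-r subspace and returns
    its eigenvalues and DMD modes.
    """
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s   # projected operator
    lam, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T / s @ W / lam        # exact DMD modes
    return lam, modes

# Snapshots of a known linear map: DMD recovers its eigenvalues exactly
A = np.array([[0.9, -0.2], [0.2, 0.9]])
X = np.empty((2, 21))
X[:, 0] = [1.0, 0.0]
for k in range(20):
    X[:, k + 1] = A @ X[:, k]
lam, modes = dmd(X, r=2)   # lam recovers eig(A) = 0.9 +/- 0.2j
```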
Linear regression models and k-means clustering for statistical analysis of fNIRS data.
Bonomini, Viola; Zucchelli, Lucia; Re, Rebecca; Ieva, Francesca; Spinelli, Lorenzo; Contini, Davide; Paganoni, Anna; Torricelli, Alessandro
2015-02-01
We propose a new algorithm, based on a linear regression model, to statistically estimate the hemodynamic activations in fNIRS data sets. The main concern guiding the algorithm development was the minimization of assumptions and approximations made on the data set for the application of statistical tests. Further, we propose a K-means method to cluster fNIRS data (i.e. channels) as activated or not activated. The methods were validated both on simulated and in vivo fNIRS data. A time domain (TD) fNIRS technique was preferred because of its high performances in discriminating cortical activation and superficial physiological changes. However, the proposed method is also applicable to continuous wave or frequency domain fNIRS data sets.
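A plain k-means pass over per-channel features, as used here to label channels activated or not activated, can be sketched as follows (the synthetic two-group data and K = 2 are our own illustration, not the authors' pipeline):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: rows of X are observations (e.g. per-channel features)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Two synthetic groups of channels: baseline near (0,0), "activated" near (1,1)
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (10, 2)),
               np.random.default_rng(2).normal(1.0, 0.1, (10, 2))])
labels, centers = kmeans(X, k=2)
```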
Tori and chaos in a simple C1-system
NASA Astrophysics Data System (ADS)
Roessler, O. E.; Kahlert, C.; Uehleke, B.
A piecewise-linear autonomous 3-variable ordinary differential equation is presented which permits analytical modeling of chaotic attractors. A once-differentiable system of equations is defined which consists of two linear half-systems that meet along a threshold plane. The trajectories described by each equation are thereby continuous along the divide, forming a one-parameter family of invariant tori. The addition of a damping term produces a system of equations for various chaotic attractors. Extension of the system by means of a 4-variable generalization yields hypertori and hyperchaos. It is noted that the hierarchy established is amenable to analysis by the use of Poincare half-maps. Applications of the systems of ordinary differential equations to modeling turbulent flows are discussed.
Iterative LQG Controller Design Through Closed-Loop Identification
NASA Technical Reports Server (NTRS)
Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.
1996-01-01
This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed in the previous cycle. Then the identified open-loop model is used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.
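The state-feedback redesign step in each cycle can be illustrated with a discrete-time LQR computation via the Riccati equation (a generic sketch on a toy unstable scalar plant, not the paper's magnetic suspension system):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    """Discrete-time LQR gain for u = -K x minimizing sum x'Qx + u'Ru."""
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

# Unstable scalar plant x[k+1] = 1.2 x[k] + u[k]
A = np.array([[1.2]])
B = np.array([[1.0]])
Q = np.array([[1.0]])
R = np.array([[1.0]])
K, P = dlqr(A, B, Q, R)   # closed loop A - B K is stable
```

In the iterative scheme, A and B would be replaced each cycle by the freshly identified open-loop model before recomputing K.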
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, J.S.; Abrahmson, S.; Bender, M.A.
1993-10-01
This report is a revision of NUREG/CR-4214, Rev. 1, Part 1 (1990), Health Effects Models for Nuclear Power Plant Accident Consequence Analysis. This revision has been made to incorporate changes to the health effects models recommended in two addenda to the NUREG/CR-4214, Rev. 1, Part II, 1989 report. The first of these addenda provided recommended changes to the health effects models for low-LET radiations based on recent reports from UNSCEAR, ICRP and NAS/NRC (BEIR V). The second addendum presented changes needed to incorporate alpha-emitting radionuclides into the accident exposure source term. As in the earlier version of this report, models are provided for early and continuing effects, cancers and thyroid nodules, and genetic effects. Weibull dose-response functions are recommended for evaluating the risks of early and continuing health effects. Three potentially lethal early effects, the hematopoietic, pulmonary, and gastrointestinal syndromes, are considered. Linear and linear-quadratic models are recommended for estimating the risks of seven types of cancer in adults: leukemia, bone, lung, breast, gastrointestinal, thyroid, and "other". For most cancers, both incidence and mortality are addressed. Five classes of genetic diseases, namely dominant, x-linked, aneuploidy, unbalanced translocations, and multifactorial diseases, are also considered. Data are provided that should enable analysts to consider the timing and severity of each type of health risk.
Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.
ERIC Educational Resources Information Center
Poole, Keith T.
1990-01-01
A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators and 1,258 representatives demonstrate the procedure's…
Hierarchical Linear Modeling Meta-Analysis of Single-Subject Design Research
ERIC Educational Resources Information Center
Gage, Nicholas A.; Lewis, Timothy J.
2014-01-01
The identification of evidence-based practices continues to provoke issues of disagreement across multiple fields. One area of contention is the role of single-subject design (SSD) research in providing scientific evidence. The debate about SSD's utility centers on three issues: sample size, effect size, and serial dependence. One potential…
Sufficient Dimension Reduction for Longitudinally Measured Predictors
Pfeiffer, Ruth M.; Forzani, Liliana; Bura, Efstathia
2013-01-01
We propose a method to combine several predictors (markers) that are measured repeatedly over time into a composite marker score without assuming a model and only requiring a mild condition on the predictor distribution. Assuming that the first and second moments of the predictors can be decomposed into a time and a marker component via a Kronecker product structure that accommodates the longitudinal nature of the predictors, we develop first-moment sufficient dimension reduction techniques to replace the original markers with linear transformations that contain sufficient information for the regression of the outcome on the predictors. These linear combinations can then be combined into a score that has better predictive performance than a score built under a general model that ignores the longitudinal structure of the data. Our methods can be applied to either continuous or categorical outcome measures. In simulations we focus on binary outcomes and show that our method outperforms existing alternatives using the AUC, the area under the receiver-operator characteristic (ROC) curve, as a summary measure of the discriminatory ability of a single continuous diagnostic marker for binary disease outcomes. PMID:22161635
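The AUC criterion used above to compare marker scores can be computed directly from the Mann-Whitney statistic. A minimal sketch, not the authors' method: the weights, sample sizes, and data below are invented, and the composite score is just a fixed linear combination of repeated marker measurements.

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive case scores higher than a negative one
    (ties count one half)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]          # all positive/negative pairs
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

# Composite score = linear combination of repeated marker measurements
rng = np.random.default_rng(0)
n, t = 200, 4                                   # subjects, time points
y = rng.integers(0, 2, n)                       # binary outcome
x = rng.normal(size=(n, t)) + 0.8 * y[:, None]  # marker shifted for cases
w = np.ones(t) / t                              # illustrative weights (mean over time)
score = x @ w
print(round(auc_mann_whitney(score, y), 3))
```

Any other linear combination `w` (e.g. one estimated by a dimension-reduction step) plugs into the same scoring and AUC evaluation.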
NASA Astrophysics Data System (ADS)
Minderhoud, Philip S. J.; Cohen, Kim M.; Toonen, Willem H. J.; Erkens, Gilles; Hoek, Wim Z.
2017-04-01
Lacustrine fills, including those of oxbow lakes in river floodplains, often hold valuable sedimentary and biological proxy records of palaeo-environmental change. Precise dating of accumulated sediments at levels throughout these records is crucial for interpretation and correlation of (proxy) data existing within the fills. Typically, dates are gathered from multiple sampled levels, and their results are combined in age-depth models to estimate the ages of events identified between the dated levels. In this paper, a method of age-depth modelling is presented that varies the vertical accumulation rate of the lake fill based on continuous sedimentary data. In between Bayesian-calibrated radiocarbon dates, this produces a modified non-linear age-depth relation based on sedimentology rather than linear or spline interpolation. The method is showcased on a core of an infilled palaeomeander at the floodplain edge of the river Rhine near Rheinberg (Germany). The sequence spans from 4.7 to 2.9 ka cal BP and consists of 5.5 meters of laminated lacustrine, organo-clastic mud, covered by 1 meter of peaty clay. Four radiocarbon dates provide direct dating control; mapping and dating in the wider surroundings provide additional control. The laminated, organo-clastic facies of the oxbow fill contains a record of nearby fluvial-geomorphological activity, including meander reconfiguration events and passage of rare large floods, recognized as fluctuations in coarseness and amount of allochthonous clastic sediment input. Continuous along-core sampling and measurement of loss-on-ignition (LOI) provided a fast way of expressing the variation in clastic sedimentation influx from the nearby river versus autochthonous organic deposition derived from biogenic production in the lake itself. This low-cost sedimentary proxy data feeds into the age-depth modelling.
The sedimentology-modelled age-depth relation (re)produces the distinct lithological boundaries in the fill as marked changes in sedimentation rate. In particular, the organo-clastic muddy facies subdivides into centennial-scale intervals of relatively faster and slower accumulation. For such intervals, sedimentation rates are produced that deviate by 10 to 20% from those in simpler stepped linear age models. For irregularly laminated muddy intervals of the oxbow fill - from which meaningful sampling for radiocarbon dating is more difficult than from peaty or slowly accumulating organic lake sediments - supplementing sparse radiocarbon sampling with continuous sedimentary proxy data creates more realistic age-depth modelling results.
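The core idea - distributing the time between two radiocarbon-dated levels in proportion to a sedimentary proxy for accumulation rate - can be sketched as follows. This is a simplified illustration, not the authors' algorithm: depths, clastic fractions, and tie-point ages are invented, and the proxy is treated as directly proportional to accumulation rate.

```python
import numpy as np

def proxy_age_depth(depths, clastic_frac, age_top, age_bottom):
    """Assign ages between two dated levels so that intervals with a
    lower clastic fraction (slower influx, per the proxy assumption)
    receive proportionally more time."""
    depths = np.asarray(depths, float)
    dur = 1.0 / np.asarray(clastic_frac, float)   # relative duration per depth step
    steps = np.diff(depths) * 0.5 * (dur[:-1] + dur[1:])  # trapezoid rule
    cum = np.concatenate([[0.0], np.cumsum(steps)])
    # Rescale so the total matches the dated time span
    return age_top + (age_bottom - age_top) * cum / cum[-1]

depths = np.array([0.0, 1.0, 2.0, 3.0])   # m below top of fill (invented)
clastic = np.array([0.2, 0.2, 0.8, 0.8])  # clastic fraction from LOI (invented)
ages = proxy_age_depth(depths, clastic, 2900.0, 4700.0)
print(ages.round(0))
```

The organic-rich upper meter absorbs most of the time span, while the clastic-rich lower meters accumulate quickly, which is the qualitative behavior the abstract describes.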
Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li
2014-01-01
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, there has been considerable work recently developed for consistent estimation of causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments, which has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
Finite element analyses of a linear-accelerator electron gun
NASA Astrophysics Data System (ADS)
Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.
2014-02-01
Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has operated continuously since commissioning without any thermally induced failures in the BEPCII linear accelerator.
Extended Kalman Doppler tracking and model determination for multi-sensor short-range radar
NASA Astrophysics Data System (ADS)
Mittermaier, Thomas J.; Siart, Uwe; Eibert, Thomas F.; Bonerz, Stefan
2016-09-01
A tracking solution for collision avoidance in industrial machine tools based on short-range millimeter-wave radar Doppler observations is presented. At the core of the tracking algorithm there is an Extended Kalman Filter (EKF) that provides dynamic estimation and localization in real-time. The underlying sensor platform consists of several homodyne continuous wave (CW) radar modules. Based on in-phase/quadrature (IQ) processing and down-conversion, they provide only Doppler shift information about the observed target. Localization with Doppler shift estimates is a nonlinear problem that needs to be linearized before the linear KF can be applied. The accuracy of state estimation depends strongly on the introduced linearization errors, the initialization, and the models that represent the true physics as well as the stochastic properties. The important issue of filter consistency is addressed, and an initialization procedure based on data fitting and maximum likelihood estimation is suggested. Models for both measurement and process noise are developed. Tracking results from typical three-dimensional courses of movement at short distances in front of a multi-sensor radar platform are presented.
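To make the linearization step concrete, a toy EKF with a Doppler-only measurement model can be sketched as below. This is an illustrative reconstruction, not the authors' implementation: the sensor layout, noise levels, motion model, and trajectory are all invented, and the problem is reduced to 2-D.

```python
import numpy as np

# EKF tracking a target in 2-D from Doppler-only (radial-velocity)
# measurements of several stationary CW sensors. State x = [px, py, vx, vy].
sensors = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # positions, m (invented)
dt = 0.05
F = np.eye(4); F[0, 2] = F[1, 3] = dt          # constant-velocity model
Q = 1e-4 * np.eye(4)                           # process noise (invented)
R = 1e-3 * np.eye(len(sensors))                # measurement noise (invented)

def h(x):
    """Radial velocity seen by each sensor (the nonlinear measurement)."""
    p, v = x[:2], x[2:]
    d = p - sensors                            # line-of-sight vectors
    r = np.linalg.norm(d, axis=1)
    return (d * v).sum(axis=1) / r

def H_jac(x):
    """Jacobian of h: the linearization the EKF relies on."""
    p, v = x[:2], x[2:]
    d = p - sensors
    r = np.linalg.norm(d, axis=1)
    vr = (d * v).sum(axis=1) / r
    J = np.zeros((len(sensors), 4))
    J[:, :2] = v / r[:, None] - vr[:, None] * d / (r ** 2)[:, None]
    J[:, 2:] = d / r[:, None]
    return J

def ekf_step(x, P, z):
    x_p, P_p = F @ x, F @ P @ F.T + Q          # predict
    Hj = H_jac(x_p)
    S = Hj @ P_p @ Hj.T + R
    K = P_p @ Hj.T @ np.linalg.inv(S)          # Kalman gain
    return x_p + K @ (z - h(x_p)), (np.eye(4) - K @ Hj) @ P_p

# Simulated target approaching the sensor plane
rng = np.random.default_rng(1)
truth = np.array([1.5, 1.5, -0.5, -0.3])
x_est, P = np.array([1.0, 1.0, 0.0, 0.0]), np.eye(4)
for _ in range(200):
    truth[:2] += truth[2:] * dt
    z = h(truth) + rng.normal(scale=0.03, size=len(sensors))
    x_est, P = ekf_step(x_est, P, z)
print(np.round(x_est[2:], 2))                  # velocity estimate
```

The Jacobian is where linearization error enters; filter consistency then hinges on how well it approximates the true Doppler geometry near the current estimate.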
A Lagrangian effective field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vlah, Zvonimir; White, Martin; Aviles, Alejandro
We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrangian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The 'new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all of our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. Furthermore, all the perturbative models fare better than linear theory.
Modeling the vestibulo-ocular reflex of the squirrel monkey during eccentric rotation and roll tilt
NASA Technical Reports Server (NTRS)
Merfeld, D. M.; Paloski, W. H. (Principal Investigator)
1995-01-01
Model simulations of the squirrel monkey vestibulo-ocular reflex (VOR) are presented for two motion paradigms: constant velocity eccentric rotation and roll tilt about a naso-occipital axis. The model represents the implementation of three hypotheses: the "internal model" hypothesis, the "gravito-inertial force (GIF) resolution" hypothesis, and the "compensatory VOR" hypothesis. The internal model hypothesis is based on the idea that the nervous system knows the dynamics of the sensory systems and implements this knowledge as an internal dynamic model. The GIF resolution hypothesis is based on the idea that the nervous system knows that gravity minus linear acceleration equals GIF and implements this knowledge by resolving the otolith measurement of GIF into central estimates of gravity and linear acceleration, such that the central estimate of gravity minus the central estimate of acceleration equals the otolith measurement of GIF. The compensatory VOR hypothesis is based on the idea that the VOR compensates for the central estimates of angular velocity and linear velocity, which sum in a near-linear manner. During constant velocity eccentric rotation, the model correctly predicts that: (1) the peak horizontal response is greater while "facing-motion" than with "back-to-motion"; (2) the axis of eye rotation shifts toward alignment with GIF; and (3) a continuous vertical response, slow phase downward, exists prior to deceleration. The model also correctly predicts that a torsional response during the roll rotation is the only velocity response observed during roll rotations about a naso-occipital axis. The success of this model in predicting the observed experimental responses suggests that the model captures the essence of the complex sensory interactions engendered by eccentric rotation and roll tilt.
NASA Astrophysics Data System (ADS)
Nandy, Atanu; Pal, Biplab; Chakrabarti, Arunava
2016-08-01
It is shown that an entire class of off-diagonally disordered linear lattices composed of two basic building blocks and described within a tight-binding model can be tailored to generate absolutely continuous energy bands. It can be achieved if linear atomic clusters of an appropriate size are side-coupled to a suitable subset of sites in the backbone, and if the nearest-neighbor hopping integrals, in the backbone and in the side-coupled cluster, bear a certain ratio. We work out the precise relationship between the number of atoms in one of the building blocks in the backbone and that in the side attachment. In addition, we also evaluate the definite correlation between the numerical values of the hopping integrals at different subsections of the chain that can convert an otherwise point spectrum (or a singular continuous one for deterministically disordered lattices) with exponentially (or power law) localized eigenfunctions to an absolutely continuous spectrum comprising one or more bands (subbands) populated by extended, totally transparent eigenstates. The results, which are analytically exact, put forward a non-trivial variation of Anderson localization (Anderson P. W., Phys. Rev., 109 (1958) 1492), pointing towards its unusual sensitivity to the numerical values of the system parameters, and go well beyond other related models such as the Random Dimer Model (RDM) (Dunlap D. H. et al., Phys. Rev. Lett., 65 (1990) 88).
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-09-03
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
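The linear substructure exploited by the RB particle filter - a clock state of offset and skew observed through delayed timestamps - reduces to a plain Kalman filter when the random delays are assumed Gaussian. The sketch below shows only that linear part; the paper's DPM delay model and particle filter are not reproduced, and all numeric values are invented.

```python
import numpy as np

T = 1.0                               # sync period, s (invented)
F = np.array([[1.0, T], [0.0, 1.0]])  # offset grows by skew * T each period
Q = np.diag([1e-6, 1e-8])             # process noise (invented)
R = np.array([[1e-4]])                # delay variance (invented)
H = np.array([[1.0, 0.0]])            # we observe offset + random delay

def kf(zs, x0, P0):
    """Kalman filter over the clock state [offset, skew]."""
    x, P = np.array(x0, float), np.array(P0, float)
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # gain
        x = x + K @ (np.atleast_1d(z) - H @ x)        # update
        P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(2)
true_offset, true_skew = 0.5, 2e-3    # s, s/s (invented)
ticks = np.arange(1, 101) * T
obs = true_offset + true_skew * ticks + rng.normal(0, 1e-2, 100)
x, P = kf(obs, [0.0, 0.0], np.eye(2))
print(x.round(4))                     # [current offset estimate, skew estimate]
```

In the RB scheme this linear filter runs conditionally per particle, while the particle filter handles the non-linear/non-Gaussian delay component.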
Terza, Joseph V; Bradford, W David; Dismuke, Clara E
2008-01-01
Objective To investigate potential bias in the use of the conventional linear instrumental variables (IV) method for the estimation of causal effects in inherently nonlinear regression settings. Data Sources Smoking Supplement to the 1979 National Health Interview Survey, National Longitudinal Alcohol Epidemiologic Survey, and simulated data. Study Design Potential bias from the use of the linear IV method in nonlinear models is assessed via simulation studies and real world data analyses in two commonly encountered regression settings: (1) models with a nonnegative outcome (e.g., a count) and a continuous endogenous regressor; and (2) models with a binary outcome and a binary endogenous regressor. Principal Findings The simulation analyses show that substantial bias in the estimation of causal effects can result from applying the conventional IV method in inherently nonlinear regression settings. Moreover, the bias is not attenuated as the sample size increases. This point is further illustrated in the survey data analyses in which IV-based estimates of the relevant causal effects diverge substantially from those obtained with appropriate nonlinear estimation methods. Conclusions We offer this research as a cautionary note to those who would opt for the use of linear specifications in inherently nonlinear settings involving endogeneity. PMID:18546544
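For reference, the conventional linear IV estimator under scrutiny is two-stage least squares (2SLS). A sketch in a purely linear setting, where 2SLS is consistent while OLS is confounded; the data-generating values are invented, and the paper's nonlinear settings (count and binary outcomes) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # unmeasured confounder
x = 0.8 * z + u + rng.normal(size=n)          # endogenous regressor
y = 1.5 * x + 2.0 * u + rng.normal(size=n)    # outcome; true causal slope 1.5

def two_sls(y, x, z):
    """Two-stage least squares with an intercept in each stage."""
    Z = np.column_stack([np.ones_like(z), z])
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first stage: fitted x
    X = np.column_stack([np.ones_like(xhat), xhat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]    # second stage: slope

ols = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), y,
                      rcond=None)[0][1]
print(round(ols, 2), round(two_sls(y, x, z), 2))      # OLS biased, 2SLS near 1.5
```

The paper's warning is that this two-stage recipe loses its consistency guarantee when the second-stage model is inherently nonlinear, and the bias does not vanish with sample size.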
Gagnon, B; Abrahamowicz, M; Xiao, Y; Beauchamp, M-E; MacDonald, N; Kasymjanova, G; Kreisman, H; Small, D
2010-01-01
Background: C-reactive protein (CRP) is gaining credibility as a prognostic factor in different cancers. Cox's proportional hazard (PH) model is usually used to assess prognostic factors. However, this model imposes a priori assumptions, which are rarely tested, that (1) the hazard ratio associated with each prognostic factor remains constant across the follow-up (PH assumption) and (2) the relationship between a continuous predictor and the logarithm of the mortality hazard is linear (linearity assumption). Methods: We tested these two assumptions of the Cox's PH model for CRP, using a flexible statistical model, while adjusting for other known prognostic factors, in a cohort of 269 patients newly diagnosed with non-small cell lung cancer (NSCLC). Results: In the Cox's PH model, high CRP increased the risk of death (HR=1.11 per each doubling of CRP value, 95% CI: 1.03–1.20, P=0.008). However, both the PH assumption (P=0.033) and the linearity assumption (P=0.015) were rejected for CRP, measured at the initiation of chemotherapy, which kept its prognostic value for approximately 18 months. Conclusion: Our analysis shows that flexible modeling provides new insights regarding the value of CRP as a prognostic factor in NSCLC and that Cox's PH model underestimates early risks associated with high CRP. PMID:20234363
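The reported effect size, HR = 1.11 per doubling of CRP, corresponds to a Cox coefficient on log2(CRP). A small sketch of that conversion; the standard error below is back-calculated for illustration, not taken from the paper.

```python
import math

# Entering log2(CRP) in the Cox model makes exp(beta) the hazard ratio
# per unit of log2(CRP), i.e. per doubling of CRP.
beta = math.log(1.11)        # coefficient implied by HR = 1.11 per doubling
se = 0.04                    # illustrative standard error (assumed)
hr = math.exp(beta)
lo, hi = math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se)
print(f"HR per doubling = {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")

# A 4-fold rise in CRP is two doublings, so hazard ratios multiply:
print(f"HR per quadrupling = {math.exp(2 * beta):.2f}")
```

The flexible-model critique in the abstract is precisely that this single number is assumed constant over follow-up (PH) and linear in log2(CRP), both of which were rejected for CRP.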
Binquet, C; Abrahamowicz, M; Mahboubi, A; Jooste, V; Faivre, J; Bonithon-Kopp, C; Quantin, C
2008-12-30
Flexible survival models, which avoid assumptions about hazard proportionality (PH) or linearity of continuous covariate effects, bring the issues of model selection to a new level of complexity. Each 'candidate covariate' requires inter-dependent decisions regarding (i) its inclusion in the model, and representation of its effects on the log hazard as (ii) either constant over time or time-dependent (TD) and, for continuous covariates, (iii) either loglinear or non-loglinear (NL). Moreover, 'optimal' decisions for one covariate depend on the decisions regarding others. Thus, some efficient model-building strategy is necessary. We carried out an empirical study of the impact of the model selection strategy on the estimates obtained in flexible multivariable survival analyses of prognostic factors for mortality in 273 gastric cancer patients. We used 10 different strategies to select alternative multivariable parametric as well as spline-based models, allowing flexible modeling of non-parametric (TD and/or NL) effects. We employed 5-fold cross-validation to compare the predictive ability of alternative models. All flexible models indicated significant non-linearity and changes over time in the effect of age at diagnosis. Conventional 'parametric' models suggested the lack of a period effect, whereas more flexible strategies indicated a significant NL effect. Cross-validation confirmed that flexible models better predicted mortality. The resulting differences in the 'final model' selected by various strategies also had an impact on the risk prediction for individual subjects. Overall, our analyses underline (a) the importance of accounting for significant non-parametric effects of covariates and (b) the need for developing accurate model selection strategies for flexible survival analyses. Copyright 2008 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Liu, Jian; Li, Baohe; Chen, Xiaosong
2018-02-01
The space-time coupled continuous time random walk model is a stochastic framework of anomalous diffusion with many applications in physics, geology and biology. In this manuscript, the time-averaged mean squared displacement and the nonergodic property of a space-time coupled continuous time random walk model are studied; the model is a prototype of the coupled continuous time random walk that has been presented and researched intensively with various methods. The results show that the time-averaged mean squared displacements increase linearly with lag time, which means that ergodicity breaking occurs. In addition, we find that the diffusion coefficient is intrinsically random, showing both aging and enhancement; the analysis indicates that whether aging or enhancement occurs is determined by the competition between the correlation exponent γ and the waiting time's long-tailed index α.
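The time-averaged mean squared displacement analyzed above is a single-trajectory estimator. A minimal sketch of the estimator itself, checked on a deterministic ballistic path; the CTRW simulation from the paper is not reproduced, and the trajectory is invented.

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged MSD of one trajectory sampled at unit intervals:
    mean over t of (x[t + lag] - x[t])**2."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 2)

# For a ballistic trajectory x(t) = v*t, the TAMSD is exactly (v*lag)**2.
# The linear-in-lag growth reported in the paper is the CTRW signature;
# Brownian paths instead give TAMSD ~ 2*D*lag on average.
t = np.arange(1000, dtype=float)
ballistic = 0.5 * t
print([tamsd(ballistic, L) for L in (1, 2, 4)])
```

In the nonergodic regime the interesting quantity is the scatter of this estimator across trajectories: the effective diffusion coefficient stays random even for long measurement times.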
"Analytic continuation" of 𝒩 = 2 minimal model
NASA Astrophysics Data System (ADS)
Sugawara, Yuji
2014-04-01
In this paper we discuss what theory should be identified as the "analytic continuation," N → -N, of the 𝒩 = 2 minimal model with central charge ĉ = 1 - 2/N. We clarify how the elliptic genus of the expected model is written in terms of holomorphic linear combinations of the "modular completions" introduced in [T. Eguchi and Y. Sugawara, JHEP 1103, 107 (2011)] in the SL(2)_{N+2}/U(1) supercoset theory. We further discuss how this model could be interpreted as a kind of model of the SL(2)_{N+2}/U(1) supercoset in the (R̃, R̃) sector, in which only the discrete spectrum appears in the torus partition function and the potential IR divergence due to the non-compactness of the target space is removed. We also briefly discuss possible definitions of the sectors with other spin structures.
Dose Rate Effects in Linear Bipolar Transistors
NASA Technical Reports Server (NTRS)
Johnston, Allan; Swimm, Randall; Harris, R. D.; Thorbourn, Dennis
2011-01-01
Dose rate effects are examined in linear bipolar transistors at high and low dose rates. At high dose rates, approximately 50% of the damage anneals at room temperature, even though these devices exhibit enhanced damage at low dose rate. The unexpected recovery of a significant fraction of the damage after tests at high dose rate requires changes in existing test standards. Tests at low temperature with a one-second radiation pulse width show that damage continues to increase for more than 3000 seconds afterward, consistent with predictions of the CTRW model for oxides with a thickness of 700 nm.
Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong
2016-01-01
Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471
Reply by the Authors to C. K. W. Tam
NASA Technical Reports Server (NTRS)
Morris, Philip J.; Farassat, F.
2002-01-01
The prediction of noise generation and radiation by turbulence has been the subject of continuous research for over fifty years. The essential problem is how to model the noise sources when one's knowledge of the detailed space-time properties of the turbulence is limited. We attempted to provide a comparison of models based on acoustic analogies and recent alternative models. Our goal was to demonstrate that the predictive capabilities of any model are based on the choice of the turbulence property that is modeled as a source of noise. Our general definition of an acoustic analogy is a rearrangement of the equations of motion into the form L(u) = Q, where L is a linear operator that reduces to an acoustic propagation operator outside a region υ; u is a variable that reduces to acoustic pressure (or a related linear acoustic variable) outside υ; and Q is a source term that can be meaningfully estimated without knowing u and tends to zero outside υ.
Slotnick, Scott D; Jeye, Brittany M; Dodson, Chad S
2016-01-01
Is recollection a continuous/graded process or a threshold/all-or-none process? Receiver operating characteristic (ROC) analysis can answer this question as the continuous model and the threshold model predict curved and linear recollection ROCs, respectively. As memory for plurality, an item's previous singular or plural form, is assumed to rely on recollection, the nature of recollection can be investigated by evaluating plurality memory ROCs. The present study consisted of four experiments. During encoding, words (singular or plural) or objects (single/singular or duplicate/plural) were presented. During retrieval, old items with the same plurality or different plurality were presented. For each item, participants made a confidence rating ranging from "very sure old", which was correct for same plurality items, to "very sure new", which was correct for different plurality items. Each plurality memory ROC was the proportion of same versus different plurality items classified as "old" (i.e., hits versus false alarms). Chi-squared analysis revealed that all of the plurality memory ROCs were adequately fit by the continuous unequal variance model, whereas none of the ROCs were adequately fit by the two-high threshold model. These plurality memory ROC results indicate recollection is a continuous process, which complements previous source memory and associative memory ROC findings.
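The ROC construction used in such studies - sweeping a response criterion across confidence levels and accumulating hit and false-alarm rates - can be sketched as follows, with invented ratings in place of the experimental data.

```python
import numpy as np

def roc_points(conf_targets, conf_lures, n_levels=6):
    """Cumulative hit/false-alarm rates from confidence ratings on a
    1..n_levels scale (n_levels = 'very sure old', 1 = 'very sure new').
    Point k treats ratings >= k as an 'old' response."""
    targets, lures = np.asarray(conf_targets), np.asarray(conf_lures)
    hits, fas = [], []
    for k in range(n_levels, 1, -1):            # strictest to most lenient
        hits.append(np.mean(targets >= k))
        fas.append(np.mean(lures >= k))
    return np.array(fas), np.array(hits)

# Illustrative ratings: targets = same-plurality items, lures = different
same = [6, 6, 5, 5, 4, 3, 2, 6, 5, 4]
diff = [1, 2, 2, 3, 1, 4, 2, 1, 3, 2]
fas, hits = roc_points(same, diff)
print(np.round(fas, 2), np.round(hits, 2))
```

The model comparison in the abstract then asks whether these (FA, hit) points are better fit by the curved prediction of the unequal-variance continuous model or by the straight line predicted by a two-high-threshold process.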
Stochastic Stability of Nonlinear Sampled Data Systems with a Jump Linear Controller
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven
2004-01-01
This paper analyzes the stability of a sampled-data system consisting of a deterministic, nonlinear, time-invariant, continuous-time plant and a stochastic, discrete-time, jump linear controller. The jump linear controller models, for example, computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. To analyze stability, appropriate topologies are introduced for the signal spaces of the sampled-data system. With these topologies, the ideal sampling and zero-order-hold operators are shown to be measurable maps. This paper shows that the known equivalence between the stability of a deterministic, linear sampled-data system and its associated discrete-time representation, as well as between a nonlinear sampled-data system and a linearized representation, holds even in a stochastic framework.
Nonlinear versus Ordinary Adaptive Control of Continuous Stirred-Tank Reactor
Dostal, Petr
2015-01-01
Most systems in industry exhibit nonlinear behavior, and controlling such processes with conventional fixed-parameter approaches leads to suboptimal or unstable results. Adaptive control is one way to cope with the nonlinearity of the system. This contribution compares classic adaptive control and its modification with a Wiener system. This configuration divides the nonlinear controller into a dynamic linear part and a static nonlinear part. The dynamic linear part is constructed with the use of polynomial synthesis together with the pole-placement method and spectral factorization. The static nonlinear part uses static analysis of the controlled plant to introduce a mathematical nonlinear description of the relation between the controlled output and the change of the control input. The proposed controller is tested by simulations on a mathematical model of a continuous stirred-tank reactor with cooling in the jacket, a typical nonlinear system. PMID:26346878
A guidance and navigation system for continuous low thrust vehicles. M.S. Thesis
NASA Technical Reports Server (NTRS)
Tse, C. J. C.
1973-01-01
A midcourse guidance and navigation system for continuous low thrust vehicles is described. A set of orbit elements, known as the equinoctial elements, are selected as the state variables. The uncertainties are modelled statistically by random vector and stochastic processes. The motion of the vehicle and the measurements are described by nonlinear stochastic differential and difference equations respectively. A minimum time nominal trajectory is defined and the equation of motion and the measurement equation are linearized about this nominal trajectory. An exponential cost criterion is constructed and a linear feedback guidance law is derived to control the thrusting direction of the engine. Using this guidance law, the vehicle will fly in a trajectory neighboring the nominal trajectory. The extended Kalman filter is used for state estimation. Finally a short mission using this system is simulated. The results indicate that this system is very efficient for short missions.
Influence of salinity and temperature on acute toxicity of cadmium to Mysidopsis bahia molenock
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voyer, R.A.; Modica, G.
1990-01-01
Acute toxicity tests were conducted to compare estimates of toxicity, as modified by salinity and temperature, based on response-surface techniques with those derived using conventional test methods, and to compare the effect of a single episodic exposure to cadmium as a function of salinity with that of continuous exposure. Regression analysis indicated that mortality following continuous 96-hr exposure is related to linear and quadratic effects of salinity and cadmium at 20 C, and to the linear and quadratic effects of cadmium only at 25 C. LC50s decreased with increases in temperature and decreases in salinity. Based on the regression model developed, 96-hr LC50s ranged from 15.5 to 28.0 microgram Cd/L at 10 and 30% salinities, respectively, at 25 C; and from 47 to 85 microgram Cd/L at these salinities at 20 C.
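A response-surface regression of the form described (linear and quadratic effects of salinity and cadmium) can be fit by ordinary least squares; the data below are synthetic, with made-up coefficients, purely to illustrate the model structure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the toxicity experiments: a response as a function
# of salinity S and cadmium concentration C with linear and quadratic terms.
S = rng.uniform(10, 30, 200)
C = rng.uniform(10, 100, 200)
true = 5.0 - 0.8 * S + 0.01 * S**2 + 0.5 * C + 0.002 * C**2
y = true + rng.normal(0, 0.5, 200)

# Design matrix: intercept, linear, and quadratic effects of each factor.
X = np.column_stack([np.ones_like(S), S, S**2, C, C**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
```

From a fitted surface like this, an LC50 at a given salinity is found by solving the quadratic in C for 50% response.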
NASA Astrophysics Data System (ADS)
Rao, Zhiming; He, Zhifang; Du, Jianqiang; Zhang, Xinyou; Ai, Guoping; Zhang, Chunqiang; Wu, Tao
2012-03-01
This paper applied numerical simulation of temperature, using the finite element analysis software ANSYS, to study a model of laser drilling of sticking plaster. A continuous CO2 laser, undergoing either uniform linear motion or uniform circular motion, irradiated the sticking plaster to vaporize it. The sticking plaster material was characterized by its thermal conductivity, heat capacity, and density; at temperatures above 450 °C, the sticking plaster vaporizes. Based on a mathematical model of heat transfer, the process of drilling sticking plaster with laser beams was simulated in ANSYS. The simulation results show the distribution of temperature at the surface of the sticking plaster over the vaporization time for the CO2 laser in uniform linear motion and in uniform circular motion. Under the same conditions, the temperature of the sticking plaster was higher for the CO2 laser in uniform linear motion than in uniform circular motion.
NASA Technical Reports Server (NTRS)
Pfeil, W. H.; De Los Reyes, G.; Bobula, G. A.
1985-01-01
A power turbine governor was designed for a recent-technology turboshaft engine coupled to a modern, articulated rotor system using Linear Quadratic Regulator (LQR) and Kalman filter (KF) techniques. A linear, state-space model of the engine and rotor system was derived for six engine power settings from flight idle to maximum continuous. An integrator was appended to the fuel flow input to reduce the steady-state governor error to zero. Feedback gains were calculated for the system states at each power setting using the LQR technique. Because the main rotor tip speed state is not measurable, a Kalman filter of the rotor was used to estimate this state. The crossover frequency of the system was increased to 10 rad/s, compared with 2 rad/s for a current governor. Initial computer simulations with a nonlinear engine model indicate a significant decrease in power turbine speed variation with the LQR governor compared to a conventional governor.
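The LQR gain computation at one power setting can be sketched by iterating the discrete algebraic Riccati equation; the two-state model below is illustrative, not the actual engine/rotor state-space of the abstract.

```python
import numpy as np

# Illustrative discrete-time plant and LQR weights (not the engine model).
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # control weighting

# Solve the discrete algebraic Riccati equation by fixed-point iteration.
P = np.eye(2)
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop dynamics x_{k+1} = (A - B K) x_k should be stable.
eigs = np.linalg.eigvals(A - B @ K)
```

Gain scheduling, as in the abstract, would repeat this computation for each of the six power-setting models.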
NASA Technical Reports Server (NTRS)
Rudolph, T. H.; Perala, R. A.
1983-01-01
The objective of the work reported here is to develop a methodology by which electromagnetic measurements of inflight lightning strike data can be understood and extended to other aircraft. A linear and time invariant approach based on a combination of Fourier transform and three dimensional finite difference techniques is demonstrated. This approach can obtain the lightning channel current in the absence of the aircraft for given channel characteristic impedance and resistive loading. The model is applied to several measurements from the NASA F106B lightning research program. A non-linear three dimensional finite difference code has also been developed to study the response of the F106B to a lightning leader attachment. This model includes three species air chemistry and fluid continuity equations and can incorporate an experimentally based streamer formulation. Calculated responses are presented for various attachment locations and leader parameters. The results are compared qualitatively with measured inflight data.
NASA Astrophysics Data System (ADS)
Smith, Helen R.; Connolly, Paul J.; Webb, Ann R.; Baran, Anthony J.
2016-07-01
Ice clouds were generated in the Manchester Ice Cloud Chamber (MICC), and the backscattering linear depolarisation ratio, δ, was measured for a variety of habits. To create an assortment of particle morphologies, the humidity in the chamber was varied throughout each experiment, resulting in a range of habits from the pristine to the complex. This technique was repeated at three temperatures: -7 °C, -15 °C and -30 °C, in order to produce both solid and hollow columns, plates, sectored plates and dendrites. A linearly polarised 532 nm continuous wave diode laser was directed through a section of the cloud using a non-polarising 50:50 beam splitter. Measurements of the scattered light were taken at 178°, 179° and 180°, using a Glan-Taylor prism to separate the co- and cross-polarised components. The intensities of these components were measured using two amplified photodetectors and the ratio of the cross- to co-polarised intensities was measured to find the linear depolarisation ratio. In general, it was found that Ray Tracing over-predicts the linear depolarisation ratio. However, by creating more accurate particle models which better represent the internal structure of ice particles, discrepancies between measured and modelled results (based on Ray Tracing) were reduced.
Analytical Description of Ascending Motion of Rockets in the Atmosphere
ERIC Educational Resources Information Center
Rodrigues, H.; de Pinho, M. O.; Portes, D., Jr.; Santiago, A.
2009-01-01
In continuation of a previous work, we present an analytic study of ascending vertical motion of a rocket subjected to a quadratic drag for the case where the mass-variation law is a linear function of time. We discuss the detailed analytical solution of the model differential equations in closed form. Examples of application are presented and…
Testing Linear Temporal Logic Formulae on Finite Execution Traces
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Rosu, Grigore; Norvig, Peter (Technical Monitor)
2001-01-01
We present an algorithm for efficiently testing Linear Temporal Logic (LTL) formulae on finite execution traces. The standard models of LTL are infinite traces, reflecting the behavior of reactive and concurrent systems which conceptually may be continuously alive. In most past applications of LTL, theorem provers and model checkers have been used to formally prove that down-scaled models satisfy such LTL specifications. Our goal is instead to use LTL for up-scaled testing of real software applications. Such tests correspond to analyzing the conformance of finite traces against LTL formulae. We first describe what it means for a finite trace to satisfy an LTL property. We then suggest an optimized algorithm based on transforming LTL formulae. The work is done using the Maude rewriting system, which turns out to provide a perfect notation and an efficient rewriting engine for performing these experiments.
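A naive (exponential-time) evaluator for LTL over finite traces conveys the semantics the abstract refers to; the formula encoding below is our own, and the optimized rewriting-based algorithm of the paper is not reproduced.

```python
# A trace is a list of sets of atomic propositions; formulas are strings
# (atoms) or nested tuples ("op", subformula, ...).

def holds(f, trace, i=0):
    """Does formula f hold on trace at position i (finite-trace semantics)?"""
    if isinstance(f, str):                      # atomic proposition
        return i < len(trace) and f in trace[i]
    op = f[0]
    if op == "not":
        return not holds(f[1], trace, i)
    if op == "and":
        return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == "next":                            # X f: f at the next position
        return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == "eventually":                      # F f: f at some j >= i
        return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "always":                          # G f: f at every j >= i
        return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "until":                           # f U g
        return any(holds(f[2], trace, j) and
                   all(holds(f[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(op)

trace = [{"req"}, {"req"}, {"ack"}]
```

The formula-transformation approach of the paper avoids this evaluator's repeated re-scanning of trace suffixes.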
1983-12-01
…this guide.) The truth model description is identified by the heading "TRUTH MODEL". The matrices of the continuous-time system are listed first.
Characterizing Sleep Structure Using the Hypnogram
Swihart, Bruce J.; Caffo, Brian; Bandeen-Roche, Karen; Punjabi, Naresh M.
2008-01-01
Objectives: Research on the effects of sleep-disordered breathing (SDB) on sleep structure has traditionally been based on composite sleep-stage summaries. The primary objective of this investigation was to demonstrate the utility of log-linear and multistate analysis of the sleep hypnogram in evaluating differences in nocturnal sleep structure in subjects with and without SDB. Methods: A community-based sample of middle-aged and older adults with and without SDB matched on age, sex, race, and body mass index was identified from the Sleep Heart Health Study. Sleep was assessed with home polysomnography and categorized into rapid eye movement (REM) and non-REM (NREM) sleep. Log-linear and multistate survival analysis models were used to quantify the frequency and hazard rates of transitioning, respectively, between wakefulness, NREM sleep, and REM sleep. Results: Whereas composite sleep-stage summaries were similar between the two groups, subjects with SDB had higher frequencies and hazard rates for transitioning between the three states. Specifically, log-linear models showed that subjects with SDB had more wake-to-NREM sleep and NREM sleep-to-wake transitions, compared with subjects without SDB. Multistate survival models revealed that subjects with SDB transitioned more quickly from wake-to-NREM sleep and NREM sleep-to-wake than did subjects without SDB. Conclusions: The description of sleep continuity with log-linear and multistate analysis of the sleep hypnogram suggests that such methods can identify differences in sleep structure that are not evident with conventional sleep-stage summaries. Detailed characterization of nocturnal sleep evolution with event history methods provides additional means for testing hypotheses on how specific conditions impact sleep continuity and whether sleep disruption is associated with adverse health outcomes. Citation: Swihart BJ; Caffo B; Bandeen-Roche K; Punjabi NM. Characterizing sleep structure using the hypnogram. 
J Clin Sleep Med 2008;4(4):349–355. PMID:18763427
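The raw inputs to the log-linear models described above are stage-transition counts; tallying them from a hypnogram is a one-liner over adjacent pairs. The example sequence below is invented for illustration.

```python
from collections import Counter

# Stages: W = wake, N = NREM sleep, R = REM sleep (example sequence only).
hypnogram = list("WWNNNRNNWNNNRRW")

# Count transitions between consecutive epochs.
transitions = Counter(zip(hypnogram, hypnogram[1:]))
# e.g. transitions[("W", "N")] is the number of wake-to-NREM transitions
```

Multistate survival analysis additionally needs the time spent in each stage before a transition, which the same pass over the sequence can accumulate.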
Accuracy of active chirp linearization for broadband frequency modulated continuous wave ladar.
Barber, Zeb W; Babbitt, Wm Randall; Kaylor, Brant; Reibel, Randy R; Roos, Peter A
2010-01-10
As the bandwidth and linearity of frequency modulated continuous wave chirp ladar increase, the resulting range resolution, precision, and accuracy improve correspondingly. An analysis of a very broadband (several THz) and linear (<1 ppm) chirped ladar system based on active chirp linearization is presented. Residual chirp nonlinearity and material dispersion are analyzed as to their effect on the dynamic range, precision, and accuracy of the system. Measurement precision and accuracy approaching the part-per-billion level are predicted.
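The link between chirp bandwidth and range resolution is the standard FMCW relation dR = c / (2B); a quick check shows why a several-THz chirp yields micrometre-scale resolution. The 2 THz figure below is an assumed example within the "several THz" range stated.

```python
# Range resolution of an FMCW ladar: dR = c / (2 B).
c = 299_792_458.0          # speed of light, m/s

def range_resolution(bandwidth_hz):
    return c / (2.0 * bandwidth_hz)

# A 2 THz chirp gives roughly 75 micrometres of range resolution.
dr = range_resolution(2e12)
```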
Finite element analyses of a linear-accelerator electron gun
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iqbal, M., E-mail: muniqbal.chep@pu.edu.pk, E-mail: muniqbal@ihep.ac.cn; Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049; Wasy, A.
Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in a computer-aided three-dimensional interactive application for finite element analyses through the ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning, without any thermally induced failures, for the BEPCII linear accelerator.
Chung, Yeonseung; Noh, Heesang; Honda, Yasushi; Hashizume, Masahiro; Bell, Michelle L; Guo, Yue-Liang Leon; Kim, Ho
2017-05-15
Understanding how the temperature-mortality association worldwide changes over time is crucial to addressing questions of human adaptation under climate change. Previous studies investigated the temporal changes in the association over a few discrete time frames or assumed a linear change. Also, most studies focused on attenuation of heat-related mortality and studied the United States or Europe. This research examined continuous temporal changes (potentially nonlinear) in mortality related to extreme temperature (both heat and cold) for 15 cities in Northeast Asia (1972-2009). We used a generalized linear model with splines to simultaneously capture 2 types of nonlinearity: nonlinear association between temperature and mortality and nonlinear change over time in the association. We combined city-specific results to generate country-specific results using Bayesian hierarchical modeling. Cold-related mortality remained roughly constant over decades and slightly increased in the late 2000s, with a larger increase for cardiorespiratory deaths than for deaths from other causes. Heat-related mortality rates have decreased continuously over time, with more substantial decrease in earlier decades, for older populations and for cardiorespiratory deaths. Our findings suggest that future assessment of health effects of climate change should account for the continuous changes in temperature-related health risk and variations by factors such as age, cause of death, and location. © Crown copyright 2017.
Non-linear 3-D Born shear waveform tomography in Southeast Asia
NASA Astrophysics Data System (ADS)
Panning, Mark P.; Cao, Aimin; Kim, Ahyi; Romanowicz, Barbara A.
2012-07-01
Southeast (SE) Asia is a tectonically complex region surrounded by many active source regions, thus an ideal test bed for developments in seismic tomography. Much recent development in tomography has relied on 3-D sensitivity kernels derived from the first-order Born approximation, but there are potential problems with this approach when applied to waveform data. In this study, we develop a radially anisotropic model of SE Asia using long-period multimode waveforms. We use a theoretical 'cascade' approach, starting with a large-scale Eurasian model developed using 2-D Non-linear Asymptotic Coupling Theory (NACT) sensitivity kernels, and then using a modified Born approximation (nBorn), shown to be more accurate at modelling waveforms, to invert a subset of the data for structure in a subregion (longitude 75°-150° and latitude 0°-45°). In this subregion, the model is parametrized at a spherical spline level 6 (˜200 km). The data set is also inverted using NACT and purely linear 3-D Born kernels. All three final models fit the data well, with just under 80 per cent variance reduction as calculated using the corresponding theory, but the nBorn model shows more detailed structure than the NACT model throughout and has much better resolution at depths greater than 250 km. Based on variance analysis, the purely linear Born kernels do not provide as good a fit to the data due to deviations from linearity for the waveform data set used in this modelling. The nBorn isotropic model shows a stronger fast velocity anomaly beneath the Tibetan Plateau in the depth range of 150-250 km, which disappears at greater depth, consistent with other studies. It also indicates moderate thinning of the high-velocity plate in the middle of Tibet, consistent with a model where Tibet is underplated by Indian lithosphere from the south and Eurasian lithosphere from the north, in contrast to a model with continuous underplating by Indian lithosphere across the entire plateau.
The nBorn anisotropic model detects negative ξ anomalies suggestive of vertical deformation associated with subducted slabs and convergent zones at the Himalayan front and Tien Shan at depths near 150 km.
Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.
2017-09-01
This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
A continuous damage model based on stepwise-stress creep rupture tests
NASA Technical Reports Server (NTRS)
Robinson, D. N.
1985-01-01
A creep damage accumulation model is presented that makes use of the Kachanov damage rate concept, with a provision accounting for damage that results from a variable stress history. This is accomplished through the introduction of an additional term in the Kachanov rate equation that is linear in the stress rate. Specification of the material functions and parameters in the model requires a database comprising two types of tests: (1) standard constant-stress creep rupture tests, and (2) a sequence of two-step creep rupture tests.
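A Kachanov-type rate law with an extra term linear in the stress rate can be integrated by forward Euler, as sketched below; the specific functional form, constants, and two-step stress history are assumptions for illustration, not the paper's calibrated model.

```python
import numpy as np

# Assumed damage rate: d(omega)/dt = A * (sigma / (1 - omega))**r
#                                    + lam * d(sigma)/dt
A, r, lam = 1e-4, 3.0, 1e-3

def integrate_damage(stress, dt):
    """stress: array of stress values over time; returns damage history."""
    omega = np.zeros(len(stress))
    for i in range(1, len(stress)):
        sigma = stress[i - 1]
        dsigma_dt = (stress[i] - stress[i - 1]) / dt
        domega = A * (sigma / (1.0 - omega[i - 1])) ** r + lam * dsigma_dt
        omega[i] = min(omega[i - 1] + domega * dt, 1.0)
    return omega

# Two-step creep history: hold at stress 1.0, then step up to 1.5.
t = np.linspace(0.0, 10.0, 1001)
stress = np.where(t < 5.0, 1.0, 1.5)
omega = integrate_damage(stress, t[1] - t[0])
```

The stress-rate term contributes an extra damage increment at the step, which constant-stress tests alone cannot calibrate, hence the two-step tests in the database.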
Finite State Models of Manned Systems: Validation, Simplification, and Extension.
1979-11-01
model a time set is needed. A time set is some set T together with a binary relation defined on T which linearly orders the set. If "model time" is discrete, so is T; continuous time is represented by a set corresponding to a subset of the non-negative real numbers. In the following discussion, time functions are defined as sequences, over time, of input and output values. The notion of sequences or trajectories is formalized as: A^T = {x | x: T → A}, B^T = {y | y: T → B}.
Fukayama, Osamu; Taniguchi, Noriyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
2006-01-01
We have developed a brain-machine interface (BMI) in the form of a small vehicle, which we call the RatCar. In this system, we implanted wire electrodes in the motor cortices of the rat's brain to continuously record neural signals. We applied a linear model to estimate the locomotion state (e.g., speed and direction) of a rat using a weighted summation model of the neural firing rates. With this information, we then determined the approximate movement of the rat. Although the estimation is still imprecise, the results suggest that our model is able to control the system to some degree. In this paper, we give an overview of our system and describe the methods used, which include continuous neural recording, a spike detection and discrimination algorithm, and a locomotion estimation model that minimizes the squared error of the locomotion speed and changes in direction.
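The weighted-summation decoding idea, with weights fit by least squares to minimize the squared error, can be sketched on synthetic firing rates; the data, unit counts, and weights below are invented, as the real recordings are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in: locomotion speed modelled as a weighted sum of
# neural firing rates, with weights recovered by least squares.
n_samples, n_units = 500, 8
rates = rng.poisson(5.0, size=(n_samples, n_units)).astype(float)
true_w = rng.normal(0.0, 1.0, n_units)
speed = rates @ true_w + rng.normal(0.0, 0.5, n_samples)

# Least-squares fit minimizes the squared error of the decoded speed.
w_hat, *_ = np.linalg.lstsq(rates, speed, rcond=None)
```

The same regression, with change in direction as a second target column, covers the two locomotion-state components the abstract mentions.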
Fuzzy model-based servo and model following control for nonlinear systems.
Ohtake, Hiroshi; Tanaka, Kazuo; Wang, Hua O
2009-12-01
This correspondence presents servo and nonlinear model following controls for a class of nonlinear systems using the Takagi-Sugeno fuzzy model-based control approach. First, the construction method of the augmented fuzzy system for continuous-time nonlinear systems is proposed by differentiating the original nonlinear system. Second, the dynamic fuzzy servo controller and the dynamic fuzzy model following controller, which can make outputs of the nonlinear system converge to target points and to outputs of the reference system, respectively, are introduced. Finally, the servo and model following controller design conditions are given in terms of linear matrix inequalities. Design examples illustrate the utility of this approach.
Heavner, Karyn; Newschaffer, Craig; Hertz-Picciotto, Irva; Bennett, Deborah; Burstyn, Igor
2014-05-01
The Early Autism Risk Longitudinal Investigation (EARLI), an ongoing study of a risk-enriched pregnancy cohort, examines genetic and environmental risk factors for autism spectrum disorders (ASDs). We simulated the potential effects of both measurement error (ME) in exposures and misclassification of ASD-related phenotype (assessed as Autism Observation Scale for Infants (AOSI) scores) on measures of association generated under this study design. We investigated the impact on the power to detect true associations with exposure and the false positive rate (FPR) for a non-causal correlate of exposure (X2, r=0.7) for continuous AOSI score (linear model) versus dichotomised AOSI (logistic regression) when the sample size (n), degree of ME in exposure, and strength of the expected (true) OR (eOR) between exposure and AOSI varied. Exposure was a continuous variable in all linear models and dichotomised at one SD above the mean in logistic models. Simulations reveal complex patterns and suggest that: (1) there was attenuation of associations that increased with eOR and ME; (2) the FPR was considerable under many scenarios; and (3) the FPR has a complex dependence on the eOR, ME and model choice, but was greater for logistic models. The findings will stimulate work examining cost-effective strategies to reduce the impact of ME in realistic sample sizes and affirm the importance for EARLI of investment in biological samples that help precisely quantify a wide range of environmental exposures.
Shear rate analysis of water dynamic in the continuous stirred tank
NASA Astrophysics Data System (ADS)
Tulus; Mardiningsih; Sawaluddin; Sitompul, O. S.; Ihsan, A. K. A. M.
2018-02-01
Analysis of mixing in a continuous stirred tank reactor (CSTR) is an important part of some biogas production processes. This paper is a preliminary numerical study of fluid dynamic phenomena in a continuous stirred tank. The tank is designed as a cylindrical vessel equipped with a stirrer and, in this study, is considered to be filled with water. Stirring is performed at speeds of 10 rpm, 15 rpm, 20 rpm, and 25 rpm. A mathematical model of the stirred tank is derived and solved by the finite element method using CFD software. The results show that the shear rate is highest at the front end portion of the stirrer. The maximum shear rate tends toward stable behaviour after a stirring time of 2 seconds. The relation between the stirring speed and the maximum shear rate takes the form of a linear equation.
Transient rheology of the uppermost mantle beneath the Mojave Desert, California
Pollitz, F.F.
2003-01-01
Geodetic data indicate that the M7.1 Hector Mine, California, earthquake was followed by a brief period (a few weeks) of rapid deformation preceding a prolonged phase of slower deformation. We find that the signal contained in continuous and campaign global positioning system data for 2.5 years after the earthquake may be explained with a transient rheology. Quantitative modeling of these data with allowance for transient (linear biviscous) rheology in the lower crust and upper mantle demonstrates that transient rheology in the upper mantle is dominant, its material properties being typified by two characteristic relaxation times ???0.07 and ???2 years. The inferred mantle rheology is a Jeffreys solid in which the transient and steady-state shear moduli are equal. Consideration of a simpler viscoelastic model with a linear univiscous rheology (2 fewer parameters than a biviscous model) shows that it consistently underpredicts the amplitude of the first ???3 months signal, and allowance for a biviscous rheology is significant at the 99.0% confidence level. Another alternative model - deep postseismic afterslip beneath the coseismic rupture - predicts a vertical velocity pattern opposite to the observed pattern at all time periods considered. Despite its plausibility, the advocated biviscous rheology model is non-unique and should be regarded as a viable alternative to the non-linear mantle rheology model for governing postseismic flow beneath the Mojave Desert. Published by Elsevier B.V.
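The contrast between a biviscous (two relaxation times) and a univiscous rheology can be illustrated with simple exponential relaxation curves; the time constants 0.07 and 2 years come from the abstract, while the amplitudes below are assumptions chosen so both models share the same long-term displacement.

```python
import numpy as np

def biviscous(t, a1=1.0, a2=1.0, tau1=0.07, tau2=2.0):
    """Postseismic displacement with two relaxation times (years)."""
    return a1 * (1.0 - np.exp(-t / tau1)) + a2 * (1.0 - np.exp(-t / tau2))

def univiscous(t, a=2.0, tau=2.0):
    """Single relaxation time, matched to the same asymptote."""
    return a * (1.0 - np.exp(-t / tau))

# At ~3 months (0.25 yr) the univiscous curve underpredicts the rapid early
# signal, as noted in the abstract, even though both curves end up equal.
early = 0.25
```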
A 4-cylinder Stirling engine computer program with dynamic energy equations
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Lorenzo, C. F.
1983-01-01
A computer program for simulating the steady state and transient performance of a four cylinder Stirling engine is presented. The thermodynamic model includes both continuity and energy equations and linear momentum terms (flow resistance). Each working space between the pistons is broken into seven control volumes. Drive dynamics and vehicle load effects are included. The model contains 70 state variables. Also included in the model are piston rod seal leakage effects. The computer program includes a model of a hydrogen supply system, from which hydrogen may be added to the system to accelerate the engine. Flow charts are provided.
NASA Astrophysics Data System (ADS)
Brümmer, C.; Moffat, A. M.; Huth, V.; Augustin, J.; Herbst, M.; Kutsch, W. L.
2016-12-01
Manual carbon dioxide flux measurements with closed chambers at scheduled campaigns are a versatile method to study management effects at small scales in multiple-plot experiments. The eddy covariance technique has the advantage of quasi-continuous measurements but requires large homogeneous areas of a few hectares. To evaluate the uncertainties associated with interpolating from individual campaigns to the whole vegetation period, we installed both techniques at an agricultural site in Northern Germany. The presented comparison covers two cropping seasons, winter oilseed rape in 2012/13 and winter wheat in 2013/14. Modeling half-hourly carbon fluxes from campaigns is commonly performed based on non-linear regressions for the light response and respiration. The daily averages of net CO2 modeled from chamber data deviated from eddy covariance measurements in the range of ± 5 g C m-2 day-1. To understand the observed differences and to disentangle the effects, we performed four additional setups (expert versus default settings of the non-linear regressions based algorithm, purely empirical modeling with artificial neural networks versus non-linear regressions, cross-validating using eddy covariance measurements as campaign fluxes, weekly versus monthly scheduling of campaigns) to model the half-hourly carbon fluxes for the whole vegetation period. The good agreement of the seasonal course of net CO2 at plot and field scale for our agricultural site demonstrates that both techniques are robust and yield consistent results at seasonal time scale even for a managed ecosystem with high temporal dynamics in the fluxes. This allows combining the respective advantages of factorial experiments at plot scale with dense time series data at field scale. Furthermore, the information from the quasi-continuous eddy covariance measurements can be used to derive vegetation proxies to support the interpolation of carbon fluxes in-between the manual chamber campaigns.
Minimal model for a hydrodynamic fingering instability in microroller suspensions
NASA Astrophysics Data System (ADS)
Delmotte, Blaise; Donev, Aleksandar; Driscoll, Michelle; Chaikin, Paul
2017-11-01
We derive a minimal continuum model to investigate the hydrodynamic mechanism behind the fingering instability recently discovered in a suspension of microrollers near a floor [M. Driscoll et al., Nat. Phys. 13, 375 (2017), 10.1038/nphys3970]. Our model, consisting of two continuous lines of rotlets, exhibits a linear instability driven only by hydrodynamic interactions and reproduces the length-scale selection observed in large-scale particle simulations and in experiments. By adjusting only one parameter, the distance between the two lines, our dispersion relation exhibits quantitative agreement with the simulations and qualitative agreement with experimental measurements. Our linear stability analysis indicates that this instability is caused by the combination of the advective and transverse flows generated by the microrollers near a no-slip surface. Our simple model offers an interesting formalism to characterize other hydrodynamic instabilities that have not been well understood, such as size scale selection in suspensions of particles sedimenting adjacent to a wall, or the recently observed formations of traveling phonons in systems of confined driven particles.
NASA Astrophysics Data System (ADS)
Juno, J.; Hakim, A.; TenBarge, J.; Dorland, W.
2015-12-01
We present for the first time results for the turbulence dissipation challenge, with specific focus on the linear wave portion of the challenge, using a variety of continuum kinetic models: hybrid Vlasov-Maxwell, gyrokinetic, and full Vlasov-Maxwell. As one of the goals of the wave problem as it is outlined is to identify how well various models capture linear physics, we compare our results to linear Vlasov and gyrokinetic theory. Preliminary gyrokinetic results match linear theory extremely well due to the geometry of the problem, which eliminates the dominant nonlinearity. With the non-reduced models, we explore how the subdominant nonlinearities manifest and affect the evolution of the turbulence and the energy budget. We also take advantage of employing continuum methods to study the dynamics of the distribution function, with particular emphasis on the full Vlasov results where a basic collision operator has been implemented. As the community prepares for the next stage of the turbulence dissipation challenge, where we hope to do large 3D simulations to inform the next generation of observational missions such as THOR (Turbulence Heating ObserveR), we argue for the consideration of hybrid Vlasov and full Vlasov as candidate models for these critical simulations. With the use of modern numerical algorithms, we demonstrate the competitiveness of our code with traditional particle-in-cell algorithms, with a clear plan for continued improvements and optimizations to further strengthen the code's viability as an option for the next stage of the challenge.
Binder, Harald; Sauerbrei, Willi; Royston, Patrick
2013-06-15
In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R^2 = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.
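Choosing a functional form for a continuous covariate by an information criterion, the kernel of such model-selection comparisons, can be sketched with a toy AIC contest between a linear and a quadratic fit; the data-generating model below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy data with a genuinely curved relationship.
n = 200
x = rng.uniform(-1.0, 1.0, n)
y = 1.0 + 2.0 * x + 1.5 * x**2 + rng.normal(0.0, 0.3, n)

def aic(X, y):
    """AIC for a Gaussian linear model fit by least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1]
    return n * np.log(rss / n) + 2 * k

X_lin = np.column_stack([np.ones(n), x])
X_quad = np.column_stack([np.ones(n), x, x**2])
# The quadratic model should win when the true function is curved.
best = "quadratic" if aic(X_quad, y) < aic(X_lin, y) else "linear"
```

MFP and spline procedures generalize this idea to richer families of candidate functional forms and multiple covariates.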
NASA Astrophysics Data System (ADS)
Bulut, Gökhan
2014-08-01
Stability of parametrically excited torsional vibrations of a shaft system composed of two torsionally elastic shafts interconnected through a Hooke's joint is studied. The shafts are considered to be continuous (distributed-parameter) systems and an approximate discrete model for the torsional vibrations of the shaft system is derived via a finite element scheme. The stability of the solutions of the linearized equations of motion, consisting of a set of Mathieu-Hill type equations, is examined by means of a monodromy matrix method and the results are presented in the form of a Strutt-Ince diagram visualizing the effects of the system parameters on the stability of the shaft system.
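The monodromy-matrix test used above can be illustrated on a single Mathieu equation x″ + (δ + ε cos t)x = 0: integrate over one period from two independent initial conditions, assemble the monodromy matrix, and check |trace| ≤ 2 for bounded solutions. The parameter values are illustrative, not taken from the shaft model:

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy_trace(delta, eps, T=2 * np.pi):
    """Trace of the monodromy matrix of x'' + (delta + eps*cos t) x = 0."""
    def rhs(t, y):
        return [y[1], -(delta + eps * np.cos(t)) * y[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):  # two independent initial conditions
        sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    M = np.column_stack(cols)  # monodromy matrix over one period
    return np.trace(M)

# |trace| <= 2 -> bounded (stable) solutions; > 2 -> parametric instability.
print(abs(monodromy_trace(0.50, 0.1)))  # away from resonance: stable
print(abs(monodromy_trace(0.25, 0.2)))  # inside first instability tongue
```

For the coupled Mathieu-Hill system in the paper, the same procedure applies with 2n independent initial conditions for an n-dimensional second-order system.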
Bentein, Kathleen; Vandenberghe, Christian; Vandenberg, Robert; Stinglhamber, Florence
2005-05-01
Through the use of affective, normative, and continuance commitment in a multivariate 2nd-order factor latent growth modeling approach, the authors observed linear negative trajectories that characterized the changes in individuals across time in both affective and normative commitment. In turn, an individual's intention to quit the organization was characterized by a positive trajectory. A significant association was also found between the change trajectories such that the steeper the decline in an individual's affective and normative commitments across time, the greater the rate of increase in that individual's intention to quit, and, further, the greater the likelihood that the person actually left the organization over the next 9 months. Findings regarding continuance commitment and its components were mixed.
Compositional control of continuously graded anode functional layer
NASA Astrophysics Data System (ADS)
McCoppin, J.; Barney, I.; Mukhopadhyay, S.; Miller, R.; Reitz, T.; Young, D.
2012-10-01
In this work, solid oxide fuel cells (SOFCs) are fabricated with linear-compositionally graded anode functional layers (CGAFL) using a computer-controlled compound aerosol deposition (CCAD) system. Cells with different CGAFL thicknesses (30 μm and 50 μm) are prepared with a continuous compositionally graded interface deposited between the electrolyte and anode support current collecting regions. The compositional profile was characterized using energy dispersive X-ray spectroscopic mapping. An analytical model of the compound aerosol deposition was developed. The model predicted compositional profiles for both samples that closely matched the measured profiles, suggesting that aerosol-based deposition methods are capable of creating functional gradation on length scales suitable for solid oxide fuel cell structures. The electrochemical performances of the two cells are analyzed using electrochemical impedance spectroscopy (EIS).
A finite nonlinear hyper-viscoelastic model for soft biological tissues.
Panda, Satish Kumar; Buist, Martin Lindsay
2018-03-01
Soft tissues exhibit highly nonlinear rate- and time-dependent stress-strain behaviour. Strain and strain-rate dependencies are often modelled using a hyperelastic model and a discrete (standard linear solid) or continuous-spectrum (quasi-linear) viscoelastic model, respectively. However, these models are unable to properly capture the material's characteristics because hyperelastic models are unsuited for time-dependent events, whereas the common viscoelastic models are insufficient for the nonlinear and finite-strain viscoelastic tissue responses. Convolution-integral-based models can demonstrate a finite viscoelastic response; however, their derivations are not consistent with the laws of thermodynamics. The aim of this work was to develop a three-dimensional finite hyper-viscoelastic model for soft tissues using a thermodynamically consistent approach. In addition, a nonlinear function, dependent on strain and strain rate, was adopted to capture the nonlinear variation of viscosity during a loading process. To demonstrate the efficacy and versatility of this approach, the model was used to recreate the experimental results performed on different types of soft tissues. In all the cases, the simulation results were well matched (R² ≥ 0.99) with the experimental data. Copyright © 2018 Elsevier Ltd. All rights reserved.
The rectilinear three-body problem as a basis for studying highly eccentric systems
NASA Astrophysics Data System (ADS)
Voyatzis, G.; Tsiganis, K.; Gaitanas, M.
2018-01-01
The rectilinear elliptic restricted three-body problem (TBP) is the limiting case of the elliptic restricted TBP when the motion of the primaries is described by a Keplerian ellipse with eccentricity e'=1, but the collision of the primaries is assumed to be a non-singular point. The rectilinear model has been proposed as a starting model for studying the dynamics of motion around highly eccentric binary systems. Broucke (AIAA J 7:1003-1009, 1969) explored the rectilinear problem and obtained isolated periodic orbits for mass parameter μ =0.5 (equal masses of the primaries). We found that all orbits obtained by Broucke are linearly unstable. We extend Broucke's computations by using a finer search for symmetric periodic orbits and computing their linear stability. We found a large number of periodic orbits, but only eight of them were found to be linearly stable and are associated with particular mean motion resonances. These stable orbits are used as generating orbits for continuation with respect to μ and e'<1. Also, continuation of periodic solutions with respect to the mass of the small body can be applied by using the general TBP. FLI maps of dynamical stability show that stable periodic orbits are surrounded in phase space with regions of regular orbits indicating that systems of very highly eccentric orbits can be found in stable resonant configurations. As an application we present a stability study for the planetary system HD7449.
CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.
Zahery, Mahsa; Maes, Hermine H; Neale, Michael C
2017-08-01
We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version, 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP; NPSOL, a very popular FORTRAN implementation of the SQP method (Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory); and SLSQP, another SQP implementation available as part of the NLopt collection (Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt), are the three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
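The kind of non-linearly constrained problem an SQP solver handles can be sketched with a toy objective and one nonlinear inequality constraint, here solved with SciPy's SLSQP implementation for convenience (this is a generic sketch, not one of the paper's benchmark factor models):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize x0^2 + x1^2 subject to the nonlinear constraint x0*x1 >= 1.
# By symmetry the optimum is x = (1, 1) (or (-1, -1)) with objective 2.
res = minimize(
    fun=lambda x: x[0] ** 2 + x[1] ** 2,
    x0=[2.0, 2.0],
    method="SLSQP",
    constraints=[{"type": "ineq", "fun": lambda x: x[0] * x[1] - 1.0}],
)
print(res.x, res.fun)  # approximately [1, 1] and 2.0
```

SQP solvers such as CSOLNP, NPSOL and SLSQP all iterate on quadratic subproblems of this form; they differ in line search, Hessian updates and constraint handling, which drives the runtime and memory differences reported above.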
Identification of Linear and Nonlinear Aerodynamic Impulse Responses Using Digital Filter Techniques
NASA Technical Reports Server (NTRS)
Silva, Walter A.
1997-01-01
This paper discusses the mathematical existence and the numerically-correct identification of linear and nonlinear aerodynamic impulse response functions. Differences between continuous-time and discrete-time system theories, which permit the identification and efficient use of these functions, will be detailed. Important input/output definitions and the concept of linear and nonlinear systems with memory will also be discussed. It will be shown that indicial (step or steady) responses (such as Wagner's function), forced harmonic responses (such as Theodorsen's function or those from doublet lattice theory), and responses to random inputs (such as gusts) can all be obtained from an aerodynamic impulse response function. This paper establishes the aerodynamic impulse response function as the most fundamental, and, therefore, the most computationally efficient, aerodynamic function that can be extracted from any given discrete-time, aerodynamic system. The results presented in this paper help to unify the understanding of classical two-dimensional continuous-time theories with modern three-dimensional, discrete-time theories. First, the method is applied to the nonlinear viscous Burgers' equation as an example. Next the method is applied to a three-dimensional aeroelastic model using the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code and then to a two-dimensional model using the CFL3D Navier-Stokes code. Comparisons of accuracy and computational cost savings are presented. Because of its mathematical generality, an important attribute of this methodology is that it is applicable to a wide range of nonlinear, discrete-time problems.
NASA Astrophysics Data System (ADS)
Schaedel, C.; Koven, C.; Celis, G.; Hutchings, J.; Lawrence, D. M.; Mauritz, M.; Pegoraro, E.; Salmon, V. G.; Taylor, M.; Wieder, W. R.; Schuur, E.
2017-12-01
Warming over the Arctic in the last decades has been twice as high as for the rest of the globe and has exposed large amounts of organic carbon to microbial decomposition in permafrost ecosystems. Continued warming and associated changes in soil moisture conditions not only lead to enhanced microbial decomposition from permafrost soil but also enhanced plant carbon uptake. Both processes impact the overall contribution of permafrost carbon dynamics to the global carbon cycle, yet field and modeling studies show large uncertainties in regard to both uptake and release mechanisms. Here, we compare variables associated with ecosystem carbon exchange (GPP: gross primary production; Reco: ecosystem respiration; and NEE: net ecosystem exchange) from eight years of experimental soil warming in moist acidic tundra with the same variables derived from an experimental model (Community Land Model version 4.5: CLM4.5) that simulates the same degree of arctic warming. While soil temperatures and thaw depths exhibited comparable increases with warming between field and model variables, carbon-exchange-related parameters showed divergent patterns. In the field, non-linear responses to experimentally induced permafrost thaw were observed in GPP, Reco, and NEE. Indirect effects of continued soil warming and thaw created changes in soil moisture conditions, causing ground surface subsidence and suppressing ecosystem carbon exchange over time. In contrast, the model predicted linear increases in GPP, Reco, and NEE with every year of warming, turning the ecosystem into a net annual carbon sink. The field experiment revealed the importance of hydrology in carbon flux responses to permafrost thaw, a complexity that the model may fail to predict. Further parameterization of variables that drive GPP, Reco, and NEE in the model will help to inform and refine future model development.
NASA Technical Reports Server (NTRS)
Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.
1981-01-01
Small-signal models are derived for the power stage of the voltage step-up (boost) and the current step-up (buck) converters. The modeling covers operation in both the continuous-mmf mode and the discontinuous-mmf mode. The power stage in the regulated current step-up converter on board the Dynamics Explorer Satellite is used as an example to illustrate the procedures in obtaining the small-signal functions characterizing a regulated converter.
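The state-space-averaged DC model of an ideal boost (voltage step-up) converter in the continuous conduction mode can be sketched as follows; the component values are illustrative and the model omits the parasitics and discontinuous-mmf-mode behaviour treated in the paper:

```python
import numpy as np

# State-space averaged model of an ideal boost converter in continuous
# conduction mode: state x = [iL, vC], input Vin, duty ratio D.
#   L diL/dt = Vin - (1 - D) vC
#   C dvC/dt = (1 - D) iL - vC / R
L, C, R = 100e-6, 470e-6, 10.0   # illustrative component values
Vin, D = 10.0, 0.5

A = np.array([[0.0,         -(1 - D) / L],
              [(1 - D) / C, -1.0 / (R * C)]])
B = np.array([1.0 / L, 0.0])

# DC operating point: 0 = A x + B Vin  ->  x = -A^{-1} B Vin
x_dc = -np.linalg.solve(A, B * Vin)
print(x_dc)  # [iL, vC]; vC equals Vin / (1 - D) = 20 V here
```

Small-signal transfer functions follow by perturbing D about this operating point, which (for the boost) exposes the well-known right-half-plane zero in the duty-cycle-to-output response.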
Finite-Length Line Source Superposition Model (FLLSSM)
NASA Astrophysics Data System (ADS)
1980-03-01
A linearized thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high-level waste or spent fuel assemblies were represented as finite-length line sources in a continuous medium. The combined effects of multiple canisters in a representative storage pattern were established at selected points of interest by superposition of the temperature rises calculated for each canister. The methodology is outlined and the computer code FLLSSM, which performs the required numerical integrations and superposition operations, is described.
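The superposition idea can be sketched in the steady-state limit, where the finite-line-source integral has a closed form; the actual FLLSSM code integrates time-dependent decay-heat sources numerically, and all values below are illustrative:

```python
import numpy as np

def line_source_dT(q_l, k, H, r, z):
    """Steady temperature rise at radius r, height z from a finite line
    source of length H centered at the origin, strength q_l (W/m), in an
    infinite medium of conductivity k; closed form of the point-source
    integral of 1/(4 pi k R) along the line."""
    a, b = z + H / 2.0, z - H / 2.0
    return q_l / (4.0 * np.pi * k) * np.log(
        (a + np.hypot(r, a)) / (b + np.hypot(r, b)))

def superpose(canisters, k, H, r_obs, z_obs):
    """Sum the rises from canisters [(x, y, q_l), ...] at one field point."""
    x0, y0 = r_obs
    total = 0.0
    for (x, y, q_l) in canisters:
        r = np.hypot(x - x0, y - y0)
        total += line_source_dT(q_l, k, H, r, z_obs)
    return total

# Illustrative: 3 x 3 grid of canisters, 10 m spacing, 500 W/m each,
# evaluated at a point midway between four canisters (off every axis).
grid = [(10.0 * i, 10.0 * j, 500.0) for i in range(3) for j in range(3)]
print(superpose(grid, 2.5, 3.0, (5.0, 5.0), 0.0))
```

Superposition is valid here because the conduction model is linearized: the temperature field of each canister adds independently.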
Computer Simulation Studies of the Tearing Mode Instability in a Field-Reversed Ion Layer.
1980-09-15
(Abstract continued) … those of the linear theory. In addition, it has been demonstrated that when … However, all the results obtained so far are very encouraging. Using the energy principle, Sudan and Rosenbluth have shown with a hybrid model that a … found that finite-length layers are stable to tearing modes as a consequence of axial kinetic pressure. Using a hybrid model, in which the ion layer is …
Continuing evaluation of bipolar linear devices for total dose bias dependency and ELDRS effects
NASA Technical Reports Server (NTRS)
McClure, Steven S.; Gorelick, Jerry L.; Yui, Candice; Rax, Bernard G.; Wiedeman, Michael D.
2003-01-01
We present results of continuing efforts to evaluate total dose bias dependency and ELDRS effects in bipolar linear microcircuits. Several devices were evaluated, each exhibiting moderate to significant bias and/or dose rate dependency.
Li, Ruiying; Ma, Wenting; Huang, Ning; Kang, Rui
2017-01-01
A sophisticated node deployment method can efficiently reduce the energy consumption of a Wireless Sensor Network (WSN) and prolong the corresponding network lifetime. Many node-deployment-based lifetime optimization methods have been proposed for WSNs; however, previous studies often neglect the retransmission mechanism and idealize the discrete power control strategy as a continuous one, even though both are widely used in practice and have a large effect on network energy consumption. In this paper, both retransmission and discrete power control are considered together, and a more realistic energy-consumption-based network lifetime model for linear WSNs is provided. Using this model, we then propose a generic deployment-based optimization model that maximizes network lifetime under coverage, connectivity and transmission-rate success constraints. The more accurate lifetime evaluation leads to a longer optimal network lifetime in the realistic situation. To illustrate the effectiveness of our method, both one-tiered and two-tiered, uniformly and non-uniformly distributed linear WSNs are optimized in our case studies, and comparisons between our optimal results and those based on less accurate lifetime evaluation show the advantage of our method when investigating WSN lifetime optimization problems.
Gapless topological order, gravity, and black holes
NASA Astrophysics Data System (ADS)
Rasmussen, Alex; Jermyn, Adam S.
2018-04-01
In this work we demonstrate that linearized gravity exhibits gapless topological order with an extensive ground state degeneracy. This phenomenon is closely related both to the topological order of the pyrochlore U (1 ) spin liquid and to recent work by Hawking and co-workers, who used the soft-photon and graviton theorems to demonstrate that the vacuum in linearized gravity is not unique. We first consider lattice models whose low-energy behavior is described by electromagnetism and linearized gravity, and then argue that the topological nature of these models carries over into the continuum. We demonstrate that these models can have many ground states without making assumptions about the topology of spacetime or about the high-energy nature of the theory, and show that the infinite family of symmetries described by Hawking and co-workers is simply the different topological sectors. We argue that in this context black holes appear as topological defects in the infrared theory, and that this suggests a potential approach to understanding both the firewall paradox and information encoding in gravitational theories. Finally, we use insights from the soft-boson theorems to make connections between deconfined gauge theories with continuous gauge groups and gapless topological order.
Airfoil stall interpreted through linear stability analysis
NASA Astrophysics Data System (ADS)
Busquet, Denis; Juniper, Matthew; Richez, Francois; Marquet, Olivier; Sipp, Denis
2017-11-01
Although airfoil stall has been widely investigated, the origin of this phenomenon, which manifests as a sudden drop of lift, is still not clearly understood. In the specific case of static stall, multiple steady solutions have been identified experimentally and numerically around the stall angle. We are interested here in investigating the stability of these steady solutions so as to first model and then control the dynamics. The study is performed on a 2D helicopter blade airfoil OA209 at low Mach number (M ≈ 0.2) and high Reynolds number (Re ≈ 1.8 × 10⁶). Steady RANS computation using a Spalart-Allmaras model is coupled with continuation methods (pseudo-arclength and Newton's method) to obtain steady states for several angles of incidence. The results show one upper branch (high lift) and one lower branch (low lift) connected by a middle branch, characterizing a hysteresis phenomenon. A linear stability analysis performed around these equilibrium states highlights a mode responsible for stall, which starts with a low-frequency oscillation. A bifurcation scenario is deduced from the behaviour of this mode. To shed light on the nonlinear behaviour, a low-order nonlinear model is created with the same linear stability behaviour as that observed for that airfoil.
Mirror instability near the threshold: Hybrid simulations
NASA Astrophysics Data System (ADS)
Hellinger, P.; Trávníček, P.; Passot, T.; Sulem, P.; Kuznetsov, E. A.; Califano, F.
2007-12-01
Nonlinear behavior of the mirror instability near the threshold is investigated using 1-D hybrid simulations. The simulations demonstrate the presence of an early phase where quasi-linear effects dominate [ Shapiro and Shevchenko, 1964]. The quasi-linear diffusion is however not the main saturation mechanism. A second phase is observed where the mirror mode is linearly stable (the stability is evaluated using the instantaneous ion distribution function) but where the instability nevertheless continues to develop, leading to nonlinear coherent structures in the form of magnetic humps. This regime is well modeled by a nonlinear equation for the magnetic field evolution, derived from a reductive perturbative expansion of the Vlasov-Maxwell equations [ Kuznetsov et al., 2007] with a phenomenological term which represents local variations of the ion Larmor radius. In contrast with previous models where saturation is due to the cooling of a population of trapped particles, the resulting equation correctly reproduces the development of magnetic humps from an initial noise. References Kuznetsov, E., T. Passot and P. L. Sulem (2007), Dynamical model for nonlinear mirror modes near threshold, Phys. Rev. Lett., 98, 235003. Shapiro, V. D., and V. I. Shevchenko (1964), Sov. JETP, 18, 1109.
NASA Technical Reports Server (NTRS)
Park, K. C.; Belvin, W. Keith
1990-01-01
A general form for the first-order representation of the continuous second-order linear structural-dynamics equations is introduced to derive a corresponding form of first-order continuous Kalman filtering equations. Time integration of the resulting equations is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete Kalman filtering equations involving only symmetric sparse N x N solution matrices.
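The first-order embedding that underlies the filtering equations can be sketched for a single degree of freedom: write M q″ + D q′ + K q = 0 as x′ = A x with state x = [q, q′], then discretize exactly with the matrix exponential, as in the filter's time update. The values below are an illustrative undamped oscillator, not the paper's structural model:

```python
import numpy as np
from scipy.linalg import expm

# Second-order structural dynamics M q'' + D q' + K q = 0 embedded in
# first-order form x' = A x with state x = [q, q'].
M = np.array([[1.0]])   # single-DOF illustration: unit mass
D = np.array([[0.0]])   # undamped
K = np.array([[1.0]])   # unit stiffness -> natural frequency 1 rad/s
n = M.shape[0]
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K,        -Minv @ D]])

# Exact discrete-time propagator over a step dt; propagate q(0) = 1,
# q'(0) = 0 over (approximately) one full period of the oscillator.
dt = 0.01
Phi = expm(A * dt)
x = np.array([1.0, 0.0])
for _ in range(round(2 * np.pi / dt)):
    x = Phi @ x
print(x)  # close to [1, 0], since q(t) = cos(t) is periodic with period 2*pi
```

The point of the paper is that this generic first-order route can be reorganized so the filter works with symmetric, sparse N x N matrices instead of dense 2N x 2N ones; the sketch above shows only the standard embedding.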
NASA Technical Reports Server (NTRS)
Rudolph, Terence; Perala, Rodney A.; Easterbrook, Calvin C.; Parker, Steven L.
1986-01-01
Since 1980, NASA has been collecting direct strike lightning data by flying an instrumented F-106B aircraft into thunderstorms. The continuing effort to interpret the measured data is reported here. Both linear and nonlinear finite difference modeling techniques are applied to the problem of lightning triggered by an aircraft in a thunderstorm. Five different aircraft are analyzed to determine the effect of aircraft size and shape on lightning triggering. The effect of lightning channel impedance on aircraft response is investigated. The particle environment in thunderstorms and electric field enhancements by typical ice particles is also investigated.
Estimation for general birth-death processes
Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.
2013-01-01
Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261
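The matrix-method ingredient mentioned above, transition probabilities of a BDP on a finite (here, truncated) state space, can be sketched for a linear BDP and checked against the known mean E[N(t)] = n0·e^{(λ−μ)t}; the truncation level N = 200 is an illustrative choice, not part of the authors' Laplace-convolution method:

```python
import numpy as np
from scipy.linalg import expm

def linear_bdp_mean(n0, lam, mu, t, N=200):
    """Mean of a linear birth-death process (birth rate lam*n, death rate
    mu*n) at time t, via the matrix exponential of a truncated generator."""
    Q = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:
            Q[n, n + 1] = lam * n      # birth n -> n+1
        if n > 0:
            Q[n, n - 1] = mu * n       # death n -> n-1
        Q[n, n] = -Q[n].sum()          # generator rows sum to zero
    P = expm(Q * t)                    # P[i, j] = Pr(N(t) = j | N(0) = i)
    return P[n0] @ np.arange(N + 1)

# Linear BDP theory: E[N(t)] = n0 * exp((lam - mu) * t).
print(linear_bdp_mean(5, 0.5, 0.3, 1.0))  # close to 5 * exp(0.2)
```

For general (nonlinear-rate) BDPs on infinite state spaces this truncation is exactly what the paper's continued-fraction Laplace-transform technique avoids.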
The isolation of spatial patterning modes in a mathematical model of juxtacrine cell signalling.
O'Dea, R D; King, J R
2013-06-01
Juxtacrine signalling mechanisms are known to be crucial in tissue and organ development, leading to spatial patterns in gene expression. We investigate the patterning behaviour of a discrete model of juxtacrine cell signalling due to Owen & Sherratt (1998, Mathematical modelling of juxtacrine cell signalling. Math. Biosci., 153, 125-150) in which ligand molecules, unoccupied receptors and bound ligand-receptor complexes are modelled. Feedback between the ligand and receptor production and the level of bound receptors is incorporated. By isolating two parameters associated with the feedback strength and employing numerical simulation, linear stability and bifurcation analysis, the pattern-forming behaviour of the model is analysed under regimes corresponding to lateral inhibition and induction. Linear analysis of this model fails to capture the patterning behaviour exhibited in numerical simulations. Via bifurcation analysis, we show that since the majority of periodic patterns fold subcritically from the homogeneous steady state, a wide variety of stable patterns exists at a given parameter set, providing an explanation for this failure. The dominant pattern is isolated via numerical simulation. Additionally, by sampling patterns of non-integer wavelength on a discrete mesh, we highlight a disparity between the continuous and discrete representations of signalling mechanisms: in the continuous case, patterns of arbitrary wavelength are possible, while sampling such patterns on a discrete mesh leads to longer wavelength harmonics being selected where the wavelength is rational; in the irrational case, the resulting aperiodic patterns exhibit 'local periodicity', being constructed from distorted stable shorter wavelength patterns. This feature is consistent with experimentally observed patterns, which typically display approximate short-range periodicity with defects.
Remontet, L; Bossard, N; Belot, A; Estève, J
2007-05-10
Relative survival provides a measure of the proportion of patients dying from the disease under study without requiring the knowledge of the cause of death. We propose an overall strategy based on regression models to estimate the relative survival and model the effects of potential prognostic factors. The baseline hazard was modelled until 10 years follow-up using parametric continuous functions. Six models including cubic regression splines were considered and the Akaike Information Criterion was used to select the final model. This approach yielded smooth and reliable estimates of mortality hazard and allowed us to deal with sparse data taking into account all the available information. Splines were also used to model simultaneously non-linear effects of continuous covariates and time-dependent hazard ratios. This led to a graphical representation of the hazard ratio that can be useful for clinical interpretation. Estimates of these models were obtained by likelihood maximization. We showed that these estimates could be also obtained using standard algorithms for Poisson regression. Copyright 2006 John Wiley & Sons, Ltd.
Generation of whistler waves by continuous HF heating of the upper ionosphere
NASA Astrophysics Data System (ADS)
Vartanyan, A.; Milikh, G. M.; Eliasson, B. E.; Sharma, A.; Chang, C.; Parrot, M.; Papadopoulos, K.
2013-12-01
We report observations of VLF waves by the DEMETER satellite overflying the HAARP facility during ionospheric heating experiments. The detected VLF waves were in the range 8-17 kHz and coincided with times of continuous heating. The experiments indicate whistler generation due to conversion of artificial lower hybrid waves to whistlers on small scale field-aligned plasma density striations. The observations are compared with theoretical models, taking into account both linear and nonlinear processes. Implications of the mode conversion technique on VLF generation with subsequent injection into the radiation belts to trigger particle precipitation are discussed.
Periodicity in Age-Resolved Populations
NASA Astrophysics Data System (ADS)
Esipov, Sergei
We discuss the interplay between the non-linear diffusion and age-resolved population dynamics. Depending on the age properties of collective migration the system may exhibit continuous joint expansion of all ages or continuous expansion with age segregation. Between these two obvious limiting regimes there is an interesting window of periodic expansion, which has been previously used by us in modeling bacterial colonies of Proteus mirabilis. In order to test whether the age-dependent collective migration leads to periodicity in other systems we performed a Fourier analysis of historical data on ethnic expansions and found multiple co-existing periods of activity.
A higher order panel method for linearized supersonic flow
NASA Technical Reports Server (NTRS)
Ehlers, F. E.; Epton, M. A.; Johnson, F. T.; Magnus, A. E.; Rubbert, P. E.
1979-01-01
The basic integral equations of linearized supersonic theory for an advanced supersonic panel method are derived. Methods using only linearly varying source strength over each panel or only quadratic doublet strength over each panel gave good agreement with analytic solutions over cones and zero-thickness cambered wings. For three-dimensional bodies and wings of general shape, combined source and doublet panels with interior boundary conditions to eliminate the internal perturbations led to a stable method providing good agreement with experiment. A panel system with all edges contiguous resulted from dividing the basic four-point non-planar panel into eight triangular subpanels, and the doublet strength was made continuous at all edges by a quadratic distribution over each subpanel. Superinclined panels were developed and tested on a simple nacelle and on an airplane model having engine inlets, with excellent results.
Computing the Evans function via solving a linear boundary value ODE
NASA Astrophysics Data System (ADS)
Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn
2015-11-01
Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.
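As a minimal scalar illustration of the Wronskian construction (a sketch, not the authors' boundary-value algorithm), consider the front u0(x) = tanh(x/sqrt(2)) of u_t = u_xx + u - u^3. The eigenvalue problem is v'' = (lambda - 1 + 3 u0^2) v, and an Evans-type function is the Wronskian at x = 0 of the solutions decaying at -inf and +inf; translation invariance forces a zero at lambda = 0.

```python
import numpy as np

def evans(lam, L=12.0, n=4000):
    """Normalised Wronskian of the decaying solutions, shot from -L and +L to 0."""
    mu = np.sqrt(lam + 2.0)  # asymptotic decay rate as |x| -> infinity

    def rhs(x, y):
        q = lam - 1.0 + 3.0 * np.tanh(x / np.sqrt(2.0)) ** 2
        return np.array([y[1], q * y[0]])

    def shoot(x0, y0, h):
        x, y = x0, y0
        for _ in range(n):            # classical fourth-order Runge-Kutta
            k1 = rhs(x, y)
            k2 = rhs(x + h / 2, y + h / 2 * k1)
            k3 = rhs(x + h / 2, y + h / 2 * k2)
            k4 = rhs(x + h, y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            x += h
        return y

    vm = shoot(-L, np.array([1.0, mu]), L / n)    # decays like exp(mu*x) at -inf
    vp = shoot(L, np.array([1.0, -mu]), -L / n)   # decays like exp(-mu*x) at +inf
    w = vm[0] * vp[1] - vm[1] * vp[0]             # Wronskian at x = 0
    return w / (np.hypot(*vm) * np.hypot(*vp))

print(abs(evans(0.0)), abs(evans(0.5)))  # first value ~0: translational eigenvalue
```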
Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models
NASA Astrophysics Data System (ADS)
Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.
2012-04-01
The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies, each of which suffers from its own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise it separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high-level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high-level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However, since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy, is managed by libadjoint [1].
In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation process as a sequence of discrete equations which are assembled and solved. It is the coupling of the respective abstractions employed by libadjoint and the FEniCS project which produces the adjoint model automatically, without further intervention from the model developer. This presentation will demonstrate this new technology through linear and non-linear shallow water test cases. The exceptionally simple model syntax will be highlighted and the correctness of the resulting adjoint simulations will be demonstrated using rigorous convergence tests.
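The rigorous convergence tests mentioned are typically Taylor remainder tests: with a correct adjoint-computed gradient dJ, the remainder |J(m + eps) - J(m) - eps * dJ| should fall at second order in eps. A self-contained sketch on a toy discrete decay model (not the FEniCS/libadjoint machinery itself):

```python
import numpy as np

def forward(m, x0=1.0, dt=0.01, steps=100):
    x = x0
    for _ in range(steps):
        x = x - dt * m * x        # x_{k+1} = (1 - dt*m) * x_k
    return 0.5 * x ** 2           # objective J(m)

def adjoint_gradient(m, x0=1.0, dt=0.01, steps=100):
    xs = [x0]                     # forward sweep, storing the trajectory
    for _ in range(steps):
        xs.append(xs[-1] - dt * m * xs[-1])
    lam = xs[-1]                  # dJ/dx_N for J = x_N^2 / 2
    grad = 0.0
    for k in range(steps - 1, -1, -1):   # reverse (adjoint) sweep
        grad += lam * (-dt * xs[k])      # dx_{k+1}/dm = -dt * x_k
        lam = lam * (1.0 - dt * m)       # dx_{k+1}/dx_k = 1 - dt*m
    return grad

m = 0.7
dJ = adjoint_gradient(m)
eps = np.array([1e-2, 5e-3, 2.5e-3])
remainder = np.abs([forward(m + e) - forward(m) - e * dJ for e in eps])
orders = np.log(remainder[:-1] / remainder[1:]) / np.log(2.0)
print(orders)  # close to 2: the adjoint gradient is consistent
```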
Chen, Han; Wang, Chaolong; Conomos, Matthew P; Stilp, Adrienne M; Li, Zilin; Sofer, Tamar; Szpiro, Adam A; Chen, Wei; Brehm, John M; Celedón, Juan C; Redline, Susan; Papanicolaou, George J; Thornton, Timothy A; Laurie, Cathy C; Rice, Kenneth; Lin, Xihong
2016-04-07
Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM's constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
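A much-simplified sketch of the score-test idea follows: fit the null model once, then compute a cheap chi-square(1) score statistic per variant. This omits the random effect that makes GMMAT a mixed model, and all data are synthetic, so it illustrates only the "fit once, score many" structure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
g_null = rng.binomial(2, 0.3, size=n).astype(float)    # genotype with no effect
g_assoc = rng.binomial(2, 0.3, size=n).astype(float)   # truly associated genotype
eta = -0.5 + 0.4 * X[:, 1] + 0.8 * g_assoc
y = rng.binomial(1, 1 / (1 + np.exp(-eta))).astype(float)

# Newton fit of the null logistic model (covariates only, no genotype)
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)
    beta += np.linalg.solve((X.T * W) @ X, X.T @ (y - mu))
mu = 1 / (1 + np.exp(-X @ beta))
W = mu * (1 - mu)

def score_stat(g):
    """Chi-square(1) score statistic for adding genotype g to the null model."""
    U = g @ (y - mu)
    GWX = (g * W) @ X
    V = (g * W) @ g - GWX @ np.linalg.solve((X.T * W) @ X, GWX)
    return U * U / V

print(score_stat(g_null), score_stat(g_assoc))  # the second is far larger
```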
Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.
Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe
2013-04-01
Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or over long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model, based on finite-difference time-domain solving of the linearized Euler equations, to quantitatively reproduce the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study in which weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to quantitative theoretical or numerical predictions available on the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuation strengths. Hence, this model captures many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.
Yan, Shengjie; Wu, Xiaomei; Wang, Weiqi
2017-09-01
Radiofrequency (RF) energy is often used to create a linear lesion or discrete lesions for blocking accessory conduction pathways in the treatment of atrial fibrillation. Using finite element analysis, we study the ablation effect of the amplitude control ablation mode (AcM) and the bipolar ablation mode (BiM) in creating a linear lesion and discrete lesions in a 5-mm-thick atrial wall; in particular, the characteristics of lesion shape under amplitude control ablation were investigated. Computer models of a multipolar catheter were developed to study the lesion dimensions in atrial walls created through AcM, BiM and specially activated electrode methods in AcM and BiM. To validate the theoretical results of this study, an in vitro experiment with porcine cardiac tissue was performed. At 40 V/20 V root mean squared (RMS) RF voltage for AcM, a continuous and transmural lesion was created by AcM-15s, AcM-5s and AcM-ad-20V ablation in the 5-mm-thick atrial wall. At 20 V RMS for BiM, a continuous but not transmural lesion was created. AcM ablation yielded asymmetrical, discrete lesion shapes, whereas the lesions became more symmetrical and continuous as the period of alternating electrode activation decreased from 15 s to 5 s. Two discrete lesions were created when using AcM, AcM-ad-40V, BiM-ad-20V and BiM-ad-40V. The experimental and computational thermal lesion shapes created in cardiac tissue were in agreement. Amplitude control ablation and bipolar ablation are feasible methods for creating continuous or discrete lesions for pulmonary vein isolation.
Barkagan, Michael; Contreras-Valdes, Fernando M; Leshem, Eran; Buxton, Alfred E; Nakagawa, Hiroshi; Anter, Elad
2018-05-30
PV reconnection is often the result of catheter instability and tissue edema. High-power short-duration (HP-SD) ablation strategies have been shown to improve atrial linear continuity in acute pre-clinical models. This study compares the safety, efficacy and long-term durability of HP-SD ablation with conventional ablation. In 6 swine, 2 ablation lines were performed anterior and posterior to the crista terminalis, in the smooth and trabeculated right atrium, respectively, and the right superior PV was isolated. In 3 swine, ablation was performed using conventional parameters (THERMOCOOL-SMARTTOUCH® SF; 30W/30 sec) and in 3 other swine using HP-SD parameters (QDOT-MICRO™, 90W/4 sec). After 30 days, linear integrity was examined by voltage mapping and pacing, and the heart and surrounding tissues were examined by histopathology. Acute line integrity was achieved with both ablation strategies; however, HP-SD ablation required 80% less RF time compared with conventional ablation (P≤0.01 for all lines). Chronic line integrity was higher with HP-SD ablation: all 3 posterior lines were continuous and transmural, compared to only 1 line created by conventional ablation. In the trabeculated tissue, HP-SD ablation lesions were wider and of similar depth, with 1 of 3 lines being continuous compared to 0 of 3 using conventional ablation. Chronic PVI without stenosis was evident in both groups. There were no steam-pops. Pleural markings were present with both strategies, but parenchymal lung injury was only evident with conventional ablation. The HP-SD ablation strategy results in improved linear continuity, shorter ablation time, and a safety profile comparable to conventional ablation. This article is protected by copyright. All rights reserved.
Tan, Ziwen; Qin, Guoyou; Zhou, Haibo
2016-01-01
Outcome-dependent sampling (ODS) designs have been well recognized as a cost-effective way to enhance study efficiency in both the statistical literature and biomedical and epidemiologic studies. A partially linear additive model (PLAM) is widely applied in real problems because it allows for a flexible specification of the dependence of the response on some covariates in a linear fashion and on other covariates in a nonlinear, non-parametric fashion. Motivated by an epidemiological study investigating the effect of prenatal polychlorinated biphenyls exposure on children's intelligence quotient (IQ) at age 7 years, we propose a PLAM in this article to investigate a more flexible non-parametric inference on the relationships among the response and covariates under the ODS scheme. We propose the estimation method and establish the asymptotic properties of the proposed estimator. Simulation studies are conducted to show the improved efficiency of the proposed ODS estimator for PLAM compared with that from a traditional simple random sampling design with the same sample size. The data from the above-mentioned study are analyzed to illustrate the proposed method. PMID:27006375
Novel design solutions for fishing reel mechanisms
NASA Astrophysics Data System (ADS)
Lovasz, Erwin-Christian; Modler, Karl-Heinz; Neumann, Rudolf; Gruescu, Corina Mihaela; Perju, Dan; Ciupe, Valentin; Maniu, Inocentiu
2015-07-01
Reels currently on the market vary in the type of mechanism used to achieve the winding and unwinding of the line. Designers aim to obtain a linear transmission function by means of a simple, small-sized mechanism. However, present solutions are not satisfactory because of large deviations from linearity of the transmission function and the complexity of the mechanical schema. A novel solution for the reel spool mechanism is proposed, and its kinematic schema and synthesis method are described. The kinematic schema of the chosen mechanism is based on a noncircular gear in series with a scotch-yoke mechanism, in which the yoke is driven by a stud fixed on the driving noncircular gear. The drawbacks of other models regarding effects occurring at the ends of the spool are eliminated by achieving an appropriate transmission function of the spool. The approximation of the linear function, with curved end arches computed to ensure mathematical continuity, is very good. Experimental results on the mechanism model validate the theoretical approach. The developed mechanism solution is recorded under a reel spool mechanism patent.
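The kind of transmission function described, linear over the central stroke with end arches blended for continuity, can be sketched as a piecewise map whose value and first derivative are continuous at the joins. The stroke normalisation and blend width below are illustrative assumptions, not the patented design.

```python
# Map drive fraction s in [0, 1] to spool travel in [0, 1]: quadratic end
# arches joined C1-continuously to a central linear segment.
def spool_position(s, blend=0.2):
    v = 1.0 / (1.0 - blend)           # slope of the central linear segment
    if s < blend:                     # accelerating end arch
        return 0.5 * v * s * s / blend
    if s > 1.0 - blend:               # decelerating end arch
        return 1.0 - 0.5 * v * (1.0 - s) ** 2 / blend
    return 0.5 * v * blend + v * (s - blend)  # central linear segment
```

At s = blend the arch value 0.5*v*blend and slope v match the linear segment exactly, and symmetrically at s = 1 - blend, so the spool reverses smoothly at the stroke ends.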
Hood, Heather M.; Ocasio, Linda R.; Sachs, Matthew S.; Galagan, James E.
2013-01-01
The filamentous fungus Neurospora crassa played a central role in the development of twentieth-century genetics, biochemistry and molecular biology, and continues to serve as a model organism for eukaryotic biology. Here, we have reconstructed a genome-scale model of its metabolism. This model consists of 836 metabolic genes, 257 pathways, 6 cellular compartments, and is supported by extensive manual curation of 491 literature citations. To aid our reconstruction, we developed three optimization-based algorithms, which together comprise Fast Automated Reconstruction of Metabolism (FARM). These algorithms are: LInear MEtabolite Dilution Flux Balance Analysis (limed-FBA), which predicts flux while linearly accounting for metabolite dilution; One-step functional Pruning (OnePrune), which removes blocked reactions with a single compact linear program; and Consistent Reproduction Of growth/no-growth Phenotype (CROP), which reconciles differences between in silico and experimental gene essentiality faster than previous approaches. Against an independent test set of more than 300 essential/non-essential genes that were not used to train the model, the model displays 93% sensitivity and specificity. We also used the model to simulate the biochemical genetics experiments originally performed on Neurospora by comprehensively predicting nutrient rescue of essential genes and synthetic lethal interactions, and we provide detailed pathway-based mechanistic explanations of our predictions. Our model provides a reliable computational framework for the integration and interpretation of ongoing experimental efforts in Neurospora, and we anticipate that our methods will substantially reduce the manual effort required to develop high-quality genome-scale metabolic models for other organisms. PMID:23935467
NASA Astrophysics Data System (ADS)
Shimizu, K.; von Storch, J. S.; Haak, H.; Nakayama, K.; Marotzke, J.
2014-12-01
Surface wind stress is considered to be an important forcing of the seasonal and interannual variability of Atlantic Meridional Overturning Circulation (AMOC) volume transports. A recent study showed that even the linear response to wind forcing captures observed features of the mean seasonal cycle. However, that study did not assess the contribution of wind-driven linear response under realistic conditions against the RAPID/MOCHA array observations or Ocean General Circulation Model (OGCM) simulations, because it applied a linear two-layer model to the Atlantic assuming constant upper-layer thickness and density difference across the interface. Here, we quantify the contribution of wind-driven linear response to the seasonal and interannual variability of AMOC transports by comparing wind-driven linear simulations under realistic continuous stratification against the RAPID observations and OGCM (MPI-OM) simulations with 0.4° resolution (TP04) and 0.1° resolution (STORM). All the linear and MPI-OM simulations capture more than 60% of the variance in the observed mean seasonal cycle of the Upper Mid-Ocean (UMO) and Florida Strait (FS) transports, two components of the upper branch of the AMOC. The linear and TP04 simulations also capture 25-40% of the variance in the observed transport time series between April 2004 and October 2012; the STORM simulation does not capture the observed variance because of the stochastic signal in both datasets. Comparison of half-overlapping 12-month-long segments reveals some periods when the linear and TP04 simulations capture 40-60% of the observed variance, as well as other periods when the simulations capture only 0-20% of the variance. These results show that wind-driven linear response is a major contributor to the seasonal and interannual variability of the UMO and FS transports, and that its contribution varies on interannual timescales, probably due to the variability of stochastic processes.
Combustion-acoustic stability analysis for premixed gas turbine combustors
NASA Technical Reports Server (NTRS)
Darling, Douglas; Radhakrishnan, Krishnan; Oyediran, Ayo; Cowan, Lizabeth
1995-01-01
Lean, prevaporized, premixed combustors are susceptible to combustion-acoustic instabilities. A model was developed to predict eigenvalues of axial modes for combustion-acoustic interactions in a premixed combustor. This work extends previous work by including variable area and detailed chemical kinetics mechanisms, using the code LSENS. Thus the acoustic equations could be integrated through the flame zone. Linear perturbations were made of the continuity, momentum, energy, chemical species, and state equations. The qualitative accuracy of our approach was checked by examining its predictions for various unsteady heat release rate models. Perturbations in fuel flow rate are currently being added to the model.
Final Report: Continuation Study: A Systems Approach to Understanding Post-Traumatic Stress Disorder
2017-01-31
Post-Traumatic Stress Disorder (PTSD) is a complex anxiety disorder affecting many combat-exposed soldiers. HPA-circadian-metabolic pathway, methylation... 17150 remaining probes were located in coding regions. Linear additive models were used to test the interactions among the quantitative loci and...
Rich or poor: Who should pay higher tax rates?
NASA Astrophysics Data System (ADS)
Murilo Castro de Oliveira, Paulo
2017-08-01
A dynamic agent model is introduced in which wealth undergoes an annual random multiplicative process, followed by taxes paid according to a linear wealth-dependent tax rate. If poor agents pay higher tax rates than rich agents, eventually all wealth becomes concentrated in the hands of a single agent. By contrast, if poor agents are subject to lower tax rates, the economic collective process continues forever.
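The two regimes can be sketched with a minimal simulation: yearly multiplicative shocks, a tax rate that depends linearly on relative wealth, and renormalised wealth shares (taxes leave the system). All rates below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def top_share(slope, n_agents=500, years=2000, base_rate=0.3, seed=7):
    """Wealth share of the richest agent after simulating the tax dynamics."""
    rng = np.random.default_rng(seed)
    w = np.ones(n_agents)
    for _ in range(years):
        w *= rng.uniform(0.9, 1.1, size=n_agents)      # annual random return
        rel = w / w.max()                              # 0 = poorest, 1 = richest
        w *= 1.0 - (base_rate + slope * (rel - 0.5))   # linear wealth-dependent tax
        w /= w.sum()                                   # keep shares normalised
    return w.max()

# slope < 0: poor pay higher rates -> condensation onto a single agent
# slope > 0: rich pay higher rates -> wealth stays dispersed
print(top_share(-0.4), top_share(+0.4))
```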
ERIC Educational Resources Information Center
Miller, Jason W.; Stromeyer, William R.; Schwieterman, Matthew A.
2013-01-01
The past decade has witnessed renewed interest in the use of the Johnson-Neyman (J-N) technique for calculating the regions of significance for the simple slope of a focal predictor on an outcome variable across the range of a second, continuous independent variable. Although tools have been developed to apply this technique to probe 2- and 3-way…
Statistics of Macroturbulence from Flow Equations
NASA Astrophysics Data System (ADS)
Marston, Brad; Iadecola, Thomas; Qi, Wanming
2012-02-01
Probability distribution functions of stochastically-driven and frictionally-damped fluids are governed by a linear framework that resembles quantum many-body theory. Besides the Fokker-Planck approach, there is a closely related Hopf functional method [Ookie Ma and J. B. Marston, J. Stat. Phys. Th. Exp. P10007 (2005)]; in both formalisms, zero modes of linear operators describe the stationary non-equilibrium statistics. To access the statistics, we generalize the flow equation approach [F. Wegner, Ann. Phys. 3, 77 (1994)] (also known as the method of continuous unitary transformations [S. D. Glazek and K. G. Wilson, Phys. Rev. D 48, 5863 (1993); Phys. Rev. D 49, 4214 (1994)]) to find the zero mode. We test the approach using a prototypical model of geophysical and astrophysical flows on a rotating sphere that spontaneously organizes into a coherent jet. Good agreement is found with low-order equal-time statistics accumulated by direct numerical simulation, the traditional method. Different choices for the generators of the continuous transformations, and for closure approximations of the operator algebra, are discussed.
Disformal invariance of continuous media with linear equation of state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Celoria, Marco; Matarrese, Sabino; Pilo, Luigi, E-mail: marco.celoria@gssi.infn.it, E-mail: sabino.matarrese@pd.infn.it, E-mail: luigi.pilo@aquila.infn.it
We show that the effective theory describing single-component continuous media with a linear and constant equation of state of the form p = wρ is invariant under a one-parameter family of continuous disformal transformations. In the special case w = 1/3 (ultrarelativistic gas), this family reduces to conformal transformations. As examples, perfect fluids, irrotational dust (mimetic matter) and homogeneous and isotropic solids are discussed.
Nichols, J.M.; Moniz, L.; Nichols, J.D.; Pecora, L.M.; Cooch, E.
2005-01-01
A number of important questions in ecology involve the possibility of interactions or "coupling" among potential components of ecological systems. The basic question of whether two components are coupled (exhibit dynamical interdependence) is relevant to investigations of movement of animals over space, population regulation, food webs and trophic interactions, and is also useful in the design of monitoring programs. For example, in spatially extended systems, coupling among populations in different locations implies the existence of redundant information in the system and the possibility of exploiting this redundancy in the development of spatial sampling designs. One approach to the identification of coupling involves study of the purported mechanisms linking system components. Another approach is based on time series of two potential components of the same system and, in previous ecological work, has relied on linear cross-correlation analysis. Here we present two different attractor-based approaches, continuity and mutual prediction, for determining the degree to which two population time series (e.g., at different spatial locations) are coupled. Both approaches are demonstrated on a one-dimensional predator-prey model system exhibiting complex dynamics. Of particular interest is the spatial asymmetry introduced into the model as a linearly declining resource for the prey over the domain of the spatial coordinate. Results from these approaches are then compared to the more standard cross-correlation analysis. In contrast to cross-correlation, both continuity and mutual prediction are clearly able to discern the asymmetry in the flow of information through this system.
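The spirit of attractor-based mutual prediction (a sketch, not the authors' exact algorithm) can be shown with nearest-neighbour cross-prediction: delay-embed series x and test how well the neighbours of each embedded point predict the concurrent values of series y. A driven series should be cross-predictable from its driver, while an independent chaotic series should not be. The maps and coupling below are illustrative assumptions.

```python
import numpy as np

def cross_prediction_error(x, y, dim=2, k=5):
    """Mean squared error of predicting y from k nearest neighbours in x's embedding."""
    emb = np.column_stack([x[i:len(x) - dim + i + 1] for i in range(dim)])
    ytrim = y[dim - 1:]
    err = []
    for i in range(len(emb)):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                     # exclude the point itself
        nbrs = np.argsort(d)[:k]          # k nearest neighbours in x-space
        err.append((ytrim[nbrs].mean() - ytrim[i]) ** 2)
    return float(np.mean(err))

n = 600
x = np.empty(n); y_c = np.empty(n); y_i = np.empty(n)
x[0], y_c[0], y_i[0] = 0.4, 0.3, 0.6
for t in range(n - 1):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])                       # chaotic driver
    y_c[t + 1] = (0.2 * 3.7 * y_c[t] * (1 - y_c[t])
                  + 0.8 * 3.9 * x[t] * (1 - x[t]))           # strongly driven by x
    y_i[t + 1] = 3.7 * y_i[t] * (1 - y_i[t])                 # independent dynamics

print(cross_prediction_error(x, y_c), cross_prediction_error(x, y_i))
```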
A generic multi-flex-body dynamics, controls simulation tool for space station
NASA Technical Reports Server (NTRS)
London, Ken W.; Lee, John F.; Singh, Ramen P.; Schubele, Buddy
1991-01-01
An order-(n) multi-flex-body Space Station simulation tool is introduced. The flex multibody modeling is generic enough to model all phases of Space Station, from build-up through the Assembly Complete configuration and beyond. Multibody subsystems such as the Mobile Servicing System (MSS) undergoing a prescribed translation and rotation are also allowed. The software includes aerodynamic, gravity gradient, and magnetic field models. User-defined controllers can be discrete or continuous. Extensive preprocessing of 'body by body' NASTRAN flex data is built in. A significant aspect is the integrated controls design capability, which includes model reduction and analytic linearization.
An exploration of viscosity models in the realm of kinetic theory of liquids originated fluids
NASA Astrophysics Data System (ADS)
Hussain, Azad; Ghafoor, Saadia; Malik, M. Y.; Jamal, Sarmad
The primary aim of this article is to study the flow of an Eyring-Powell fluid model past a penetrable plate. To capture the effects of variable viscosity on the fluid model, the continuity, momentum and energy equations are formulated, with viscosity taken as a function of temperature. To understand the phenomenon, the Reynolds and Vogel models of variable viscosity are incorporated. The highly non-linear partial differential equations are transformed into ordinary differential equations with the help of suitable similarity transformations. The numerical solution of the problem is presented, and graphs are plotted to visualize the behavior of the pertinent parameters on the velocity and temperature profiles.
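The two temperature-dependent viscosity laws named above are commonly written, in dimensionless form, as an exponential decay (Reynolds) and an Arrhenius-like expression (Vogel); the coefficients below are illustrative assumptions, not the article's fitted values.

```python
import numpy as np

# Reynolds model: mu(theta) = mu0 * exp(-M * theta)
# Vogel model:    mu(theta) = mu0 * exp(A / (B + theta))
theta = np.linspace(0.0, 1.0, 5)                 # dimensionless temperature

mu_reynolds = 1.0 * np.exp(-2.0 * theta)         # assumed M = 2
mu_vogel = 0.5 * np.exp(1.5 / (0.8 + theta))     # assumed A = 1.5, B = 0.8

print(mu_reynolds)  # both profiles decrease monotonically with temperature
print(mu_vogel)
```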
Robust linear parameter-varying control of blood pressure using vasoactive drugs
NASA Astrophysics Data System (ADS)
Luspay, Tamas; Grigoriadis, Karolos
2015-10-01
Resuscitation of emergency care patients requires fast restoration of blood pressure to a target value to achieve hemodynamic stability and vital organ perfusion. A robust control design methodology is presented in this paper for regulating the blood pressure of hypotensive patients by means of the closed-loop administration of vasoactive drugs. To this end, a dynamic first-order delay model is utilised to describe the vasoactive drug response with varying parameters that represent intra-patient and inter-patient variability. The proposed framework consists of two components: first, an online model parameter estimation is carried out using a multiple-model extended Kalman-filter. Second, the estimated model parameters are used for continuously scheduling a robust linear parameter-varying (LPV) controller. The closed-loop behaviour is characterised by parameter-varying dynamic weights designed to regulate the mean arterial pressure to a target value. Experimental data of blood pressure response of anesthetised pigs to phenylephrine injection are used for validating the LPV blood pressure models. Simulation studies are provided to validate the online model estimation and the LPV blood pressure control using phenylephrine drug injection models representing patients showing sensitive, nominal and insensitive response to the drug.
Modeling Battery Behavior on Sensory Operations for Context-Aware Smartphone Sensing
Yurur, Ozgur; Liu, Chi Harold; Moreno, Wilfrido
2015-01-01
Energy consumption is a major concern in context-aware smartphone sensing. This paper first studies mobile device-based battery modeling, which adopts the kinetic battery model (KiBaM), under the scope of battery non-linearities with respect to variant loads. Second, this paper models the energy consumption behavior of accelerometers analytically and then provides extensive simulation results and a smartphone application to examine the proposed sensor model. Third, a Markov reward process is integrated to create energy consumption profiles, linking with sensory operations and their effects on battery non-linearity. Energy consumption profiles consist of different pairs of duty cycles and sampling frequencies during sensory operations. Furthermore, the total energy cost by each profile is represented by an accumulated reward in this process. Finally, three different methods are proposed on the evolution of the reward process, to present the linkage between different usage patterns on the accelerometer sensor through a smartphone application and the battery behavior. By doing this, this paper aims at achieving a fine efficiency in power consumption caused by sensory operations, while maintaining the accuracy of smartphone applications based on sensor usages. More importantly, this study intends that modeling the battery non-linearities together with investigating the effects of different usage patterns in sensory operations in terms of the power consumption and the battery discharge may lead to discovering optimal energy reduction strategies to extend the battery lifetime and help a continual improvement in context-aware mobile services. PMID:26016916
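The kinetic battery model (KiBaM) adopted above splits the charge into an available well and a bound well, with flow between them proportional to the difference in their heads; this is what produces the non-linear recovery effect under intermittent load. A minimal sketch with illustrative parameter values and load profile (not the paper's calibration):

```python
def kibam(load, c=0.5, k=0.05, y_total=100.0, dt=0.1):
    """Forward-Euler KiBaM simulation; returns the available-charge trace."""
    y1, y2 = c * y_total, (1.0 - c) * y_total   # available / bound charge
    trace = []
    for i in load:
        h1, h2 = y1 / c, y2 / (1.0 - c)         # heads of the two wells
        flow = k * (h2 - h1)                    # bound -> available charge flow
        y1 += dt * (-i + flow)                  # load drains the available well
        y2 += dt * (-flow)
        trace.append(y1)
    return trace

# heavy load for 200 steps, then rest: the available charge partially
# recovers, reproducing the recovery effect of real batteries
trace = kibam([2.0] * 200 + [0.0] * 200)
```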
Local numerical modelling of ultrasonic guided waves in linear and nonlinear media
NASA Astrophysics Data System (ADS)
Packo, Pawel; Radecki, Rafal; Kijanka, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz; Leamy, Michael J.
2017-04-01
Nonlinear ultrasonic techniques provide improved damage sensitivity compared to linear approaches. The combination of attractive properties of guided waves, such as Lamb waves, with unique features of higher harmonic generation provides great potential for characterization of incipient damage, particularly in plate-like structures. Nonlinear ultrasonic structural health monitoring techniques use interrogation signals at frequencies other than the excitation frequency to detect changes in structural integrity. Signal processing techniques used in non-destructive evaluation are frequently supported by modeling and numerical simulations in order to facilitate problem solution. This paper discusses known and newly-developed local computational strategies for simulating elastic waves, and attempts characterization of their numerical properties in the context of linear and nonlinear media. A hybrid numerical approach combining advantages of the Local Interaction Simulation Approach (LISA) and Cellular Automata for Elastodynamics (CAFE) is proposed for unique treatment of arbitrary strain-stress relations. The iteration equations of the method are derived directly from physical principles employing stress and displacement continuity, leading to an accurate description of the propagation in arbitrarily complex media. Numerical analysis of guided wave propagation, based on the newly developed hybrid approach, is presented and discussed in the paper for linear and nonlinear media. Comparisons to Finite Elements (FE) are also discussed.
Computation of non-monotonic Lyapunov functions for continuous-time systems
NASA Astrophysics Data System (ADS)
Li, Huijuan; Liu, AnPing
2017-09-01
In this paper, we propose two methods to compute non-monotonic Lyapunov functions for continuous-time systems which are asymptotically stable. The first method is to solve a linear optimization problem on a compact and bounded set. The proposed linear programming based algorithm delivers a CPA1
Forcing, feedbacks and climate sensitivity in CMIP5 coupled atmosphere-ocean climate models
Andrews, Timothy; Gregory, Jonathan M.; Webb, Mark J.; ...
2012-05-15
We quantify forcing and feedbacks across available CMIP5 coupled atmosphere-ocean general circulation models (AOGCMs) by analysing simulations forced by an abrupt quadrupling of atmospheric carbon dioxide concentration. This is the first application of the linear forcing-feedback regression analysis of Gregory et al. (2004) to an ensemble of AOGCMs. The range of equilibrium climate sensitivity is 2.1–4.7 K. Differences in cloud feedbacks continue to be important contributors to this range. Some models show small deviations from a linear dependence of top-of-atmosphere radiative fluxes on global surface temperature change. We show that this phenomenon largely arises from shortwave cloud radiative effects over the ocean and is consistent with independent estimates of forcing using fixed sea-surface temperature methods. Moreover, we suggest that future research should focus more on understanding transient climate change, including any time-scale dependence of the forcing and/or feedback, rather than on the equilibrium response to large instantaneous forcing.
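The Gregory-style forcing-feedback regression described above is simple to sketch: annual-mean top-of-atmosphere net flux N is regressed on global-mean warming dT, so the intercept estimates the effective forcing F and the slope the (negative) feedback parameter. The numbers below are synthetic, chosen only to illustrate the method.

```python
import numpy as np

# Sketch of the Gregory et al. (2004) regression N = F - lam*dT after
# an abrupt forcing. true_F and true_lam are illustrative, not CMIP5 values.
rng = np.random.default_rng(0)
true_F, true_lam = 7.4, 1.1             # W/m^2 and W/m^2/K (synthetic)
dT = np.linspace(0.5, 6.0, 150)         # global-mean surface warming
N = true_F - true_lam * dT + rng.normal(0, 0.3, dT.size)

slope, intercept = np.polyfit(dT, N, 1)
F_est = intercept                       # effective radiative forcing
lam_est = -slope                        # climate feedback parameter
ecs_4x = F_est / lam_est                # equilibrium warming (N -> 0)
```

The x-intercept of the fitted line (where N returns to zero) gives the equilibrium response to the imposed forcing.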
Genetic Network Inference: From Co-Expression Clustering to Reverse Engineering
NASA Technical Reports Server (NTRS)
Dhaeseleer, Patrik; Liang, Shoudan; Somogyi, Roland
2000-01-01
Advances in molecular biological, analytical, and computational technologies are enabling us to systematically investigate the complex molecular processes underlying biological systems. In particular, using high-throughput gene expression assays, we are able to measure the output of the gene regulatory network. We aim here to review data mining and modeling approaches for conceptualizing and unraveling the functional relationships implicit in these datasets. Clustering of co-expression profiles allows us to infer shared regulatory inputs and functional pathways. We discuss various aspects of clustering, ranging from distance measures to clustering algorithms and multiple-cluster memberships. More advanced analysis aims to infer causal connections between genes directly, i.e., who is regulating whom and how. We discuss several approaches to the problem of reverse engineering of genetic networks, from discrete Boolean networks, to continuous linear and non-linear models. We conclude that the combination of predictive modeling with systematic experimental verification will be required to gain a deeper insight into living organisms, therapeutic targeting, and bioengineering.
Islam, Naz Niamul; Hannan, M A; Shareef, Hussain; Mohamed, Azah; Salam, M A
2014-01-01
A power oscillation damping controller is designed in a linearized model with heuristic optimization techniques. Selection of the objective function is crucial for damping controller design by optimization algorithms. In this research, a comparative analysis has been carried out to evaluate the effectiveness of popular objective functions used in power system oscillation damping. A two-stage lead-lag damping controller by means of power system stabilizers is optimized using the differential search algorithm for different objective functions. Linearized model simulations are performed to compare the dominant modes' performance, and the nonlinear model is then used to evaluate the damping performance over power system oscillations. All simulations are conducted in a two-area four-machine power system to provide a detailed analysis. The results show that the multiobjective D-shaped function is effective in terms of moving unstable and lightly damped electromechanical modes into the stable region. Thus, the D-shaped function ultimately improves overall system damping and concurrently enhances power system reliability.
2011-01-01
Background Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Methods Simulation data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Results Linear model ANOVA with Monte Carlo simulation and mixed models with correlated error terms with NDNQI examples showed no substantial differences on statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. Conclusions The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects. PMID:21854614
Hou, Qingjiang; Mahnken, Jonathan D; Gajewski, Byron J; Dunton, Nancy
2011-08-19
Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Simulation data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Linear model ANOVA with Monte Carlo simulation and mixed models with correlated error terms with NDNQI examples showed no substantial differences on statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects.
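The Box-Cox transformation parameter discussed in the two records above is typically chosen by maximizing a profile log-likelihood. A minimal sketch with synthetic right-skewed data follows; the grid range and data-generating distribution are illustrative choices, not taken from the study.

```python
import numpy as np

# Sketch of Box-Cox parameter selection by grid search over the
# profile log-likelihood. Data are synthetic (log-normal), so the
# estimated lambda should land near 0, i.e. close to a log transform.
def boxcox_transform(y, lam):
    return np.log(y) if abs(lam) < 1e-12 else (y**lam - 1.0) / lam

def boxcox_loglik(y, lam):
    """Profile log-likelihood of lambda for positive data y."""
    z = boxcox_transform(y, lam)
    n = y.size
    return -0.5 * n * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()

def best_lambda(y, grid=np.linspace(-2, 2, 401)):
    ll = [boxcox_loglik(y, lam) for lam in grid]
    return grid[int(np.argmax(ll))]

rng = np.random.default_rng(1)
y = rng.lognormal(mean=1.0, sigma=0.6, size=2000)
lam_hat = best_lambda(y)
```

ANOVA would then be run on `boxcox_transform(y, lam_hat)`; the study's point is that estimating `lam_hat` with or without the structural factors in the model made little practical difference.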
A parametric LQ approach to multiobjective control system design
NASA Technical Reports Server (NTRS)
Kyr, Douglas E.; Buchner, Marc
1988-01-01
The synthesis of a constant parameter output feedback control law of constrained structure is set in a multiple objective linear quadratic regulator (MOLQR) framework. The use of intuitive objective functions, such as model-following ability and closed-loop trajectory sensitivity, allows multiple objective decision making techniques, such as the surrogate worth tradeoff method, to be applied. For the continuous-time deterministic problem with an infinite time horizon, dynamic compensators as well as static output feedback controllers can be synthesized using a descent Anderson-Moore algorithm modified to impose linear equality constraints on the feedback gains by moving in feasible directions. Results of three different examples are presented, including a unique reformulation of the sensitivity reduction problem.
A Continuous Threshold Expectile Model.
Zhang, Feipeng; Li, Qunhua
2017-12-01
Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties for all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, so it is computationally more efficient than likelihood-ratio type tests. Simulation studies show that the proposed estimators and test have desirable finite sample performance in both homoscedastic and heteroscedastic cases. The application of the proposed method to a Dutch growth dataset and a baseball pitcher salary dataset reveals interesting insights. The proposed method is implemented in the R package cthreshER.
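The grid-search idea in the abstract can be sketched with a bent-line model fit by asymmetric least squares (the standard iteratively reweighted computation for expectiles). Everything below is illustrative: the data are synthetic, and this is a simplified stand-in for the paper's estimator, not the cthreshER implementation.

```python
import numpy as np

# Sketch: continuous-threshold model y = b0 + b1*x + b2*(x - t)_+ fit
# at expectile level tau by asymmetric least squares, with the
# threshold t chosen by grid search. Synthetic data; true t = 6.
def expectile_fit(X, y, tau, iters=50):
    """Asymmetric least squares via iteratively reweighted LS."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        w = np.where(r > 0, tau, 1 - tau)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

def asym_loss(r, tau):
    return np.sum(np.where(r > 0, tau, 1 - tau) * r**2)

def fit_threshold(x, y, tau=0.7, n_grid=60):
    grid = np.linspace(x.min() + 0.5, x.max() - 0.5, n_grid)
    best = None
    for t in grid:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
        beta = expectile_fit(X, y, tau)
        loss = asym_loss(y - X @ beta, tau)
        if best is None or loss < best[0]:
            best = (loss, t, beta)
    return best[1], best[2]

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 500)
y = 1.0 + 0.5 * x + 2.0 * np.maximum(x - 6.0, 0) + rng.normal(0, 0.4, x.size)
t_hat, beta_hat = fit_threshold(x, y, tau=0.7)
```

The slope below the threshold is `beta_hat[1]` and the change in slope above it is `beta_hat[2]`, so the fitted line is continuous at `t_hat` by construction.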
Vertical Distribution of Radiation Stress for Non-linear Shoaling Waves
NASA Astrophysics Data System (ADS)
Webb, B. M.; Slinn, D. N.
2004-12-01
The flux of momentum directed shoreward by an incident wave field, commonly referred to as the radiation stress, plays a significant role in nearshore circulation and, therefore, has a profound impact on the transport of pollutants, biota, and sediment in nearshore systems. Having received much attention since the seminal work of Longuet-Higgins and Stewart in the early 1960s, use of the radiation stress concept continues to be refined and evidence of its utility is widespread in literature pertaining to coastal and ocean science. A number of investigations, both numerical and analytical in nature, have used the concept of the radiation stress to derive appropriate forcing mechanisms that initiate cross-shore and longshore circulation, but typically in a depth-averaged sense due to a lack of information concerning the vertical distribution of the wave stresses. While depth-averaged nearshore circulation models are still widely used today, advancements in technology have permitted the adaptation of three-dimensional (3D) modeling techniques to study flow properties of complex nearshore circulation systems. It has been shown that the resulting circulation in these 3D models is very sensitive to the vertical distribution of the nearshore forcing, which has often been implemented as either depth-uniform or depth-linear. Recently, analytical expressions describing the vertical structure of radiation stress components have appeared in the literature (see Mellor, 2003; Xia et al., 2004) but do not fully describe the magnitude and structure in the region bound by the trough and crest of non-linear, propagating waves. Utilizing a three-dimensional, non-linear, numerical model that resolves the time-dependent free surface, we present mean flow properties resulting from a simulation of Visser's (1984, 1991) laboratory experiment on uniform longshore currents.
More specifically, we provide information regarding the vertical distribution of radiation stress components (Sxx and Sxy) resulting from obliquely incident, non-linear shoaling waves. Vertical profiles of the radiation stress components predicted by the numerical model are compared with published analytical solutions, expressions given by linear theory, and observations from an investigation employing second-order cnoidal wave theory.
Towards enhancing and delaying disturbances in free shear flows
NASA Technical Reports Server (NTRS)
Criminale, W. O.; Jackson, T. L.; Lasseigne, D. G.
1994-01-01
The family of shear flows comprising the jet, wake, and the mixing layer are subjected to perturbations in an inviscid incompressible fluid. By modeling the basic mean flows as parallel with piecewise linear variations for the velocities, complete and general solutions to the linearized equations of motion can be obtained in closed form as functions of all space variables and time when posed as an initial value problem. The results show that there is a continuous as well as the discrete spectrum that is more familiar in stability theory and therefore there can be both algebraic and exponential growth of disturbances in time. These results make it feasible to consider control of such flows. To this end, the possibility of enhancing the disturbances in the mixing layer and delaying the onset in the jet and wake is investigated. It is found that growth of perturbations can be delayed to a considerable degree for the jet and the wake but, by comparison, cannot be enhanced in the mixing layer. By using moving coordinates, a method for demonstrating the predominant early and long time behavior of disturbances in these flows is given for continuous velocity profiles. It is shown that the early time transients are always algebraic whereas the asymptotic limit is that of an exponential normal mode. Numerical treatment of the new governing equations confirms the conclusions reached by use of the piecewise linear basic models. Although not pursued here, feedback mechanisms designed for control of the flow could be devised using the results of this work.
NASA Astrophysics Data System (ADS)
Salkin, Louis; Courbin, Laurent; Panizza, Pascal
2012-09-01
Combining experiments and theory, we investigate the break-up dynamics of deformable objects, such as drops and bubbles, against a linear micro-obstacle. Our experiments bring the role of the viscosity contrast Δη between dispersed and continuous phases to light: the evolution of the critical capillary number to break a drop as a function of its size is either nonmonotonic (Δη>0) or monotonic (Δη≤0). In the case of positive viscosity contrasts, experiments and modeling reveal the existence of an unexpected critical object size for which the critical capillary number for breakup is minimum. Using simple physical arguments, we derive a model that well describes observations, provides diagrams mapping the four hydrodynamic regimes identified experimentally, and demonstrates that the critical size originating from confinement solely depends on geometrical parameters of the obstacle.
Krasikova, Dina V; Le, Huy; Bachura, Eric
2018-06-01
To address a long-standing concern regarding a gap between organizational science and practice, scholars called for more intuitive and meaningful ways of communicating research results to users of academic research. In this article, we develop a common language effect size index (CLβ) that can help translate research results to practice. We demonstrate how CLβ can be computed and used to interpret the effects of continuous and categorical predictors in multiple linear regression models. We also elaborate on how the proposed CLβ index is computed and used to interpret interactions and nonlinear effects in regression models. In addition, we test the robustness of the proposed index to violations of normality and provide means for computing standard errors and constructing confidence intervals around its estimates. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
The application of LQR synthesis techniques to the turboshaft engine control problem
NASA Technical Reports Server (NTRS)
Pfeil, W. H.; De Los Reyes, G.; Bobula, G. A.
1984-01-01
A power turbine governor was designed for a recent-technology turboshaft engine coupled to a modern, articulated rotor system using Linear Quadratic Regulator (LQR) and Kalman Filter (KF) techniques. A linear, state-space model of the engine and rotor system was derived for six engine power settings from flight idle to maximum continuous. An integrator was appended to the fuel flow input to reduce the steady-state governor error to zero. Feedback gains were calculated for the system states at each power setting using the LQR technique. The main rotor tip speed state is not measurable, so a Kalman Filter of the rotor was used to estimate this state. The crossover frequency of the system was increased to 10 rad/s, compared to 2 rad/s for a current governor. Initial computer simulations with a nonlinear engine model indicate a significant decrease in power turbine speed variation with the LQR governor compared to a conventional governor.
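The LQR gain computation at the heart of the design above can be sketched compactly. The paper's design is continuous-time with a Kalman filter for the unmeasured state; for a self-contained example this sketch uses the discrete-time Riccati recursion on a hypothetical double-integrator plant, not the engine/rotor model.

```python
import numpy as np

# Sketch of LQR state-feedback gain computation via the discrete-time
# Riccati recursion iterated to a fixed point. The plant (a discretized
# double integrator) and weights Q, R are illustrative.
def dlqr(A, B, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity states
B = np.array([[0.0], [dt]])             # force input
Q = np.eye(2)                           # state weighting
R = np.array([[1.0]])                   # control effort weighting
K = dlqr(A, B, Q, R)

# The closed loop A - B K should be stable: spectral radius < 1.
rho = np.max(np.abs(np.linalg.eigvals(A - B @ K)))
```

In a design like the paper's, the unmeasurable state entering `u = -K x` would be replaced by its Kalman filter estimate.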
Sensitivity of regional forest carbon budgets to continuous and stochastic climate change pressures
NASA Astrophysics Data System (ADS)
Sulman, B. N.; Desai, A. R.; Scheller, R. M.
2010-12-01
Climate change is expected to impact forest-atmosphere carbon budgets through three processes: (1) increased disturbance rates, including fires, mortality due to pest outbreaks, and severe storms; (2) changes in patterns of inter-annual variability, related to increased incidence of severe droughts and defoliating insect outbreaks; and (3) continuous changes in forest productivity and respiration, related to increases in mean temperature, growing season length, and CO2 fertilization. While the importance of these climate change effects in future regional carbon budgets has been established, quantitative characterization of the relative sensitivity of forested landscapes to these different types of pressures is needed. We present a model- and data-based approach to understanding the sensitivity of forested landscapes to climate change pressures. Eddy-covariance and biometric measurements from forests in the northern United States were used to constrain two forest landscape models. The first, LandNEP, uses a prescribed functional form for the evolution of net ecosystem productivity (NEP) over the age of a forested grid cell, which is reset following a disturbance event. This model was used for investigating the basic statistical properties of a simple landscape’s responses to climate change pressures. The second model, LANDIS-II, includes different tree species and models forest biomass accumulation and succession, allowing us to investigate the effects of more complex forest processes such as species change and carbon pool accumulation on landscape responses to climate change effects. We tested the sensitivity of forested landscapes to these three types of climate change pressures by applying ensemble perturbations of random disturbance rates, distribution functions of inter-annual variability, and maximum potential carbon uptake rates, in the two models.
We find that landscape-scale net carbon exchange responds linearly to continuous changes in potential carbon uptake and inter-annual variability, while responses to stochastic changes are non-linear and become more important at shorter mean disturbance intervals. These results provide insight on how to better parameterize coupled carbon-climate models to more realistically simulate feedbacks between forests and the atmosphere.
Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part I: numerical scheme
NASA Astrophysics Data System (ADS)
Rõõm, Rein; Männik, Aarne; Luhamaa, Andres
2007-10-01
A two-time-level, semi-implicit, semi-Lagrangian (SISL) scheme is applied to the non-hydrostatic pressure coordinate equations, constituting a modified Miller-Pearce-White model, in a hybrid-coordinate framework. The neutral background is subtracted in the initial continuous dynamics, yielding modified equations for geopotential, temperature and logarithmic surface pressure fluctuation. Implicit Lagrangian marching formulae for a single time-step are derived. A disclosure scheme is presented, which results in an uncoupled diagnostic system, consisting of a 3-D Poisson equation for omega velocity and a 2-D Helmholtz equation for the logarithmic pressure fluctuation. The model is discretized to create a non-hydrostatic extension to the numerical weather prediction model HIRLAM. The discretization schemes, trajectory computation algorithms and interpolation routines, as well as the physical parametrization package, are maintained from the parent hydrostatic HIRLAM. For the stability investigation, the derived SISL model is linearized with respect to the initial, thermally non-equilibrium resting state. Explicit residuals of the linear model prove to be sensitive to the relative departures of temperature and static stability from the reference state. Based on the stability study, the semi-implicit term in the vertical momentum equation is replaced by an implicit term, which increases the stability of the model.
Volitional and Real-Time Control Cursor Based on Eye Movement Decoding Using a Linear Decoding Model
Zhang, Cheng
2016-01-01
The aim of this study is to build a linear decoding model that reveals the relationship between movement information and EOG (electrooculogram) data in order to control a cursor online and continuously with blinks and eye pursuit movements. First, a blink detection method is proposed to extract voluntary single-blink or double-blink information from the EOG. Then, a linear decoding model of the time series is developed to predict the position of gaze, with the model parameters calibrated by the RLS (Recursive Least Squares) algorithm; decoding accuracy is assessed through a cross-validation procedure. Additionally, subsection processing, increment control, and online calibration are presented to realize online control. Finally, the technology is applied to the volitional, online control of a cursor to hit multiple predefined targets. Experimental results show that the blink detection algorithm performs well, with a voluntary blink detection rate over 95%. By combining the merits of blinks and smooth pursuit movements, the movement information of the eyes can be decoded in good conformity, with an average Pearson correlation coefficient of up to 0.9592, and all signal-to-noise ratios greater than 0. The novel system allows people to successfully and economically control a cursor online with a hit rate of 98%. PMID:28058044
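The RLS calibration of a linear decoding model mentioned above can be sketched as follows. The three-dimensional feature vector and the "true" weights are hypothetical stand-ins for EOG features and a gaze coordinate; they exist only to check that the recursion recovers a known mapping.

```python
import numpy as np

# Sketch of Recursive Least Squares (RLS) for calibrating a linear
# decoding model w: feature vector x -> predicted coordinate d.
# Synthetic data; true_w is an illustrative ground truth.
class RLS:
    def __init__(self, dim, lam=0.999, delta=100.0):
        self.w = np.zeros(dim)
        self.P = delta * np.eye(dim)   # inverse-correlation estimate
        self.lam = lam                 # forgetting factor

    def update(self, x, d):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)   # gain vector
        e = d - self.w @ x             # a priori prediction error
        self.w += k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

rng = np.random.default_rng(3)
true_w = np.array([0.8, -0.3, 1.5])
rls = RLS(dim=3)
for _ in range(2000):
    x = rng.normal(size=3)
    d = true_w @ x + rng.normal(0, 0.05)   # noisy target coordinate
    rls.update(x, d)
```

The forgetting factor below 1 lets the estimate track slow drift, which is what makes online recalibration during use feasible.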
Quantifying the sensitivity of post-glacial sea level change to laterally varying viscosity
NASA Astrophysics Data System (ADS)
Crawford, Ophelia; Al-Attar, David; Tromp, Jeroen; Mitrovica, Jerry X.; Austermann, Jacqueline; Lau, Harriet C. P.
2018-05-01
We present a method for calculating the derivatives of measurements of glacial isostatic adjustment (GIA) with respect to the viscosity structure of the Earth and the ice sheet history. These derivatives, or kernels, quantify the linearised sensitivity of measurements to the underlying model parameters. The adjoint method is used to enable efficient calculation of theoretically exact sensitivity kernels within laterally heterogeneous earth models that can have a range of linear or non-linear viscoelastic rheologies. We first present a new approach to calculate GIA in the time domain, which, in contrast to the more usual formulation in the Laplace domain, is well suited to continuously varying earth models and to the use of the adjoint method. Benchmarking results show excellent agreement between our formulation and previous methods. We illustrate the potential applications of the kernels calculated in this way through a range of numerical calculations relative to a spherically symmetric background model. The complex spatial patterns of the sensitivities are not intuitive, and this is the first time that such effects are quantified in an efficient and accurate manner.
Metrics for linear kinematic features in sea ice
NASA Astrophysics Data System (ADS)
Levy, G.; Coon, M.; Sulsky, D.
2006-12-01
The treatment of leads as cracks or discontinuities (see Coon et al. presentation) requires some shift in the procedure of evaluation and comparison of lead-resolving models and their validation against observations. Common metrics used to evaluate ice model skill are by and large an adaptation of a least-squares "metric" adopted from operational numerical weather prediction data assimilation systems and are most appropriate for continuous fields and Eulerian systems where the observations and predictions are commensurate. However, this class of metrics suffers from some flaws in areas of sharp gradients and discontinuities (e.g., leads) and when Lagrangian treatments are more natural. After a brief review of these metrics and their performance in areas of sharp gradients, we present two new metrics specifically designed to measure model accuracy in representing linear features (e.g., leads). The indices developed circumvent the requirement that both the observations and model variables be commensurate (i.e., measured with the same units) by considering the frequencies of the features of interest/importance. We illustrate the metrics by scoring several hypothetical "simulated" discontinuity fields against the lead interpreted from RGPS observations.
Missing Data in Clinical Studies: Issues and Methods
Ibrahim, Joseph G.; Chu, Haitao; Chen, Ming-Hui
2012-01-01
Missing data are a prevailing problem in any type of data analyses. A participant variable is considered missing if the value of the variable (outcome or covariate) for the participant is not observed. In this article, various issues in analyzing studies with missing data are discussed. Particularly, we focus on missing response and/or covariate data for studies with discrete, continuous, or time-to-event end points in which generalized linear models, models for longitudinal data such as generalized linear mixed effects models, or Cox regression models are used. We discuss various classifications of missing data that may arise in a study and demonstrate in several situations that the commonly used method of throwing out all participants with any missing data may lead to incorrect results and conclusions. The methods described are applied to data from an Eastern Cooperative Oncology Group phase II clinical trial of liver cancer and a phase III clinical trial of advanced non–small-cell lung cancer. Although the main area of application discussed here is cancer, the issues and methods we discuss apply to any type of study. PMID:22649133
Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza
2017-09-27
Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environment variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally-demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma ( PPARG ) gene associated with diabetes.
A Linear-Elasticity Solver for Higher-Order Space-Time Mesh Deformation
NASA Technical Reports Server (NTRS)
Diosady, Laslo T.; Murman, Scott M.
2018-01-01
A linear-elasticity approach is presented for the generation of meshes appropriate for a higher-order space-time discontinuous finite-element method. The equations of linear-elasticity are discretized using a higher-order, spatially-continuous, finite-element method. Given an initial finite-element mesh, and a specified boundary displacement, we solve for the mesh displacements to obtain a higher-order curvilinear mesh. Alternatively, for moving-domain problems we use the linear-elasticity approach to solve for a temporally discontinuous mesh velocity on each time-slab and recover a continuous mesh deformation by integrating the velocity. The applicability of this methodology is presented for several benchmark test cases.
Quantum error correction of continuous-variable states against Gaussian noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ralph, T. C.
2011-08-15
We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.
Common pitfalls in statistical analysis: Linear regression analysis
Aggarwal, Rakesh; Ranganathan, Priya
2017-01-01
In a previous article in this series, we explained correlation analysis which describes the strength of relationship between two continuous variables. In this article, we deal with linear regression analysis which predicts the value of one continuous variable from another. We also discuss the assumptions and pitfalls associated with this analysis. PMID:28447022
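A minimal sketch of the simple linear regression the article describes, predicting one continuous variable from another, together with the basic residual checks its assumptions imply. The data are synthetic and the coefficients illustrative.

```python
import numpy as np

# Sketch: ordinary least squares fit of y on x, plus two elementary
# residual diagnostics. Synthetic data with known slope/intercept.
rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.7 * x + rng.normal(0, 0.5, x.size)

slope, intercept = np.polyfit(x, y, 1)   # degree-1 polynomial = line
resid = y - (intercept + slope * x)

# By construction of OLS, residuals have mean ~0 and are uncorrelated
# with the predictor; systematic structure in resid vs x would signal
# a violated linearity assumption.
```

Plotting `resid` against `x` (rather than relying on R-squared alone) is the standard check for the linearity and homoscedasticity pitfalls the article discusses.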
ERIC Educational Resources Information Center
Osborne, Jason W.
2013-01-01
Osborne and Waters (2002) focused on checking some of the assumptions of multiple linear regression. In a critique of that paper, Williams, Grajales, and Kurkiewicz correctly clarify that regression models estimated using ordinary least squares require the assumption of normally distributed errors, but not the assumption of normally distributed…
Latent degradation indicators estimation and prediction: A Monte Carlo approach
NASA Astrophysics Data System (ADS)
Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin
2011-01-01
Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, and the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. the indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators by using indirect indicators. However, existing state space models to estimate direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. During this application, the new state space model shows a better fit than the state space model with linear and Gaussian assumptions.
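The Monte Carlo estimation of a latent degradation indicator from an indirect one can be sketched with a bootstrap particle filter, which is one standard way to drop the linearity and Gaussianity assumptions. The degradation model (monotone gamma-increment growth) and noise levels below are illustrative, not those of the paper's gearbox application.

```python
import numpy as np

# Sketch: bootstrap particle filter for a nonlinear, non-Gaussian
# state-space degradation model. The latent indicator (e.g. crack
# depth) grows monotonically; only a noisy indirect indicator is
# observed. All parameters are illustrative.
rng = np.random.default_rng(5)

T, n_particles = 100, 2000
# True latent degradation: monotone growth with positive random shocks
x_true = np.cumsum(rng.gamma(shape=2.0, scale=0.05, size=T))
y_obs = x_true + rng.normal(0, 0.3, T)        # indirect noisy indicator

particles = np.zeros(n_particles)
estimates = []
for t in range(T):
    # propagate: same monotone growth model as the truth
    particles = particles + rng.gamma(2.0, 0.05, n_particles)
    # weight by the observation likelihood (Gaussian measurement noise)
    w = np.exp(-0.5 * ((y_obs[t] - particles) / 0.3) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))   # posterior-mean estimate
    # resample (multinomial; systematic resampling is a common refinement)
    idx = rng.choice(n_particles, n_particles, p=w)
    particles = particles[idx]
```

Remaining-useful-life prediction would then propagate the final particle cloud forward until each particle crosses a failure threshold.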
Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France
2016-10-01
Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although generally efficient, FO cannot be applied to complex non-linear models and is difficult to use in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation with respect to the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance, with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to adaptive Gaussian quadrature, Laplace approximation, and FO. Our method is available in the R package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
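The core identity the abstract relies on is that the FIM is the expectation of the outer product of the score, which can be estimated by Monte Carlo. A minimal sketch for a fixed-effects nonlinear model (plain MC over simulated datasets standing in for the paper's MCMC-over-random-effects step; the mono-exponential model and all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative fixed-effects nonlinear model (not a full NLMEM):
# y_j = A*exp(-k*t_j) + eps, eps ~ N(0, sigma^2)
A, k, sigma = 10.0, 0.3, 0.5
theta = np.array([A, k])
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])

def model(th):
    return th[0] * np.exp(-th[1] * t)

def loglik(th, y):
    res = y - model(th)
    return -0.5 * np.sum(res**2) / sigma**2

def score(th, y, h=1e-5):
    """Gradient of the log-likelihood by central finite differences."""
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2); d[i] = h
        g[i] = (loglik(th + d, y) - loglik(th - d, y)) / (2 * h)
    return g

# Monte Carlo FIM: average the outer product of the score over datasets
M = 10000
S = np.zeros((2, 2))
for _ in range(M):
    y = model(theta) + rng.normal(0.0, sigma, t.size)
    s = score(theta, y)
    S += np.outer(s, s)
fim_mc = S / M

# Closed-form FIM for this Gaussian model, for comparison: J^T J / sigma^2
J = np.column_stack([np.exp(-k * t), -A * t * np.exp(-k * t)])
fim_exact = J.T @ J / sigma**2

# Expected relative standard errors (%) from the exact FIM
rse = 100 * np.sqrt(np.diag(np.linalg.inv(fim_exact))) / theta
```

In the NLMEM setting of the paper, the inner score would itself require integrating the random effects out of the likelihood, which is where the MCMC sampling (via Stan) comes in.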
Oxidation kinetics of a continuous carbon phase in a nonreactive matrix
NASA Technical Reports Server (NTRS)
Eckel, Andrew J.; Cawley, James D.; Parthasarathy, Triplicane A.
1995-01-01
Analytical solutions for, and experimental results on, the oxidation kinetics of carbon in a pore are presented. Reaction rate, reaction sequence, oxidant partial pressure, total system pressure, pore/crack dimensions, and temperature are analyzed with respect to their influence on an overall linear-parabolic rate relationship. Direct measurement of carbon recession is performed using two microcomposite model systems oxidized in the temperature range of 700 to 1200 °C and for times up to 35 h. Experimental results are evaluated using the derived analytical solutions. Implications for the oxidation resistance of continuous-fiber-reinforced ceramic-matrix composites containing a carbon constituent are discussed.
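The linear-parabolic rate relationship mentioned in the abstract has the same Deal-Grove-style form found in oxidation kinetics generally, x²/kp + x/kl = t, which is linear (reaction-controlled) at short times and parabolic (diffusion-controlled) at long times. A sketch with hypothetical rate constants (not the paper's fitted values):

```python
import numpy as np

# Linear-parabolic recession law: x^2/kp + x/kl = t
# =>  x(t) = (kp/(2*kl)) * (sqrt(1 + 4*kl**2*t/kp) - 1)
kl, kp = 2.0, 0.5   # linear (um/h) and parabolic (um^2/h) constants, illustrative

def recession(t):
    return (kp / (2 * kl)) * (np.sqrt(1 + 4 * kl**2 * t / kp) - 1)

# Short times are reaction-controlled: x ~ kl * t
# Long times are diffusion-controlled: x ~ sqrt(kp * t)
short_ratio = recession(1e-4) / (kl * 1e-4)       # -> ~1
long_ratio = recession(1e4) / np.sqrt(kp * 1e4)   # -> ~1
times = np.array([1.0, 10.0, 35.0])
depths = recession(times)
```

The two limiting ratios confirm that the closed-form solution interpolates between the linear and parabolic regimes.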
Stochastic search, optimization and regression with energy applications
NASA Astrophysics Data System (ADS)
Hannah, Lauren A.
Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values, which depend on the selected portfolio, to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet process mixtures of generalized linear models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples of when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response.
We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate the DP-GLM on several data sets, comparing it to modern methods of nonparametric regression such as CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general-purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and, more generally, that nonparametric estimation methods provide good solutions to otherwise intractable problems.
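The kernel-weighted, state-dependent stochastic search idea can be sketched on a single-product newsvendor: weight historical (state, demand) observations by their kernel distance to the query state, then order the weighted demand quantile at the critical ratio. The data-generating process and all numbers are illustrative, not from the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic history of (state, demand) pairs: demand shifts with an
# exogenous state variable
n = 5000
states = rng.uniform(0.0, 1.0, n)
demands = 50.0 + 100.0 * states + rng.normal(0.0, 10.0, n)

price, cost = 5.0, 3.0
critical_ratio = (price - cost) / price   # optimal service level = 0.4

def kernel_newsvendor(query, bandwidth=0.05):
    """Order quantity for a query state: kernel-weighted demand quantile."""
    w = np.exp(-0.5 * ((states - query) / bandwidth) ** 2)
    w /= w.sum()
    order = np.argsort(demands)
    cum = np.cumsum(w[order])
    return demands[order][np.searchsorted(cum, critical_ratio)]

q = kernel_newsvendor(0.5)
# At state 0.5, demand is ~N(100, 10^2), so the optimal order is the
# 0.4-quantile, roughly 100 + 10 * Phi^{-1}(0.4) ~= 97.5
```

Dirichlet process-based weights would replace the fixed-bandwidth kernel with weights induced by a posterior clustering of the observations.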
NASA Technical Reports Server (NTRS)
Clark, William S.; Hall, Kenneth C.
1994-01-01
A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable-coefficient equations that describe the small-amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization that is a conservative linearization of the nonlinear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid, which eliminates extrapolation errors and hence increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the computational accuracy and efficiency of the method and the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock capturing techniques. A comparison of the present unsteady flow predictions to other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one to two orders of magnitude less computational time than traditional time-marching techniques, making the present method a viable design tool for aeroelastic analyses.
Li, Haocheng; Zhang, Yukun; Carroll, Raymond J; Keadle, Sarah Kozey; Sampson, Joshua N; Matthews, Charles E
2017-11-10
A mixed effect model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association of the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension to the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data, which uses a wearable accelerometer device to measure daily movement and energy expenditure information. Our approach is also evaluated by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, Niravkumar D.; Mukherjee, Anamitra; Kaushal, Nitin
Here, we employ a recently developed computational many-body technique to study, for the first time, the half-filled Anderson-Hubbard model at finite temperature and arbitrary correlation (U) and disorder (V) strengths. Interestingly, the narrow zero-temperature metallic range induced by disorder from the Mott insulator expands with increasing temperature in a manner resembling a quantum critical point. Our study of the resistivity temperature scaling T^α for this metal reveals non-Fermi-liquid characteristics. Moreover, a continuous dependence of α on U and V, from linear to nearly quadratic, is observed. We argue that these exotic results arise from a systematic change with U and V of the "effective" disorder, a combination of quenched disorder and intrinsic localized spins.
Receding horizon online optimization for torque control of gasoline engines.
Kang, Mingxin; Shen, Tielong
2016-11-01
This paper proposes a model-based nonlinear receding horizon optimal control scheme for the engine torque tracking problem. The controller design directly employs a nonlinear model derived from the mean-value modeling principle of engine systems, without any linearizing reformulation, and the online optimization is achieved by applying the Continuation/GMRES (generalized minimum residual) approach. Several receding horizon control schemes are designed to investigate the effects of the integral action and of the integral gain selection. Simulation analyses and experimental validations are carried out to demonstrate the real-time optimization performance and control effects of the proposed torque tracking controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
The Evolution of Data-Information-Knowledge-Wisdom in Nursing Informatics.
Ronquillo, Charlene; Currie, Leanne M; Rodney, Paddy
2016-01-01
The data-information-knowledge-wisdom (DIKW) model has been widely adopted in nursing informatics. In this article, we examine the evolution of DIKW in nursing informatics while incorporating critiques from other disciplines. This includes examination of assumptions of linearity and hierarchy and an exploration of the implicit philosophical grounding of the model. Two guiding questions are considered: (1) Does DIKW serve clinical information systems, nurses, or both? and (2) What level of theory does DIKW occupy? The DIKW model has been valuable in advancing the independent field of nursing informatics. We offer that if the model is to continue to move forward, its role and functions must be explicitly addressed.
Explicit asymmetric bounds for robust stability of continuous and discrete-time systems
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang; Antsaklis, Panos J.
1993-01-01
The problem of robust stability in linear systems with parametric uncertainties is considered. Explicit stability bounds on uncertain parameters are derived and expressed in terms of linear inequalities for continuous systems, and inequalities with quadratic terms for discrete-time systems. Cases where system parameters are nonlinear functions of an uncertainty are also examined.
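The kind of asymmetric stability bound the abstract refers to can be seen on a small hypothetical example: for ẋ = (A + p·E)x below, the system is Hurwitz-stable for all p < 2 but for no p ≥ 2, so the admissible interval around the nominal p = 0 is one-sided. A brute-force eigenvalue sweep (not the paper's explicit inequalities) locates it:

```python
import numpy as np

# Hypothetical uncertain continuous-time system x' = (A + p*E) x
A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])
E = np.array([[0.0, 0.0],
              [1.0, 0.0]])

def is_stable(p):
    """Hurwitz test: all eigenvalues of A + p*E strictly in the left half-plane."""
    return bool(np.all(np.linalg.eigvals(A + p * E).real < 0))

ps = np.linspace(-10.0, 10.0, 2001)
stable = np.array([is_stable(p) for p in ps])
p_lo, p_hi = ps[stable].min(), ps[stable].max()
# Analytically: det(sI - A - p*E) = s^2 + 3s + (2 - p), Hurwitz iff p < 2,
# so every swept p down to -10 is stable while the upper bound sits at 2.
```

The explicit linear inequality 2 − p > 0 recovered here is exactly the form of bound the paper derives for continuous systems.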
NASA Astrophysics Data System (ADS)
Evans, Alan C.; Dai, Weiqian; Collins, D. Louis; Neelin, Peter; Marrett, Sean
1991-06-01
We describe the implementation, experience and preliminary results obtained with a 3-D computerized brain atlas for topographical and functional analysis of brain sub-regions. A volume-of-interest (VOI) atlas was produced by manual contouring on 64 adjacent 2 mm-thick MRI slices to yield 60 brain structures in each hemisphere, which could originally be adjusted by global affine transformation or by local interactive adjustments to match individual MRI datasets. We have now added a non-linear deformation (warp) capability (Bookstein, 1989) to the procedure for fitting the atlas to the brain data. Specific target points are identified in both atlas and MRI spaces which define a continuous 3-D warp transformation that maps the atlas onto the individual brain image. The procedure was used to fit MRI brain image volumes from 16 young normal volunteers. Regional volume and positional variability were determined, the latter in such a way as to assess the extent to which previous linear models of brain anatomical variability fail to account for the true variation among normal individuals. Using a linear model for atlas deformation yielded 3-D fits of the MRI data which, when pooled across subjects and brain regions, left a residual mismatch of 6-7 mm as compared with the non-linear model. The results indicate that a substantial component of morphometric variability is not accounted for by linear scaling. This has profound implications for applications which employ stereotactic coordinate systems to map individual brains into a common reference frame: quantitative neuroradiology, stereotactic neurosurgery and cognitive mapping of normal brain function with PET. In the latter case, a non-linear deformation algorithm would allow accurate measurement of individual anatomic variations and the inclusion of such variations in the inter-subject averaging methodologies used for cognitive mapping with PET.
Linear No-Threshold Model VS. Radiation Hormesis
Doss, Mohan
2013-01-01
The atomic bomb survivor cancer mortality data have been used in the past to justify the use of the linear no-threshold (LNT) model for estimating the carcinogenic effects of low dose radiation. An analysis of the recently updated atomic bomb survivor cancer mortality dose-response data shows that the data no longer support the LNT model but are consistent with a radiation hormesis model when a correction is applied for a likely bias in the baseline cancer mortality rate. If the validity of the phenomenon of radiation hormesis is confirmed in prospective human pilot studies, and is applied to the wider population, it could result in a considerable reduction in cancers. The idea of using radiation hormesis to prevent cancers was proposed more than three decades ago, but was never investigated in humans to determine its validity because of the dominance of the LNT model and the consequent carcinogenic concerns regarding low dose radiation. Since cancer continues to be a major health problem and the age-adjusted cancer mortality rates have declined by only ∼10% in the past 45 years, it may be prudent to investigate radiation hormesis as an alternative approach to reduce cancers. Prompt action is urged. PMID:24298226
High-flow oxygen therapy: pressure analysis in a pediatric airway model.
Urbano, Javier; del Castillo, Jimena; López-Herce, Jesús; Gallardo, José A; Solana, María J; Carrillo, Ángel
2012-05-01
The mechanism of high-flow oxygen therapy and the pressures reached in the airway have not been defined. We hypothesized that the flow would generate a low continuous positive pressure, and that elevated flow rates in this model could produce moderate pressures. The objective of this study was to analyze the pressure generated by a high-flow oxygen therapy system in an experimental model of the pediatric airway. An experimental in vitro study was performed. A high-flow oxygen therapy system was connected to 3 types of interface (nasal cannulae, nasal mask, and oronasal mask) and applied to 2 types of pediatric manikin (infant and neonatal). The pressures generated in the circuit, in the airway, and in the pharynx were measured at different flow rates (5, 10, 15, and 20 L/min). The experiment was conducted with and without a leak (mouth sealed and unsealed). Linear regression analyses were performed for each set of measurements. The pressures generated with the different interfaces were very similar. The maximum pressure recorded was 4 cm H(2)O with a flow of 20 L/min via nasal cannulae or nasal mask. When the mouth of the manikin was held open, the pressures reached in the airway and pharynx were undetectable. Linear regression analyses showed a similar linear relationship between flow and the pressures measured in the pharynx (pressure = -0.375 + 0.138 × flow) and in the airway (pressure = -0.375 + 0.158 × flow) under the closed-mouth condition. Consistent with our hypothesis, high-flow oxygen therapy systems produced a low-level CPAP in an experimental pediatric model, even with the use of very high flow rates. This finding suggests that, at least in part, the effects may be due to other mechanisms.
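The reported regression equations can be applied directly; a short sketch evaluates the closed-mouth relationships at the studied flow rates and recovers the reported coefficients from the generated (noise-free, illustrative) points:

```python
import numpy as np

# Reported closed-mouth relationships (cm H2O, flow in L/min):
# pharynx: pressure = -0.375 + 0.138 * flow
# airway:  pressure = -0.375 + 0.158 * flow
flows = np.array([5.0, 10.0, 15.0, 20.0])
pharynx = -0.375 + 0.138 * flows            # noise-free illustration
airway = -0.375 + 0.158 * flows

slope, intercept = np.polyfit(flows, pharynx, 1)
p_at_20 = intercept + slope * 20.0          # predicted pharyngeal pressure
```

At the maximum studied flow of 20 L/min the regression predicts about 2.4 cm H2O in the pharynx, consistent with the low-level CPAP conclusion.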
Implementation of projective measurements with linear optics and continuous photon counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeoka, Masahiro; Sasaki, Masahide; Loock, Peter van
2005-02-01
We investigate the possibility of implementing a given projection measurement using linear optics and arbitrarily fast feedforward based on the continuous detection of photons. In particular, we systematically derive the so-called Dolinar scheme that achieves the minimum-error discrimination of binary coherent states. Moreover, we show that the Dolinar-type approach can also be applied to projection measurements in the regime of photonic-qubit signals. Our results demonstrate that for implementing a projection measurement with linear optics, in principle, unit success probability may be approached even without the use of expensive entangled auxiliary states, as are needed in all known (near-)deterministic linear-optics proposals.
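The benchmark behind the Dolinar scheme is the Helstrom bound for discriminating the binary coherent states |α⟩ and |−α⟩, which depends only on their overlap |⟨α|−α⟩|² = exp(−4α²). A small numeric sketch compares it with the ideal-homodyne (Gaussian-receiver) error probability, which the Dolinar receiver beats at every amplitude:

```python
import math

def helstrom(alpha):
    """Minimum (Helstrom) error probability for |alpha> vs |-alpha>.
    The Dolinar receiver (linear optics, photon counting, fast feedforward)
    attains this bound; overlap |<alpha|-alpha>|^2 = exp(-4*alpha^2)."""
    return 0.5 * (1.0 - math.sqrt(1.0 - math.exp(-4.0 * alpha**2)))

def homodyne(alpha):
    """Error probability of an ideal homodyne receiver (Gaussian limit)."""
    return 0.5 * math.erfc(math.sqrt(2.0) * alpha)

# Positive gap at every amplitude: the feedforward receiver wins
gaps = [(a, homodyne(a) - helstrom(a)) for a in (0.2, 0.5, 1.0)]
```

At α = 1 the Helstrom bound is roughly 0.46%, versus about 2.3% for ideal homodyne detection.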
Abbes, Ilham Ben; Richard, Pierre-Yves; Lefebvre, Marie-Anne; Guilhem, Isabelle; Poirier, Jean-Yves
2013-05-01
Most closed-loop insulin delivery systems rely on model-based controllers to control the blood glucose (BG) level. Simple models of glucose metabolism, which allow easy design of the control law, are limited in their parametric identification from raw data. New control models, and controllers based on them, are needed. A proportional-integral-derivative controller with double phase lead was proposed. Its design was based on a linearization of a new nonlinear control model of the glucose-insulin system in type 1 diabetes mellitus (T1DM) patients, validated with the University of Virginia/Padova T1DM metabolic simulator. A 36 h scenario, including six unannounced meals, was tested in nine virtual adults. A database from a previous trial was used to compare the performance of our controller with previously published results. The scenario was repeated 25 times for each adult in order to take continuous glucose monitoring noise into account. The primary outcome was the time BG levels were in the target range (70-180 mg/dl). Blood glucose values were in the target range for 77% of the time, and below 50 mg/dl and above 250 mg/dl for 0.8% and 0.3% of the time, respectively. The low blood glucose index and high blood glucose index were 1.65 and 3.33, respectively. The linear controller presented, based on the linearization of a new easily identifiable nonlinear model, achieves good glucose control with low exposure to hypoglycemia and hyperglycemia. © 2013 Diabetes Technology Society.
Missing continuous outcomes under covariate dependent missingness in cluster randomised trials
Diaz-Ordaz, Karla; Bartlett, Jonathan W
2016-01-01
Attrition is a common occurrence in cluster randomised trials which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group. PMID:27177885
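The bias mechanism described here is easy to reproduce in a small simulation: when the covariate-dependent missingness mechanism differs between arms, the unadjusted cluster-level analysis of complete records is biased. The simulation design below is illustrative, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_trial(n_clusters=50, m=30, effect=1.0):
    """One cluster-randomised trial; missingness depends on a baseline
    covariate and differs between arms (hypothetical mechanism)."""
    data = []
    for c in range(n_clusters):
        arm = c % 2
        u = rng.normal(0.0, 0.5)                 # cluster random effect
        x = rng.normal(0.0, 1.0, m)              # baseline covariate
        y = effect * arm + 1.0 * x + u + rng.normal(0.0, 1.0, m)
        p_miss = 0.1 + 0.6 * arm * (x > 0)       # arm 1 loses high-x subjects
        keep = rng.uniform(size=m) > p_miss
        data.append((arm, y[keep]))
    return data

def unadjusted_cluster_level(data):
    """Difference of mean cluster-level means, complete records only."""
    means = {0: [], 1: []}
    for arm, y in data:
        if y.size:
            means[arm].append(y.mean())
    return np.mean(means[1]) - np.mean(means[0])

est = [unadjusted_cluster_level(simulate_trial()) for _ in range(200)]
bias = float(np.mean(est) - 1.0)   # clearly negative: high-x (high-y) lost in arm 1
```

A linear mixed model adjusting for the baseline covariate (or multiple imputation with the covariate in the imputation model) removes this bias, which is the paper's central finding.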
Brussee, Janneke M.; Yeo, Tsin W.; Lampah, Daniel A.; Anstey, Nicholas M.
2015-01-01
Impaired organ perfusion in severe falciparum malaria arises from microvascular sequestration of parasitized cells and endothelial dysfunction. Endothelial dysfunction in malaria is secondary to impaired nitric oxide (NO) bioavailability, in part due to decreased plasma concentrations of l-arginine, the substrate for endothelial cell NO synthase. We quantified the time course of the effects of adjunctive l-arginine treatment on endothelial function in 73 patients with moderately severe falciparum malaria derived from previous studies. Three groups of 10 different patients received 3 g, 6 g, or 12 g of l-arginine as a half-hour infusion. The remaining 43 received saline placebo. A pharmacokinetic-pharmacodynamic (PKPD) model was developed to describe the time course of changes in exhaled NO concentrations and reactive hyperemia-peripheral arterial tonometry (RH-PAT) index values describing endothelial function and then used to explore optimal dosing regimens for l-arginine. A PK model describing arginine concentrations in patients with moderately severe malaria was extended with two pharmacodynamic biomeasures, the intermediary biochemical step (NO production) and endothelial function (RH-PAT index). A linear model described the relationship between arginine concentrations and exhaled NO. NO concentrations were linearly related to RH-PAT index. Simulations of dosing schedules using this PKPD model predicted that the time within therapeutic range would increase with increasing arginine dose. However, simulations demonstrated that regimens of continuous infusion over longer periods would prolong the time within the therapeutic range even more. The optimal dosing regimen for l-arginine is likely to be administration schedule dependent. Further studies are necessary to characterize the effects of such continuous infusions of l-arginine on NO and microvascular reactivity in severe malaria. PMID:26482311
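The abstract's conclusion, that a continuous infusion prolongs time within the therapeutic range relative to a short infusion of the same dose, follows directly from one-compartment PK with a linear PD link. A sketch with purely illustrative parameters (not the fitted values from the study):

```python
import numpy as np

# One-compartment PK with zero-order infusion; PD is linear in concentration,
# so time-in-range can be assessed on the concentration itself.
CL, V = 40.0, 20.0          # clearance (L/h), volume of distribution (L)
k = CL / V                  # first-order elimination rate (1/h)

def conc(t, dose, dur):
    """Concentration for `dose` infused at a constant rate over `dur` hours."""
    c_inf = (dose / dur) / CL
    during = c_inf * (1.0 - np.exp(-k * t))
    c_end = c_inf * (1.0 - np.exp(-k * dur))
    after = c_end * np.exp(-k * (t - dur))
    return np.where(t <= dur, during, after)

t = np.linspace(0.0, 8.0, 801)
half_hour = conc(t, dose=12.0, dur=0.5)   # 12 g over half an hour
continuous = conc(t, dose=12.0, dur=8.0)  # same dose as a continuous 8 h infusion

def time_above(c, target=0.02):
    """Hours of the 8 h window spent above a hypothetical therapeutic floor."""
    return float(np.mean(c > target) * 8.0)
```

With these numbers the half-hour infusion stays above the (hypothetical) floor for roughly 2 h, while the continuous infusion stays above it for most of the 8 h window.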
A new description of Earth's wobble modes using Clairaut coordinates: 1. Theory
NASA Astrophysics Data System (ADS)
Rochester, M. G.; Crossley, D. J.; Zhang, Y. L.
2014-09-01
This paper presents a novel mathematical reformulation of the theory of the free wobble/nutation of an axisymmetric reference earth model in hydrostatic equilibrium, using the linear momentum description. The new features of this work consist in the use of (i) Clairaut coordinates (rather than spherical polars), (ii) standard spherical harmonics (rather than generalized spherical surface harmonics), (iii) linear operators (rather than J-square symbols) to represent the effects of rotational and ellipticity coupling between dependent variables of different harmonic degree and (iv) a set of dependent variables all of which are continuous across material boundaries. The resulting infinite system of coupled ordinary differential equations is given explicitly, for an elastic solid mantle and inner core, an inviscid outer core and no magnetic field. The formulation is done to second order in the Earth's ellipticity. To this order it is shown that for wobble modes (in which the lowest harmonic in the displacement field is degree 1 toroidal, with azimuthal order m = ±1), it is sufficient to truncate the chain of coupled displacement fields at the toroidal harmonic of degree 5 in the solid parts of the earth model. In the liquid core, however, the harmonic expansion of displacement can in principle continue to indefinitely high degree at this order of accuracy. The full equations are shown to yield correct results in three simple cases amenable to analytic solution: a general earth model in rigid rotation, the tiltover mode in a homogeneous solid earth model and the tiltover and Chandler periods for an incompressible homogeneous solid earth model. Numerical results, from programmes based on this formulation, are presented in part II of this paper.
Immittance Data Validation by Kramers‐Kronig Relations – Derivation and Implications
2017-01-01
Explicitly based on causality, linearity (superposition) and stability (time invariance), and implicitly on continuity (consistency), finiteness (convergence) and uniqueness (single-valuedness) in the time domain, Kramers-Kronig (KK) integral transform (KKT) relations for immittances are derived as pure mathematical constructs in the complex frequency domain, using the two-sided (bilateral) Laplace integral transform (LT) reduced to the Fourier domain for sufficiently rapidly exponentially decaying, bounded immittances. Novel anti-KK relations are also derived to distinguish LTI (linear, time-invariant) systems from non-linear, unstable and acausal systems. All relations can be used to test KK transformability on the LTI principles of linearity, stability and causality of measured and model data by Fourier transform (FT) in immittance spectroscopy (IS). Integral transform relations are also provided to estimate (conjugate) immittances at zero and infinite frequency, which are particularly useful for normalising and comparing data. Finally, important implications for IS are presented and suggestions for consistent data analysis are made, which apply likewise to complex-valued quantities in many fields of engineering and the natural sciences. PMID:29577007
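A KK-transformability test of the kind described can be checked numerically on the simplest LTI immittance, the Debye relaxation Z(ω) = R/(1 + jωτ): reconstructing the real part from the imaginary part alone via the singularity-subtracted one-sided KK integral (sign convention matching the definitions in the code) should reproduce Z′ closely.

```python
import numpy as np

# Debye immittance Z(w) = R/(1 + j*w*tau): causal, linear, stable (LTI),
# so Z' and Z'' must form a Kramers-Kronig pair.
R, tau = 1.0, 1.0

def z_real(w):
    return R / (1.0 + (w * tau) ** 2)

def z_imag(w):
    return -R * w * tau / (1.0 + (w * tau) ** 2)

def kk_real(w, n=200000, xmax=1.0e4):
    """Reconstruct Z'(w) from Z'' via the singularity-subtracted one-sided
    KK integral; the subtraction makes the pole at x = w removable."""
    x = np.linspace(0.0, xmax, n)
    num = x * z_imag(x) - w * z_imag(w)
    den = x * x - w * w
    near = np.abs(den) < 1e-9
    den[near], num[near] = 1.0, 0.0      # removable point (not hit on this grid)
    return float(-(2.0 / np.pi) * np.trapz(num / den, x))
```

For measured data, a residual between the reconstructed and measured real part beyond the numerical tolerance signals a violation of linearity, stability or causality.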
Visual Tracking via Sparse and Local Linear Coding.
Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan
2015-11-01
The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably among the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is formulated as an optimization problem, which can be efficiently solved by either convex sparse coding or locality-constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient search mechanism and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against state-of-the-art methods in dynamic scenes.
NASA Astrophysics Data System (ADS)
Qiu, Kang; Wang, Li-Fang; Shen, Jian; Yousif, Alssadig A. M.; He, Peng; Shao, Dan-Dan; Zhang, Xiao-Min; Kirunda, John B.; Jia, Ya
2016-11-01
Based on a deterministic continuous model of cell population dynamics in the colonic crypt and in colorectal cancer, we propose four combinations of feedback mechanisms in the differentiation from stem cells (SCs) to transit cells (TCs) and then to differentiated cells (DCs): the double linear (LL), the linear and saturating (LS), the saturating and linear (SL), and the double saturating (SS) feedbacks. The relative fluctuations of the populations of SCs, TCs, and DCs around equilibrium states under the four feedback mechanisms are studied using the Langevin method. With increasing net growth rate of TCs, it is found that the Fano factors of TCs and DCs rise to a peak in a transient phase, and then increase again to infinity in the cases of LS and SS feedbacks. This "up-down-up" characteristic of the Fano factor (like the van der Waals loop) demonstrates that there exists a transient phase between the normal and cancerous phases; these findings suggest that the mathematical model with LS or SS feedback might better elucidate the dynamics of the normal and abnormal (cancerous) phases.
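The Fano factor used as the diagnostic here, Var(N)/E(N), can be estimated by stochastic simulation; a toy single-compartment birth-death process (a hypothetical stand-in for one cell compartment, not the crypt model itself) gives the Poisson baseline value of 1 against which the paper's excursions are measured:

```python
import numpy as np

rng = np.random.default_rng(4)

# Gillespie simulation of a birth-death process: constant production b,
# per-cell loss rate d (illustrative toy model).
b, d = 50.0, 1.0
n, t, t_end = 0, 0.0, 200.0
counts = []
while t < t_end:
    total = b + d * n                           # total event rate
    t += rng.exponential(1.0 / total)           # time to next event
    n += 1 if rng.uniform() * total < b else -1 # birth or death
    counts.append(n)

# Discard the transient, then estimate the Fano factor Var(N)/E(N);
# this steady state is Poisson-like, so the Fano factor is close to 1.
tail = np.array(counts[len(counts) // 2:])
fano = float(tail.var() / tail.mean())
```

In the paper's Langevin treatment, the LS and SS feedbacks drive this ratio well above 1 in the transient phase, which is the signature being tracked.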
NASA Astrophysics Data System (ADS)
Kengne, J.; Jafari, S.; Njitacke, Z. T.; Yousefi Azar Khanian, M.; Cheukem, A.
2017-11-01
Mathematical models (ODEs) describing the dynamics of almost all continuous-time chaotic nonlinear systems (e.g., the Lorenz, Rossler, Chua, or Chen system) involve at least one nonlinear term in addition to linear terms. In this contribution, a novel (and singular) 3D autonomous chaotic system without linear terms is introduced. This system has the special feature of possessing twin strange attractors: an ordinary one, and a symmetric one obtained when time is reversed. The complex behavior of the model is investigated in terms of equilibria and stability, bifurcation diagrams, Lyapunov exponent plots, time series, and Poincaré sections. Some interesting phenomena are found, including period-doubling bifurcation, antimonotonicity (i.e., the concurrent creation and annihilation of periodic orbits), and chaos as the system parameters are varied. Compared to the (unique) case previously reported by Xu and Wang (2014) [31], the system considered in this work displays a more 'elegant' mathematical expression and exhibits richer dynamical behavior. A suitable electronic circuit (i.e., an analog simulator) is designed and used for the investigations. Pspice-based simulation results show very good agreement with the theoretical analysis.
Friction laws at the nanoscale.
Mo, Yifei; Turner, Kevin T; Szlufarska, Izabela
2009-02-26
Macroscopic laws of friction do not generally apply to nanoscale contacts. Although continuum mechanics models have been predicted to break down at the nanoscale, they continue to be applied for lack of a better theory. An understanding of how friction force depends on applied load and contact area at these scales is essential for the design of miniaturized devices with optimal mechanical performance. Here we use large-scale molecular dynamics simulations with realistic force fields to establish friction laws in dry nanoscale contacts. We show that friction force depends linearly on the number of atoms that chemically interact across the contact. By defining the contact area as being proportional to this number of interacting atoms, we show that the macroscopically observed linear relationship between friction force and contact area can be extended to the nanoscale. Our model predicts that as the adhesion between the contacting surfaces is reduced, a transition takes place from nonlinear to linear dependence of friction force on load. This transition is consistent with the results of several nanoscale friction experiments. We demonstrate that the breakdown of continuum mechanics can be understood as a result of the rough (multi-asperity) nature of the contact, and show that roughness theories of friction can be applied at the nanoscale.
Reduced Order Modeling for Prediction and Control of Large-Scale Systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalashnikova, Irina; Arunajatesan, Srinivasan; Barone, Matthew Franklin
2014-05-01
This report describes work performed from June 2012 through May 2014 as a part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the “symmetry inner product”. Attention is then turned to nonlinear conservation laws.
A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier-Stokes equations is derived, and it is demonstrated that if a Galerkin ROM is constructed in this inner product, the ROM system energy will be bounded in a way that is consistent with the behavior of the exact solution to these PDEs, i.e., the ROM will be energy-stable. The viability of the linear as well as nonlinear continuous projection model reduction approaches developed as a part of this project is evaluated on several test cases, including the cavity configuration of interest in the targeted application area. In the second part of this report, some POD/Galerkin approaches for building stable ROMs using discrete projection are explored. It is shown that, for generic linear time-invariant (LTI) systems, a discrete counterpart of the continuous symmetry inner product is a weighted L2 inner product obtained by solving a Lyapunov equation. This inner product was first proposed by Rowley et al., and is termed herein the “Lyapunov inner product”. Comparisons between the symmetry inner product and the Lyapunov inner product are made, and the performance of ROMs constructed using these inner products is evaluated on several benchmark test cases. Also in the second part of this report, a new ROM stabilization approach, termed “ROM stabilization via optimization-based eigenvalue reassignment”, is developed for generic LTI systems. At the heart of this method is a constrained nonlinear least-squares optimization problem that is formulated and solved numerically to ensure accuracy of the stabilized ROM. Numerical studies reveal that the optimization problem is computationally inexpensive to solve, and that the new stabilization approach delivers ROMs that are stable as well as accurate. Summaries of “lessons learned” and perspectives for future work motivated by this LDRD project are provided at the end of each of the two main chapters.
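The “Lyapunov inner product” idea admits a compact numerical illustration. The sketch below is a minimal example under assumed data, not the report's implementation: the 2×2 system matrix is made up. It computes the weight matrix P by solving the Lyapunov equation A^T P + P A = -Q via Kronecker vectorization, so that the weighted energy x^T P x decays along trajectories of a stable LTI system:

```python
import numpy as np

def lyapunov_weight(A, Q=None):
    """Solve A^T P + P A = -Q for the symmetric positive-definite weight P
    of the 'Lyapunov inner product' <x, y>_P = x^T P y, using Kronecker
    vectorization (A must be Hurwitz: all eigenvalues in the left
    half-plane)."""
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    I = np.eye(n)
    # vec(A^T P) = (I kron A^T) vec(P);  vec(P A) = (A^T kron I) vec(P)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(M, -Q.flatten(order="F")).reshape(n, n, order="F")
    return 0.5 * (P + P.T)    # symmetrize against round-off

# made-up stable 2x2 system; energy E(x) = x^T P x decays: dE/dt = -x^T Q x
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
P = lyapunov_weight(A)
residual = A.T @ P + P @ A + np.eye(2)   # should be ~ 0
```

In a ROM setting, a projection that is orthogonal in this weighted inner product inherits the same energy bound, which is the stability mechanism the report describes.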
Huang, Haiying; Du, Qiaosheng; Kang, Xibing
2013-11-01
In this paper, a class of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays is investigated. The jumping parameters are modeled as a continuous-time finite-state Markov chain. First, the existence of an equilibrium point for the addressed neural networks is studied. By utilizing Lyapunov stability theory, stochastic analysis theory and the linear matrix inequality (LMI) technique, new delay-dependent stability criteria are presented in terms of linear matrix inequalities that guarantee the neural networks are globally exponentially stable in the mean square. Numerical simulations are carried out to illustrate the main results. © 2013 ISA. Published by ISA. All rights reserved.
Linear stability analysis of scramjet unstart
NASA Astrophysics Data System (ADS)
Jang, Ik; Nichols, Joseph; Moin, Parviz
2015-11-01
We investigate the bifurcation structure of unstart and restart events in a dual-mode scramjet using the Reynolds-averaged Navier-Stokes equations. The scramjet of interest (HyShot II, Laurence et al., AIAA 2011-2310) operates at a free-stream Mach number of approximately 8, and the length of the combustor chamber is 300 mm. A heat-release model is applied to mimic the combustion process. Pseudo-arclength continuation with Newton-Raphson iteration is used to calculate multiple solution branches. Stability analysis based on linearized dynamics about the solution curves reveals a metric that optimally forewarns unstart. By combining direct and adjoint eigenmodes, structural sensitivity analysis suggests strategies for unstart mitigation, including changing the isolator length. This work is supported by DOE/NNSA and AFOSR.
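Pseudo-arclength continuation, used above to trace solution branches through turning points, can be sketched on a toy fold. The example below uses an illustrative scalar problem f(x, λ) = λ - x², not the scramjet equations: Newton's method is augmented with an arclength constraint so the corrector remains well-posed at the fold, where ∂f/∂x vanishes:

```python
import numpy as np

def f(x, lam):     return lam - x**2   # solutions x = ±sqrt(lam); fold at lam = 0
def f_x(x, lam):   return -2.0 * x
def f_lam(x, lam): return 1.0

def continue_branch(x, lam, ds=0.05, steps=120):
    """Pseudo-arclength continuation: predictor step along the branch
    tangent, then a Newton corrector on f = 0 augmented with the
    arclength constraint, which stays well-posed through the fold."""
    t_prev = np.array([-1.0, 0.0])            # initial marching orientation
    branch = [(x, lam)]
    for _ in range(steps):
        # unit tangent = null vector of the 1x2 Jacobian [f_x, f_lam]
        t = np.array([f_lam(x, lam), -f_x(x, lam)])
        t /= np.linalg.norm(t)
        if t @ t_prev < 0:                    # keep a consistent direction
            t = -t
        xp, lp = x + ds * t[0], lam + ds * t[1]   # predictor
        for _ in range(20):                   # Newton corrector
            F = np.array([f(xp, lp),
                          t[0] * (xp - x) + t[1] * (lp - lam) - ds])
            if np.linalg.norm(F) < 1e-12:
                break
            J = np.array([[f_x(xp, lp), f_lam(xp, lp)],
                          [t[0], t[1]]])
            step = np.linalg.solve(J, -F)
            xp, lp = xp + step[0], lp + step[1]
        x, lam, t_prev = xp, lp, t
        branch.append((x, lam))
    return np.array(branch)

branch = continue_branch(1.0, 1.0)
xs, lams = branch[:, 0], branch[:, 1]       # traverses the fold at lam = 0
```

Note that at the fold the augmented Jacobian stays nonsingular even though ∂f/∂x = 0, which is exactly why continuation in arclength rather than in λ can round the turning point.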
NASA Technical Reports Server (NTRS)
McIlraith, Sheila; Biswas, Gautam; Clancy, Dan; Gupta, Vineet
2005-01-01
This paper reports on an ongoing project to investigate techniques for diagnosing complex dynamical systems that are modeled as hybrid systems. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial, or full failure of component devices. We cast the diagnosis problem as a model selection problem. To reduce the space of potential models under consideration, we exploit techniques from qualitative reasoning to conjecture an initial set of qualitative candidate diagnoses, which induce a smaller set of models. We refine these diagnoses using parameter estimation and model fitting techniques. As a motivating case study, we have examined the problem of diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.
2014-06-20
zooplankton models (Lavery et al., 2007) have shown that the predicted scattering from zooplankton is dominated by copepods, amphipods, and pteropods … which there is significant salinity gradient, the predicted scattering from the seasonal pycnocline during SW06 was not able to account for the … has focused on echoes from relatively small zooplankton, such as pteropods or copepods, potentially in the presence of microstructure or in mixed
Linear Modeling of Rotorcraft for Stability Analysis and Preliminary Design
1993-09-01
[Garbled OCR excerpt of an interactive MATLAB script: "press any key to continue..." prompts, displays of the state-space matrices Amat and Bmat (controls: lateral cyclic, pedal), uncoupled longitudinal/lateral eigenvalue output, and a list of workspace matrix variables (Amat, Bmat, Rcoup, Flataug, Glataug, Rlataug, Plataug, Rlonaug, Plonaug).]
Surface wave tomography of the European crust and upper mantle from ambient seismic noise
NASA Astrophysics Data System (ADS)
LU, Y.; Stehly, L.; Paul, A.
2017-12-01
We present a high-resolution 3-D shear-wave velocity model of the European crust and upper mantle derived from ambient seismic noise tomography. In this study, we collect 4 years of continuous vertical-component seismic recordings from 1293 broadband stations across Europe (10W-35E, 30N-75N). We analyze group velocity dispersion from 5 s to 150 s for cross-correlations of more than 0.8 million virtual source-receiver pairs. 2-D group velocity maps are estimated using adaptive parameterization to accommodate the strong heterogeneity of path coverage. The 3-D velocity model is obtained by merging 1-D models inverted at each pixel through a two-step data-driven inversion algorithm: a non-linear Bayesian Monte Carlo inversion, followed by a linearized inversion. The resulting S-wave velocity model and Moho depth are compared with previous geophysical studies: (1) the crustal model and Moho depth show striking agreement with active seismic imaging results, and even provide new information such as a strong difference in the European Moho along two seismic profiles in the Western Alps (Cifalps and ECORS-CROP); (2) the upper mantle model displays strong similarities with published models even at 150 km depth, which is usually imaged using earthquake records.
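The virtual source-receiver principle behind ambient-noise tomography can be illustrated with a toy correlation. The sketch below is a deliberately simplified setup with a single shared noise source and made-up travel times (real noise interferometry needs sources distributed around the station pair); it shows how cross-correlating two stations' continuous recordings produces a peak at the differential travel time:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0                       # sampling rate, Hz
n = 4000                         # 40 s of "continuous" recording
source = rng.standard_normal(n)  # shared ambient noise source

def record(travel_time_s):
    """Receiver trace: the source delayed by its travel time, plus
    independent instrument noise."""
    d = int(round(travel_time_s * fs))
    trace = np.zeros(n)
    trace[d:] = source[:n - d]
    return trace + 0.1 * rng.standard_normal(n)

trace_a = record(1.0)            # 1.0 s source-to-station-A travel time
trace_b = record(3.5)            # 3.5 s source-to-station-B travel time

# the cross-correlation peaks at the differential (inter-station) travel time
xcorr = np.correlate(trace_b, trace_a, mode="full")
lags = (np.arange(xcorr.size) - (n - 1)) / fs
travel_time = lags[np.argmax(xcorr)]   # expect ~ 2.5 s
```

Stacking many such correlations over months of data, as in the study, improves the signal-to-noise ratio of the emergent travel-time peak.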
Expendable launch vehicle studies
NASA Technical Reports Server (NTRS)
Bainum, Peter M.; Reiss, Robert
1995-01-01
Analytical support studies of expendable launch vehicles concentrate on the stability of the dynamics during launch, especially in or near the region of maximum dynamic pressure. The in-plane dynamic equations of a generic launch vehicle with multiple flexible bending and fuel sloshing modes are developed and linearized. The information from LeRC about the grids, masses, and modes is incorporated into the model. The eigenvalues of the plant are analyzed for several modeling factors: utilizing a diagonal mass matrix, the uniform beam assumption, inclusion of aerodynamics, and the interaction between the aerodynamics and the flexible bending motion. Preliminary PID, LQR, and LQG control designs with sensor and actuator dynamics for this system and simulations are also conducted. The initial analysis comparing PD (proportional-derivative) and full-state-feedback LQR (linear quadratic regulator) control shows that the split-weighted LQR controller has better performance than the PD. In order to meet both the performance and robustness requirements, an H(sub infinity) robust controller for the expendable launch vehicle is developed. The simulation indicates that both the performance and robustness of the H(sub infinity) controller are better than those of the PID and LQG controllers. The modelling and analysis support studies team has continued development of methodology, using eigensensitivity analysis, to solve three classes of discrete eigenvalue equations. In the first class, the matrix elements are non-linear functions of the eigenvector. All non-linear periodic motion can be cast in this form. Here the eigenvector is comprised of the coefficients of complete basis functions spanning the response space, and the eigenvalue is the frequency. The second class of eigenvalue problems studied is the quadratic eigenvalue problem. Solutions for linear viscously damped structures or viscoelastic structures can be reduced to this form.
Particular attention is paid to Maxwell and Kelvin models. The third class of problems consists of linear eigenvalue problems in which the elements of the mass and stiffness matrices are stochastic. Dynamic structural response for which the parameters are given by probabilistic distribution functions, rather than deterministic values, can be cast in this form. Solutions for several problems in each class will be presented.
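The quadratic eigenvalue problem mentioned as the second class has a standard numerical treatment by companion linearization. The sketch below is a generic illustration with a made-up single-degree-of-freedom example, not the study's models; it converts (λ²M + λC + K)x = 0 into an ordinary eigenproblem of twice the size:

```python
import numpy as np

def quadratic_eig(M, C, K):
    """Solve the quadratic eigenvalue problem (lam^2 M + lam C + K) x = 0
    by companion linearization: with z = [x, lam*x], it becomes the
    standard eigenproblem A z = lam z for the block matrix below."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ C]])
    lam, Z = np.linalg.eig(A)
    return lam, Z[:n]   # eigenvalues and the x-part of the eigenvectors

# one-DOF viscously damped oscillator: x'' + 2 x' + 5 x = 0, lam = -1 ± 2i
lam, X = quadratic_eig(np.array([[1.0]]), np.array([[2.0]]), np.array([[5.0]]))
```

For an n-DOF damped structure, the same linearization yields 2n eigenvalues whose complex pairs give the damped natural frequencies and decay rates.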
Genomic prediction based on data from three layer lines using non-linear regression models.
Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L
2014-11-06
Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. 
This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
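The non-linear kernel approach referenced above can be sketched with radial basis function (RBF) kernel ridge regression. The toy below uses simulated genotype-like data; all dimensions, effect sizes, and the hyperparameters gamma and lam are illustrative assumptions, not the paper's fitted models. Accuracy is scored as the correlation between predicted and observed phenotypes, as in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_kernel(X1, X2, gamma=0.01):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_predict(Xtr, ytr, Xte, gamma=0.01, lam=1.0):
    """RBF kernel ridge regression: alpha = (K + lam*I)^-1 y, yhat = K_* alpha."""
    K = rbf_kernel(Xtr, Xtr, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(ytr)), ytr)
    return rbf_kernel(Xte, Xtr, gamma) @ alpha

# toy "genotypes": 0/1/2 allele counts; phenotype with a small non-additive term
X = rng.integers(0, 3, size=(300, 40)).astype(float)
beta = 0.3 * rng.standard_normal(40)
y = X @ beta + 0.5 * X[:, 0] * X[:, 1] + 0.5 * rng.standard_normal(300)

Xtr, ytr, Xte, yte = X[:200], y[:200], X[200:], y[200:]
yhat = kernel_ridge_predict(Xtr, ytr, Xte)
accuracy = np.corrcoef(yhat, yte)[0, 1]   # correlation, as in the study
```

Replacing the RBF kernel with the linear kernel X_tr X_tr^T recovers ridge/GBLUP-style prediction, which is the comparison the paper draws.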
NASA Astrophysics Data System (ADS)
Ferhatoglu, Erhan; Cigeroglu, Ender; Özgüven, H. Nevzat
2018-07-01
In this paper, a new modal superposition method based on a hybrid mode shape concept is developed for the determination of steady state vibration response of nonlinear structures. The method is developed specifically for systems having nonlinearities where the stiffness of the system may take different limiting values. Stiffness variation of these nonlinear systems enables one to define different linear systems corresponding to each value of the limiting equivalent stiffness. Moreover, the response of the nonlinear system is bounded by the confinement of these linear systems. In this study, a modal superposition method utilizing novel hybrid mode shapes which are defined as linear combinations of the modal vectors of the limiting linear systems is proposed to determine periodic response of nonlinear systems. In this method the response of the nonlinear system is written in terms of hybrid modes instead of the modes of the underlying linear system. This provides decrease of the number of modes that should be retained for an accurate solution, which in turn reduces the number of nonlinear equations to be solved. In this way, computational time for response calculation is directly curtailed. In the solution, the equations of motion are converted to a set of nonlinear algebraic equations by using describing function approach, and the numerical solution is obtained by using Newton's method with arc-length continuation. The method developed is applied on two different systems: a lumped parameter model and a finite element model. Several case studies are performed and the accuracy and computational efficiency of the proposed modal superposition method with hybrid mode shapes are compared with those of the classical modal superposition method which utilizes the mode shapes of the underlying linear system.
Varieties of quantity estimation in children.
Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco
2015-06-01
In the number-to-position task, with increasing age and numerical expertise, children's pattern of estimates shifts from a biased (nonlinear) to a formal (linear) mapping. This widely replicated finding concerns symbolic numbers, whereas less is known about other types of quantity estimation. In Experiment 1, Preschool, Grade 1, and Grade 3 children were asked to map continuous quantities, discrete nonsymbolic quantities (numerosities), and symbolic (Arabic) numbers onto a visual line. Numerical quantity was matched for the symbolic and discrete nonsymbolic conditions, whereas cumulative surface area was matched for the continuous and discrete quantity conditions. Crucially, in the discrete condition children's estimation could rely either on the cumulative area or numerosity. All children showed a linear mapping for continuous quantities, whereas a developmental shift from a logarithmic to a linear mapping was observed for both nonsymbolic and symbolic numerical quantities. Analyses on individual estimates suggested the presence of two distinct strategies in estimating discrete nonsymbolic quantities: one based on numerosity and the other based on spatial extent. In Experiment 2, a non-spatial continuous quantity (shades of gray) and new discrete nonsymbolic conditions were added to the set used in Experiment 1. Results confirmed the linear patterns for the continuous tasks, as well as the presence of a subset of children relying on numerosity for the discrete nonsymbolic numerosity conditions despite the availability of continuous visual cues. Overall, our findings demonstrate that estimation of numerical and non-numerical quantities relies on different processing strategies and follows different developmental trajectories. (c) 2015 APA, all rights reserved.
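The logarithmic-to-linear shift in number-line estimates is typically assessed by fitting both mappings and comparing fit. A minimal sketch, in which the presented numbers and the simulated "child" estimates are invented for illustration, is:

```python
import numpy as np

rng = np.random.default_rng(3)

def r_squared(y, yhat):
    return 1.0 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

def fit_mappings(presented, estimated):
    """Fit linear (y = a*x + b) and logarithmic (y = a*ln(x) + b) mappings
    to number-line estimates; return the R^2 of each."""
    lin = np.polyfit(presented, estimated, 1)
    log = np.polyfit(np.log(presented), estimated, 1)
    r2_lin = r_squared(estimated, np.polyval(lin, presented))
    r2_log = r_squared(estimated, np.polyval(log, np.log(presented)))
    return r2_lin, r2_log

# invented data: a "younger child's" compressed estimates on a 0-100 line
presented = np.array([2.0, 3.0, 4.0, 6.0, 18.0, 25.0, 42.0, 67.0, 71.0, 86.0])
estimated = 21.0 * np.log(presented) + rng.normal(0.0, 3.0, presented.size)
r2_lin, r2_log = fit_mappings(presented, estimated)   # log mapping fits better
```

In developmental data, the winning model per child (or the ratio of the two R² values) is what indexes the shift from compressed to formal linear mapping.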
Petrov, Megan E; Weng, Jia; Reid, Kathryn J; Wang, Rui; Ramos, Alberto R; Wallace, Douglas M; Alcantara, Carmela; Cai, Jianwen; Perreira, Krista; Espinoza Giacinto, Rebeca A; Zee, Phyllis C; Sotres-Alvarez, Daniela; Patel, Sanjay R
2018-03-01
Commute time is associated with reduced sleep time, but previous studies have relied on self-reported sleep assessment. The present study investigated the relationships between commute time for employment and objective sleep patterns among non-shift working U.S. Hispanic/Latino adults. From 2010 to 2013, Hispanic/Latino employed, non-shift-working adults (n=760, aged 18-64 years) from the Sueño study, ancillary to the Hispanic Community Health Study/Study of Latinos, reported their total daily commute time to and from work, completed questionnaires on sleep and other health behaviors, and wore wrist actigraphs to record sleep duration, continuity, and variability for 1 week. Survey linear regression models of the actigraphic and self-reported sleep measures regressed on categorized commute time (short: 1-44 minutes; moderate: 45-89 minutes; long: ≥90 minutes) were built adjusting for relevant covariates. For associations that suggested a linear relationship, continuous commute time was modeled as the exposure. Moderation effects by age, sex, income, and depressive symptoms also were explored. Commute time was linearly related to sleep duration on work days such that each additional hour of commute time conferred 15 minutes of sleep loss (p=0.01). Compared with short commutes, individuals with moderate commutes had greater sleep duration variability (p=0.04) and lower interdaily stability (p=0.046, a measure of sleep/wake schedule regularity). No significant associations were detected for self-reported sleep measures. Commute time is significantly associated with actigraphy-measured sleep duration and regularity among Hispanic/Latino adults. Interventions to shorten commute times should be evaluated to help improve sleep habits in this minority population. Copyright © 2017 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Anti-TNF levels in cord blood at birth are associated with anti-TNF type.
Kanis, Shannon L; de Lima, Alison; van der Ent, Cokkie; Rizopoulos, Dimitris; van der Woude, C Janneke
2018-05-15
Pregnancy guidelines for women with Inflammatory Bowel Disease (IBD) provide recommendations regarding anti-TNF cessation during pregnancy, in order to limit fetal exposure. Although infliximab (IFX) leads to higher anti-TNF concentrations in cord blood than adalimumab (ADA), the recommendations are similar. We aimed to demonstrate the effect of anti-TNF cessation during pregnancy on fetal exposure, for IFX and ADA separately. We conducted a prospective single-center cohort study. Women with IBD using IFX or ADA were followed up during pregnancy. In cases of sustained disease remission, anti-TNF was stopped in the third trimester. At birth, the anti-TNF concentration was measured in cord blood. A linear regression model was developed to model anti-TNF concentration in cord blood at birth. In addition, outcomes such as disease activity, pregnancy outcomes, and 1-year health outcomes of infants were collected. We included 131 pregnancies that resulted in a live birth (73 IFX, 58 ADA). At birth, 94 cord blood samples were obtained (52 IFX, 42 ADA), showing significantly higher levels of IFX than ADA (p<0.0001). Anti-TNF type and stop week were used in the linear regression model. During the third trimester, IFX transport across the placenta increases exponentially, whereas ADA transport is limited and increases linearly. Overall, health outcomes were comparable. Our linear regression model shows that ADA may be continued longer during pregnancy, as its transport across the placenta is lower than that of IFX. This may reduce the mother's relapse risk without increasing fetal anti-TNF exposure.
Born, Michel; Marzana, Daniela; Alfieri, Sara; Gavray, Claire
2015-01-01
In this article we examine factors underlying Civic Participation and the intention to continue to participate among local (Study I) and immigrant (Study II) young people living in Belgium and Germany. In Study I, 1,079 young people (M(age) = 19.23, 44.9% males) completed a self-report questionnaire asking about their Civic Participation. Multiple linear regressions reveal (a) a pool of variables significantly linked to Civic Participation: Institutional Trust, Collective-Efficacy, Parents' and Peers' Support, Political Interest, and Motivations; and (b) that Civic Participation, mediated by Participation's Efficacy, explains the Intention to Continue to Participate. An explanatory model of participation and the Intention to Continue to Participate was constructed for the native youth. This model is invariant between the two countries. In Study II, 276 young Turkish immigrants (M(age) = 20.80, 49.3% males) recruited in Belgium and Germany filled out the same questionnaire as in Study I. The same analyses were conducted as for Study I, and they yielded the same results as for the native group, highlighting the invariance of the model between natives and immigrants. Applicative repercussions are discussed.
Improved HDRG decoders for qudit and non-Abelian quantum error correction
NASA Astrophysics Data System (ADS)
Hutter, Adrian; Loss, Daniel; Wootton, James R.
2015-03-01
Hard-decision renormalization group (HDRG) decoders are an important class of decoding algorithms for topological quantum error correction. Due to their versatility, they have been used to decode systems with fractal logical operators, color codes, qudit topological codes, and non-Abelian systems. In this work, we develop a method of performing HDRG decoding which combines strengths of existing decoders and further improves upon them. In particular, we increase the minimal number of errors necessary for a logical error in a system of linear size L from Θ(L^{2/3}) to Ω(L^{1-ε}) for any ε > 0. We apply our algorithm to decoding D(Z_d) quantum double models and a non-Abelian anyon model with Fibonacci-like fusion rules, and show that it indeed significantly outperforms previous HDRG decoders. Furthermore, we provide the first study of continuous error correction with imperfect syndrome measurements for the D(Z_d) quantum double models. The parallelized runtime of our algorithm is poly(log L) for the perfect measurement case. In the continuous case with imperfect syndrome measurements, the averaged runtime is O(1) for Abelian systems, while continuous error correction for non-Abelian anyons stays an open problem.
Doros, Gheorghe; Pencina, Michael; Rybin, Denis; Meisner, Allison; Fava, Maurizio
2013-07-20
Previous authors have proposed the sequential parallel comparison design (SPCD) to address the issue of high placebo response rate in clinical trials. The original use of SPCD focused on binary outcomes, but recent use has since been extended to continuous outcomes that arise more naturally in many fields, including psychiatry. Analytic methods proposed to date for analysis of SPCD trial continuous data included methods based on seemingly unrelated regression and ordinary least squares. Here, we propose a repeated measures linear model that uses all outcome data collected in the trial and accounts for data that are missing at random. An appropriate contrast formulated after the model has been fit can be used to test the primary hypothesis of no difference in treatment effects between study arms. Our extensive simulations show that when compared with the other methods, our approach preserves the type I error even for small sample sizes and offers adequate power and the smallest mean squared error under a wide variety of assumptions. We recommend consideration of our approach for analysis of data coming from SPCD trials. Copyright © 2013 John Wiley & Sons, Ltd.
Cascade model for fluvial geomorphology
NASA Technical Reports Server (NTRS)
Newman, W. I.; Turcotte, D. L.
1990-01-01
Erosional landscapes are generally scale invariant and fractal. Spectral studies provide quantitative confirmation of this statement. Linear theories of erosion will not generate scale-invariant topography. In order to explain the fractal behavior of landscapes, a modified Fourier series has been introduced that is the basis for a renormalization approach. A nonlinear dynamical model has been introduced for the decay of the modified Fourier series coefficients that yields a fractal spectrum. It is argued that a physical basis for this approach is that a fractal (or nearly fractal) distribution of storms (floods) continually renews erosional features on all scales.
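The link between a power-law (fractal) spectrum and scale-invariant topography can be sketched numerically. The example below is an illustrative construction, not the authors' nonlinear cascade model, and the spectral exponent beta = 2 is an assumption rather than a fitted value. It filters white noise in the Fourier domain to synthesize a 1-D profile and then recovers the spectral slope from its periodogram:

```python
import numpy as np

rng = np.random.default_rng(4)

def fractal_profile(n=4096, beta=2.0):
    """Synthesize a 1-D profile with power-law spectrum P(k) ~ k^-beta by
    assigning random phases to power-law Fourier amplitudes (spectral
    filtering of white noise); beta = 2 gives Brownian-like,
    scale-invariant relief."""
    k = np.fft.rfftfreq(n)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-beta / 2.0)           # amplitude ~ sqrt(power)
    phase = rng.uniform(0.0, 2.0 * np.pi, k.size)
    phase[0] = phase[-1] = 0.0                 # DC and Nyquist must be real
    return np.fft.irfft(amp * np.exp(1j * phase), n)

profile = fractal_profile()

# recover the spectral slope from the periodogram by a log-log fit
k = np.fft.rfftfreq(profile.size)[1:]
power = np.abs(np.fft.rfft(profile))[1:] ** 2
slope = np.polyfit(np.log(k), np.log(power), 1)[0]   # expect ~ -2
```

A linear erosion law acting on such a profile damps each Fourier mode independently and so cannot maintain the power-law spectrum, which is the motivation for the nonlinear model above.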
Manpower Substitution and Productivity in Medical Practice
Reinhardt, Uwe E.
1973-01-01
Probably in response to the often alleged physician shortage in this country, concerted research efforts are under way to identify technically feasible opportunities for manpower substitution in the production of ambulatory health care. The approaches range from descriptive studies of the effect of task delegation on output of medical services to rigorous mathematical modeling of health care production by means of linear or continuous production functions. In this article the distinct methodological approaches underlying mathematical models are presented in synopsis, and their inherent strengths and weaknesses are contrasted. The discussion includes suggestions for future research directions. PMID:4586735
Predicting Time to Hospital Discharge for Extremely Preterm Infants
Hintz, Susan R.; Bann, Carla M.; Ambalavanan, Namasivayam; Cotten, C. Michael; Das, Abhik; Higgins, Rosemary D.
2010-01-01
As extremely preterm infant mortality rates have decreased, concerns regarding resource utilization have intensified. Accurate models to predict time to hospital discharge could aid in resource planning, family counseling, and perhaps stimulate quality improvement initiatives. Objectives: For infants <27 weeks estimated gestational age (EGA), to develop, validate and compare several models to predict time to hospital discharge based on time-dependent covariates, and based on the presence of 5 key risk factors as predictors. Patients and Methods: This was a retrospective analysis of infants <27 weeks EGA, born 7/2002-12/2005 and surviving to discharge from a NICHD Neonatal Research Network site. Time to discharge was modeled as a continuous variable (postmenstrual age at discharge, PMAD) and as categorical variables (“Early” and “Late” discharge). Three linear and logistic regression models with time-dependent covariate inclusion were developed (perinatal factors only, perinatal+early neonatal factors, perinatal+early+later factors). Models for Early and Late discharge using the cumulative presence of 5 key risk factors as predictors were also evaluated. Predictive capabilities were compared using the coefficient of determination (R2) for linear models and the AUC of the ROC curve for logistic models. Results: Data from 2254 infants were included. Prediction of PMAD was poor, with only 38% of variation explained by linear models. However, models incorporating later clinical characteristics were more accurate in predicting “Early” or “Late” discharge (full models: AUC 0.76-0.83 vs. perinatal factor models: AUC 0.56-0.69). In simplified key risk factor models, predicted probabilities for Early and Late discharge compared favorably with observed rates. Furthermore, the AUCs (0.75-0.77) were similar to those of models including the full factor set.
Conclusions Prediction of Early or Late discharge is poor if only perinatal factors are considered, but improves substantially with knowledge of later-occurring morbidities. Prediction using a few key risk factors is comparable to full models, and may offer a clinically applicable strategy. PMID:20008430
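The model comparisons above rank the logistic models by the area under the ROC curve. As a minimal illustration (the data below are hypothetical, not from the study), AUC can be computed directly from its rank-statistic definition:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive case scores above a random
    negative case (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities of "Late" discharge for 8 infants
labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.3, 0.6, 0.4, 0.75, 0.2, 0.1]
print(auc(labels, scores))  # 0.875: 14 of 16 positive/negative pairs ranked correctly
```

An AUC of 0.5 corresponds to chance-level discrimination, which is why the perinatal-only models (AUC 0.56-0.69) are described as poor.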
Lin, Ai-Ling; Fox, Peter T; Yang, Yihong; Lu, Hanzhang; Tan, Li-Hai; Gao, Jia-Hong
2009-01-01
The aim of this study was to investigate the relationship between relative cerebral blood flow (delta CBF) and relative cerebral metabolic rate of oxygen (delta CMRO(2)) during continuous visual stimulation (21 min at 8 Hz) with fMRI biophysical models, by simultaneously measuring BOLD, CBF and CBV fMRI signals. The delta CMRO(2) was determined by both a newly calibrated single-compartment model (SCM) and a multi-compartment model (MCM) and was in agreement between these two models (P>0.5). The duration-varying delta CBF and delta CMRO(2) showed a negative correlation with time (r=-0.97, P<0.001); i.e., delta CBF declines while delta CMRO(2) increases during continuous stimulation. This study also illustrated that without properly calibrating the critical parameters employed in the SCM, an incorrect and even an opposite appearance of the flow-metabolism relationship during prolonged visual stimulation (positively linear coupling) can result. The time-dependent negative correlation between flow and metabolism demonstrated in this fMRI study is consistent with a previous PET observation and further supports the view that the increase in CBF is driven by factors other than oxygen demand, and that energy demands will eventually require increased aerobic metabolism as stimulation continues.
Multivariate meta-analysis using individual participant data
Riley, R. D.; Price, M. J.; Jackson, D.; Wardle, M.; Gueyffier, F.; Wang, J.; Staessen, J. A.; White, I. R.
2016-01-01
When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment–covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. PMID:26099484
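The bootstrap route to within-study correlations described above can be sketched in a few lines: resample patients within a single trial, recompute both effect estimates per resample, and correlate them across resamples. This is a toy sketch of the idea on simulated, hypothetical trial data, not the authors' software:

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical IPD for one trial: (treated?, outcome1, outcome2) per patient
patients = [(random.random() < 0.5,
             random.gauss(-5, 10), random.gauss(-3, 6)) for _ in range(200)]
# make the two outcomes correlated within each patient
patients = [(t, y1, 0.6 * y1 + y2) for t, y1, y2 in patients]

def effects(sample):
    """Mean difference (treated minus control) for each outcome."""
    t1 = [y1 for t, y1, y2 in sample if t]
    c1 = [y1 for t, y1, y2 in sample if not t]
    t2 = [y2 for t, y1, y2 in sample if t]
    c2 = [y2 for t, y1, y2 in sample if not t]
    return mean(t1) - mean(c1), mean(t2) - mean(c2)

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Bootstrap: resample patients, re-estimate both effects, then correlate
boot = [effects(random.choices(patients, k=len(patients))) for _ in range(500)]
r = pearson([e1 for e1, _ in boot], [e2 for _, e2 in boot])
print(round(r, 2))  # estimated within-study correlation of the two effect estimates
```

The estimated correlation is what the multivariate meta-analysis model needs for its within-study covariance matrix.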
Albanito, Fabrizio; Lebender, Ulrike; Cornulier, Thomas; Sapkota, Tek B; Brentrup, Frank; Stirling, Clare; Hillier, Jon
2017-03-10
There has been much debate about the uncertainties associated with the estimation of direct and indirect agricultural nitrous oxide (N2O) emissions in developing countries and in particular from tropical regions. In this study, we report an up-to-date review of the information published in peer-review journals on direct N2O emissions from agricultural systems in tropical and sub-tropical regions. We statistically analyze net-N2O-N emissions to estimate tropic-specific annual N2O emission factors (N2O-EFs) using a Generalized Additive Mixed Model (GAMM), which allowed the effects of multiple covariates to be modelled as linear or smooth non-linear continuous functions. Overall the mean N2O-EF was 1.2% for the tropics and sub-tropics, thus within the uncertainty range of the IPCC-EF. On a regional basis, mean N2O-EFs were 1.4% for Africa, 1.1% for Asia, 0.9% for Australia and 1.3% for Central & South America. Our annual N2O-EFs, estimated for a range of fertiliser rates using the available data, do not support recent studies hypothesising a non-linear increase in N2O-EFs as a function of applied N. Our findings highlight that in reporting annual N2O emissions and estimating N2O-EFs, particular attention should be paid to modelling the effect of study length on the response of N2O.
Albanito, Fabrizio; Lebender, Ulrike; Cornulier, Thomas; Sapkota, Tek B.; Brentrup, Frank; Stirling, Clare; Hillier, Jon
2017-01-01
There has been much debate about the uncertainties associated with the estimation of direct and indirect agricultural nitrous oxide (N2O) emissions in developing countries and in particular from tropical regions. In this study, we report an up-to-date review of the information published in peer-review journals on direct N2O emissions from agricultural systems in tropical and sub-tropical regions. We statistically analyze net-N2O-N emissions to estimate tropic-specific annual N2O emission factors (N2O-EFs) using a Generalized Additive Mixed Model (GAMM), which allowed the effects of multiple covariates to be modelled as linear or smooth non-linear continuous functions. Overall the mean N2O-EF was 1.2% for the tropics and sub-tropics, thus within the uncertainty range of the IPCC-EF. On a regional basis, mean N2O-EFs were 1.4% for Africa, 1.1% for Asia, 0.9% for Australia and 1.3% for Central & South America. Our annual N2O-EFs, estimated for a range of fertiliser rates using the available data, do not support recent studies hypothesising a non-linear increase in N2O-EFs as a function of applied N. Our findings highlight that in reporting annual N2O emissions and estimating N2O-EFs, particular attention should be paid to modelling the effect of study length on the response of N2O. PMID:28281637
NASA Astrophysics Data System (ADS)
Albanito, Fabrizio; Lebender, Ulrike; Cornulier, Thomas; Sapkota, Tek B.; Brentrup, Frank; Stirling, Clare; Hillier, Jon
2017-03-01
There has been much debate about the uncertainties associated with the estimation of direct and indirect agricultural nitrous oxide (N2O) emissions in developing countries and in particular from tropical regions. In this study, we report an up-to-date review of the information published in peer-review journals on direct N2O emissions from agricultural systems in tropical and sub-tropical regions. We statistically analyze net-N2O-N emissions to estimate tropic-specific annual N2O emission factors (N2O-EFs) using a Generalized Additive Mixed Model (GAMM), which allowed the effects of multiple covariates to be modelled as linear or smooth non-linear continuous functions. Overall the mean N2O-EF was 1.2% for the tropics and sub-tropics, thus within the uncertainty range of the IPCC-EF. On a regional basis, mean N2O-EFs were 1.4% for Africa, 1.1% for Asia, 0.9% for Australia and 1.3% for Central & South America. Our annual N2O-EFs, estimated for a range of fertiliser rates using the available data, do not support recent studies hypothesising a non-linear increase in N2O-EFs as a function of applied N. Our findings highlight that in reporting annual N2O emissions and estimating N2O-EFs, particular attention should be paid to modelling the effect of study length on the response of N2O.
Three dimensional modelling of earthquake rupture cycles on frictional faults
NASA Astrophysics Data System (ADS)
Simpson, Guy; May, Dave
2017-04-01
We are developing an efficient MPI-parallel numerical method to simulate earthquake sequences on preexisting faults embedded within a three-dimensional viscoelastic half-space. We solve the velocity form of the elasto(visco)dynamic equations using a continuous Galerkin Finite Element Method on an unstructured pentahedral mesh, which thus permits local spatial refinement in the vicinity of the fault. Frictional sliding is coupled to the viscoelastic solid via rate- and state-dependent friction laws using the split-node technique. Our coupled formulation employs a Picard-type non-linear solver with a fully implicit, first-order accurate time integrator that uses an adaptive time step to efficiently evolve the system through multiple seismic cycles. The implementation leverages advanced parallel solvers, preconditioners and linear algebra from the Portable Extensible Toolkit for Scientific Computing (PETSc) library. The model can treat heterogeneous frictional properties and stress states on the fault and surrounding solid as well as non-planar fault geometries. Preliminary tests show that the model successfully reproduces dynamic rupture on a vertical strike-slip fault in a half-space governed by rate-state friction with the ageing law.
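For readers unfamiliar with rate- and state-dependent friction under the ageing (Dieterich) state-evolution law mentioned above, a minimal sketch follows; all parameter values are illustrative, not taken from the abstract:

```python
import math

# Rate-and-state friction: mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/Dc),
# with the ageing law d(theta)/dt = 1 - v*theta/Dc.
mu0, a, b = 0.6, 0.010, 0.015   # reference friction and rate-state parameters
v0, Dc = 1e-6, 1e-4             # reference slip rate (m/s), critical slip distance (m)

def friction(v, theta):
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / Dc)

# Integrate the state variable at a constant slip rate v until it reaches
# its steady state theta_ss = Dc / v (simple explicit Euler for clarity).
v, theta, dt = 1e-5, Dc / v0, 0.01
for _ in range(20000):
    theta += dt * (1.0 - v * theta / Dc)

mu_ss = friction(v, theta)
# At steady state, mu = mu0 + (a - b)*ln(v/v0); here a < b, so friction
# decreases with slip rate (velocity weakening), the condition that
# allows stick-slip (earthquake) cycles to nucleate.
print(round(mu_ss, 4))
```

A production simulator would of course couple this law to the elastic solver at every fault node rather than integrate it in isolation.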
Input-output characterization of an ultrasonic testing system by digital signal analysis
NASA Technical Reports Server (NTRS)
Williams, J. H., Jr.; Lee, S. S.; Karagulle, H.
1986-01-01
Ultrasonic test system input-output characteristics were investigated by directly coupling the transmitting and receiving transducers face to face without a test specimen. Some of the fundamentals of digital signal processing were summarized. Input and output signals were digitized by using a digital oscilloscope, and the digitized data were processed in a microcomputer by using digital signal-processing techniques. The continuous-time test system was modeled as a discrete-time, linear, shift-invariant system. In estimating the unit-sample response and frequency response of the discrete-time system, it was necessary to use digital filtering to remove low-amplitude noise, which interfered with deconvolution calculations. A digital bandpass filter constructed with the assistance of a Blackman window and a rectangular time window were used. Approximations of the impulse response and the frequency response of the continuous-time test system were obtained by linearly interpolating the defining points of the unit-sample response and the frequency response of the discrete-time system. The test system behaved as a linear-phase bandpass filter in the frequency range 0.6 to 2.3 MHz. These frequencies were selected in accordance with the criterion that they were 6 dB below the maximum peak of the amplitude of the frequency response. The output of the system to various inputs was predicted and the results were compared with the corresponding measurements on the system.
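A Blackman-windowed digital bandpass filter of the kind described can be sketched as the difference of two windowed-sinc lowpass filters. The cutoffs and tap count below are illustrative (in normalized cycles/sample), not the paper's 0.6-2.3 MHz design:

```python
import math, cmath

def lowpass(fc, taps):
    """Windowed-sinc lowpass FIR with a Blackman window and unit DC gain.
    fc is the cutoff in cycles/sample; taps must be odd."""
    m = taps - 1
    h = []
    for n in range(taps):
        k = n - m / 2
        s = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
        w = 0.42 - 0.5 * math.cos(2 * math.pi * n / m) + 0.08 * math.cos(4 * math.pi * n / m)
        h.append(s * w)
    g = sum(h)
    return [v / g for v in h]

def bandpass(f1, f2, taps):
    """Bandpass as the difference of two lowpass filters (cut f2 minus cut f1)."""
    lo, hi = lowpass(f1, taps), lowpass(f2, taps)
    return [a - b for b, a in zip(lo, hi)]

def gain(h, f):
    """Magnitude of the filter's frequency response at f cycles/sample."""
    return abs(sum(v * cmath.exp(-2j * math.pi * f * n) for n, v in enumerate(h)))

h = bandpass(0.1, 0.3, 101)
print(gain(h, 0.2), gain(h, 0.0), gain(h, 0.45))  # ~1 in band, ~0 out of band
```

Filtering the digitized input and output records before deconvolution, as the study does, suppresses the low-amplitude out-of-band noise that would otherwise be amplified when dividing spectra.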
Finite elements of nonlinear continua.
NASA Technical Reports Server (NTRS)
Oden, J. T.
1972-01-01
The finite element method is extended to a broad class of practical nonlinear problems, treating both theory and applications from a general and unifying point of view. The thermomechanical principles of continuous media and the properties of the finite element method are outlined, and are brought together to produce discrete physical models of nonlinear continua. The mathematical properties of the models are analyzed, and the numerical solution of the equations governing the discrete models is examined. The application of the models to nonlinear problems in finite elasticity, viscoelasticity, heat conduction, and thermoviscoelasticity is discussed. Other specific topics include the topological properties of finite element models, applications to linear and nonlinear boundary value problems, convergence, continuum thermodynamics, finite elasticity, solutions to nonlinear partial differential equations, and discrete models of the nonlinear thermomechanical behavior of dissipative media.
Automated Interval velocity picking for Atlantic Multi-Channel Seismic Data
NASA Astrophysics Data System (ADS)
Singh, Vishwajit
2016-04-01
This paper describes the challenges in developing and testing a fully automated routine for measuring interval velocities from multi-channel seismic data. Several approaches are employed to build an interactive algorithm that picks interval velocities for 1000-5000 continuous normal moveout (NMO) corrected gathers, replacing the interpreter's effort of manually picking coherent reflections. The detailed steps and pitfalls of picking interval velocities from seismic reflection time measurements are described for each approach. The key ingredients these approaches use at the velocity analysis stage are the semblance grid and a starting model of interval velocity. Basin-Hopping optimization is employed to drive the misfit function toward its minima, and a SLiding-Overlapping Window (SLOW) algorithm is designed to mitigate the non-linearity and ill-posedness of the root-mean-square velocity inversion. Synthetic data case studies assess the performance of the velocity picker, generating models that closely fit the semblance peaks. A similar linear relationship between average depth and reflection time for the synthetic and estimated models supports using the picked interval velocities as the starting model for full waveform inversion, to resolve a more accurate velocity structure of the subsurface. The remaining challenges are (1) building an accurate starting model for resolving the velocity structure of the subsurface, and (2) reducing the computational cost of the algorithm by pre-calculating the semblance grid to make automatic picking more feasible.
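The Basin-Hopping step can be illustrated on a toy one-dimensional misfit (the actual semblance misfit and the SLOW windowing are not reproduced here); this is a minimal sketch of the perturb, locally minimize, keep-the-best loop:

```python
import random

random.seed(7)

def misfit(x):
    """Toy non-convex misfit: local minima near x = -1 and x = +1,
    with the global minimum in the x = -1 basin."""
    return (x * x - 1.0) ** 2 + 0.3 * (x + 1.0)

def local_descent(f, x, step=1e-3, iters=5000):
    """Crude local minimizer: gradient descent on a numerical derivative."""
    for _ in range(iters):
        g = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= step * g
    return x

def basin_hopping(f, x0, hops=50, jump=2.0):
    """Perturb, locally minimize, accept if better: the Basin-Hopping idea."""
    best = local_descent(f, x0)
    for _ in range(hops):
        cand = local_descent(f, best + random.uniform(-jump, jump))
        if f(cand) < f(best):
            best = cand
    return best

x = basin_hopping(misfit, x0=2.0)   # deliberately started in the wrong basin
print(round(x, 3))
```

The random hops let the search escape the local minimum near x = +1 that plain descent from x0 = 2 would be trapped in, which is exactly why such global strategies help with the non-linear velocity misfit.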
Forster, Jeri E.; MaWhinney, Samantha; Ball, Erika L.; Fairclough, Diane
2011-01-01
Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawton, L.J.; Mihalich, J.P.
1995-12-31
The chlorinated alkenes 1,1-dichloroethene (1,1-DCE), tetrachloroethene (PCE), and trichloroethene (TCE) are common environmental contaminants found in soil and groundwater at hazardous waste sites. Recent assessment of data from epidemiology and mechanistic studies indicates that although exposure to 1,1-DCE, PCE, and TCE causes tumor formation in rodents, it is unlikely that these chemicals are carcinogenic to humans. Nevertheless, many state and federal agencies continue to regulate these compounds as carcinogens through the use of the linearized multistage model and resulting cancer slope factor (CSF). The available data indicate that 1,1-DCE, PCE, and TCE should be assessed using a threshold (i.e., reference dose [RfD]) approach rather than a CSF. This paper summarizes the available metabolic, toxicologic, and epidemiologic data that question the use of the linear multistage model (and CSF) for extrapolation from rodents to humans. A comparative analysis of potential risk-based cleanup goals (RBGs) for these three compounds in soil is presented for a hazardous waste site. Goals were calculated using the USEPA CSFs and using a threshold (i.e., RfD) approach. Costs associated with remediation activities required to meet each set of these cleanup goals are presented and compared.
Held, Elizabeth; Cape, Joshua; Tintle, Nathan
2016-01-01
Machine learning methods continue to show promise in the analysis of data from genetic association studies because of the high number of variables relative to the number of observations. However, few best practices exist for the application of these methods. We extend a recently proposed supervised machine learning approach for predicting disease risk by genotypes to be able to incorporate gene expression data and rare variants. We then apply 2 different versions of the approach (radial and linear support vector machines) to simulated data from Genetic Analysis Workshop 19 and compare performance to logistic regression. Method performance was not radically different across the 3 methods, although the linear support vector machine tended to show small gains in predictive ability relative to a radial support vector machine and logistic regression. Importantly, as the number of genes in the models was increased, even when those genes contained causal rare variants, model predictive ability showed a statistically significant decrease in performance for both the radial support vector machine and logistic regression. The linear support vector machine showed more robust performance to the inclusion of additional genes. Further work is needed to evaluate machine learning approaches on larger samples and to evaluate the relative improvement in model prediction from the incorporation of gene expression data.
About well-posed definition of geophysical fields'
NASA Astrophysics Data System (ADS)
Ermokhine, Konstantin; Zhdanova, Ludmila; Litvinova, Tamara
2013-04-01
We introduce a new approach to the downward continuation of geophysical fields based on approximation of the observed data by continued fractions. Key words: downward continuation, continued fraction, Viskovatov's algorithm. Many papers in geophysics are devoted to the downward continuation of geophysical fields from the earth's surface into the lower half-space. A known obstacle to practical use of the method is the field's breakdown phenomenon near the pole closest to the earth's surface. This is explained by an inadequate mathematical description of the studied fields: linear representation of the field in polynomial form, or as a Taylor or Fourier series, leads to essential and unremovable instability of the inverse problem, since a field with singularities in the form of poles in the lower half-space cannot, in principle, be adequately described by such linear constructions. Description of the field by rational fractions is closer to reality; in this case the poles of the function in the lower half-space correspond to the zeros of the denominator. The method proposed below is based on continued fractions. Consider a function measured along a profile and represent it in the form of a Chebyshev series (after first reducing the argument to the interval [-1, 1]). There are many variants for representing a power series by continued fractions, and the regions of convergence of a series and of the corresponding continued fraction may differ essentially. As our investigations have shown, the most suitable mathematical construction for the continuation of geophysical fields is the so-called general C-fraction (where z designates the depth). For constructing the C-fraction corresponding to a power series there exists a rather effective and stable algorithm due to Viskovatov (Viskovatov B., "De la methode generale pour reduire toutes sortes des quantitees en fraction continues", Memoires de l'Academie Imperiale des Sciences de St. Petersburg, 1, 1805).
This fundamentally new algorithm for downward continuation of a field measured at the surface into the underground half-space allows interpretation of geophysical data: building a cross-section and determining the depth, approximate shape, and size of the sources of the fields measured at the surface. The method applies to any geophysical survey: magnetic, gravimetric, electrical, seismic, geochemical, etc. It was tested on model examples and practical data, and the results are confirmed by drilling.
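Viskovatov's algorithm itself is not spelled out in the abstract; a minimal sketch of the idea, converting a truncated power series into the partial terms of a C-fraction a0/(1 + a1 x/(1 + a2 x/(1 + ...))) and evaluating it, could look like this (checked here against the Taylor series of exp(x), not against geophysical data):

```python
from fractions import Fraction
from math import factorial, exp

def recip(c):
    """Reciprocal of a power series mod x^len(c) (requires c[0] != 0)."""
    n = len(c)
    r = [Fraction(0)] * n
    r[0] = 1 / Fraction(c[0])
    for k in range(1, n):
        r[k] = -r[0] * sum(c[j] * r[k - j] for j in range(1, k + 1))
    return r

def viskovatov(c):
    """Partial terms a0, a1, ... of the C-fraction
    a0 / (1 + a1*x / (1 + a2*x / (1 + ...))) matching the series c."""
    cur = [Fraction(v) for v in c]
    terms = []
    while cur and cur[0] != 0:
        a = cur[0]
        terms.append(a)
        if len(cur) == 1:
            break
        g = [a * v for v in recip(cur)]   # the series a / cur
        g[0] -= 1                         # constant term becomes 0
        cur = g[1:]                       # divide by x
    return terms

def eval_cf(terms, x):
    """Evaluate the C-fraction bottom-up at a numeric x."""
    t = float(terms[-1])
    for a in reversed(terms[:-1]):
        t = float(a) / (1.0 + x * t)
    return t

# Continued fraction for exp(x) from its Taylor coefficients 1/k!
terms = viskovatov([Fraction(1, factorial(k)) for k in range(10)])
print(abs(eval_cf(terms, 0.1) - exp(0.1)))  # tiny: the convergent matches exp
```

Exact rational arithmetic (Fraction) sidesteps the numerical instability that makes naive series manipulation unreliable; the continued fraction's convergents behave like Padé approximants and tolerate poles that defeat polynomial representations.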
Gallardo, E; Martínez, L J; Nowak, A K; van der Meulen, H P; Calleja, J M; Tejedor, C; Prieto, I; Granados, D; Taboada, A G; García, J M; Postigo, P A
2010-06-07
We study the optical emission of single semiconductor quantum dots weakly coupled to a photonic-crystal micro-cavity. The linearly polarized emission of a selected quantum dot continuously changes its polarization angle, from nearly perpendicular to the cavity mode polarization at large detuning, to parallel at zero detuning, reversing sign for negative detuning. The linear polarization rotation is qualitatively interpreted in terms of the detuning-dependent mixing of the quantum dot and cavity states. The present result is relevant for achieving continuous control of the linear polarization in single photon emitters.
Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.
Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen
2017-11-01
A new method was developed and implemented into an Excel Visual Basic for Applications (VBA) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
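The horizontal-translation step at the heart of the method can be sketched as follows; the segment values are hypothetical and the full trigonometric bookkeeping of MRCTools is not reproduced:

```python
def interp_time(segment, value):
    """Time at which a (time, level) recession segment, linearly
    interpolated between its measurement points, passes `value`."""
    for (t0, h0), (t1, h1) in zip(segment, segment[1:]):
        if h1 <= value <= h0:          # levels decline along a recession
            return t0 + (h0 - value) * (t1 - t0) / (h0 - h1)
    raise ValueError("value outside segment range")

def overlap(preceding, succeeding):
    """Horizontally translate the succeeding segment so that its vertex
    (highest value) lands on the preceding segment's interpolated line."""
    vt, vh = succeeding[0]                   # vertex: first, highest point
    shift = interp_time(preceding, vh) - vt
    return [(t + shift, h) for t, h in succeeding]

seg1 = [(0.0, 10.0), (2.0, 6.0), (5.0, 3.0)]   # hypothetical recession A
seg2 = [(0.0, 5.0), (3.0, 2.0)]                # hypothetical recession B
moved = overlap(seg1, seg2)
print(moved[0])  # vertex now sits where seg1 crosses level 5.0: (3.0, 5.0)
```

Chaining this operation over all recession segments of a record yields the overlapped point cloud from which the master recession curve is fitted.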
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.
1974-01-01
The dynamics of the Large Space Telescope (LST) control system were studied in order to arrive at a simplified model for computer simulation without loss of accuracy. The frictional nonlinearity of the Control Moment Gyroscope (CMG) control loop was analyzed in a model to obtain data for the following: (1) a continuous describing function for the gimbal friction nonlinearity; (2) a describing function of the CMG nonlinearity using an analytical torque equation; and (3) the discrete describing function and function plots for the CMG frictional nonlinearity. Preliminary computer simulations are shown for the simplified LST system, first without, and then with, analytical torque expressions. Transfer functions of the sampled-data LST system are also described. A final computer simulation is presented which uses elements of the simplified sampled-data LST system with analytical CMG frictional torque expressions.
The value of continuity: Refined isogeometric analysis and fast direct solvers
Garcia, Daniel; Pardo, David; Dalcin, Lisandro; ...
2016-08-24
Here, we propose the use of highly continuous finite element spaces interconnected with low continuity hyperplanes to maximize the performance of direct solvers. Starting from a highly continuous Isogeometric Analysis (IGA) discretization, we introduce C0-separators to reduce the interconnection between degrees of freedom in the mesh. By doing so, both the solution time and best approximation errors are simultaneously improved. We call the resulting method “refined Isogeometric Analysis (rIGA)”. To illustrate the impact of the continuity reduction, we analyze the number of Floating Point Operations (FLOPs), computational times, and memory required to solve the linear system obtained by discretizing the Laplace problem with structured meshes and uniform polynomial orders. Theoretical estimates demonstrate that an optimal continuity reduction may decrease the total computational time by a factor between p² and p³, with p being the polynomial order of the discretization. Numerical results indicate that our proposed refined isogeometric analysis delivers a speed-up factor proportional to p². In a 2D mesh with four million elements and p=5, the linear system resulting from rIGA is solved 22 times faster than the one from highly continuous IGA. In a 3D mesh with one million elements and p=3, the linear system is solved 15 times faster for the refined than for the maximum-continuity isogeometric analysis.
Doona, Christopher J; Feeherry, Florence E; Ross, Edward W
2005-04-15
Predictive microbial models generally rely on the growth of bacteria in laboratory broth to approximate the microbial growth kinetics expected to take place in actual foods under identical environmental conditions. Sigmoidal functions such as the Gompertz or logistic equation accurately model the typical microbial growth curve from the lag to the stationary phase and provide the mathematical basis for estimating parameters such as the maximum growth rate (MGR). Stationary phase data can begin to show a decline and make it difficult to discern which data to include in the analysis of the growth curve, a factor that influences the calculated values of the growth parameters. In contradistinction, the quasi-chemical kinetics model provides additional capabilities in microbial modelling and fits growth-death kinetics (all four phases of the microbial lifecycle, continuously) for a general set of microorganisms in a variety of actual food substrates. The quasi-chemical model is a set of ordinary differential equations (ODEs) that derives from a hypothetical four-step chemical mechanism involving an antagonistic metabolite (quorum sensing), and it successfully fits the kinetics of pathogens (Staphylococcus aureus, Escherichia coli and Listeria monocytogenes) in various foods (bread, turkey meat, ham and cheese) as functions of different hurdles (a_w, pH, temperature and anti-microbial lactate). The calculated value of the MGR depends on whether growth-death data or only growth data are used in the fitting procedure. The quasi-chemical kinetics model is also exploited for use with the novel food processing technology of high-pressure processing. The high-pressure inactivation kinetics of E. coli are explored in a model food system over the pressure (P) range of 207-345 MPa (30,000-50,000 psi) and the temperature (T) range of 30-50 degrees C.
At relatively low combinations of P and T, the inactivation curves are non-linear and exhibit a shoulder prior to a more rapid rate of microbial destruction. In the higher P, T regime, the inactivation plots tend to be linear. In all cases, the quasi-chemical model successfully fit the linear and curvilinear inactivation plots for E. coli in model food systems. The experimental data and the quasi-chemical mathematical model described herein are candidates for inclusion in ComBase, the developing database that combines data and models from the USDA Pathogen Modeling Program and the UK Food MicroModel.
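As a reminder of how the MGR is read off a sigmoidal fit, here is a sketch using Zwietering's modified Gompertz parameterization, in which the maximum slope of the curve equals the parameter mu_m by construction; the parameter values are illustrative, not from the study:

```python
import math

def gompertz(t, A, mu_m, lam):
    """Zwietering's modified Gompertz curve for the log-count increase y(t),
    parameterized directly by the asymptote A, the maximum growth rate
    mu_m, and the lag time lam."""
    return A * math.exp(-math.exp(mu_m * math.e / A * (lam - t) + 1.0))

A, mu_m, lam = 9.0, 0.5, 2.0          # illustrative values (log CFU, 1/h, h)
ts = [i * 0.01 for i in range(6000)]  # 0 to 60 h on a fine grid
ys = [gompertz(t, A, mu_m, lam) for t in ts]

# The steepest slope of the fitted curve is the estimated MGR
mgr = max((y1 - y0) / 0.01 for y0, y1 in zip(ys, ys[1:]))
print(round(mgr, 3))
```

A Gompertz fit only covers lag through stationary phase; the quasi-chemical ODE model is needed once the death phase enters the data, which is why the calculated MGR differs between growth-only and growth-death fits.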
Markon, Kristian E
2010-08-01
The literature suggests that internalizing psychopathology relates to impairment incrementally and gradually. However, the form of this relationship has not been characterized. This form is critical to understanding internalizing psychopathology, as it is possible that internalizing may accelerate in effect at some level of severity, defining a natural boundary of abnormality. Here, a novel method (semiparametric structural equation modeling) was used to model the relationship between internalizing and impairment in a sample of 8,580 individuals from the 2000 British Office for National Statistics Survey of Psychiatric Morbidity, a large, population-representative study of psychopathology. This method allows one to model relationships between latent internalizing and impairment without assuming any particular form a priori, and to compare against models in which the relationship is constrained to be constant and linear. Results suggest that the relationship between internalizing and impairment is in fact linear and constant across the entire range of internalizing variation and that it is impossible to nonarbitrarily define a specific level of internalizing beyond which consequences suddenly become catastrophic in nature. Results demonstrate the phenomenological continuity of internalizing psychopathology, highlight the importance of impairment as well as symptoms, and have clear implications for defining mental disorder. Copyright 2010 APA, all rights reserved
A non-linear data mining parameter selection algorithm for continuous variables
Razavi, Marianne; Brady, Sean
2017-01-01
In this article, we propose a new data mining algorithm by which one can both capture the non-linearity in the data and find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that captures complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs, while providing a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all possible subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829
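The transform-and-select idea can be sketched with a single predictor and a small dictionary of candidate transformations, each scored by its least-squares fit; this is a toy stand-in for the authors' full algorithm, with hypothetical data constructed to be log-linear:

```python
import math

def ols_mse(xs, ys):
    """Mean squared error of the best simple linear fit y ~ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / n

# Candidate transformations of the original predictor
transforms = {
    "identity": lambda x: x,
    "square":   lambda x: x * x,
    "sqrt":     math.sqrt,
    "log":      math.log,
}

xs = [1.0, 2.0, 3.0, 5.0, 8.0, 13.0]
ys = [2.0 * math.log(x) + 1.0 for x in xs]   # hypothetical log-linear data

scores = {name: ols_mse([f(x) for x in xs], ys) for name, f in transforms.items()}
best = min(scores, key=scores.get)
print(best)  # the selected transformation: "log"
```

The full method additionally searches over variable subsets and interaction terms, but the principle is the same: every candidate representation competes on prediction error.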
Continuous wave power scaling in high power broad area quantum cascade lasers
NASA Astrophysics Data System (ADS)
Suttinger, M.; Leshin, J.; Go, R.; Figueiredo, P.; Shu, H.; Lyakh, A.
2018-02-01
Experimental and model results for high power broad area quantum cascade lasers are presented. Continuous wave power scaling from 1.62 W to 2.34 W has been experimentally demonstrated for 3.15 mm-long, high reflection-coated 5.6 μm quantum cascade lasers with a 15-stage active region as the active region width is increased from 10 μm to 20 μm. A semi-empirical model for broad area devices operating in continuous wave mode is presented. The model uses measured pulsed transparency current, injection efficiency, waveguide losses, and differential gain as input parameters. It also takes into account active region self-heating and the sub-linearity of the pulsed power vs. current laser characteristic. The model predicts that an 11% improvement in maximum CW power and increased wall plug efficiency can be achieved from 3.15 mm × 25 μm devices with 21 stages of the same design but half the doping in the active region. For a 16-stage design with a reduced stage thickness of 300 Å, a pulsed roll-over current density of 6 kA/cm², and InGaAs waveguide layers, an optical power increase of 41% is projected. Finally, the model projects that the power level can be increased to 4.5 W for 3.15 mm × 31 μm devices with the baseline configuration if T0 is increased from 140 K for the present design to 250 K.
NASA Astrophysics Data System (ADS)
Aulenbach, B. T.; Burns, D. A.; Shanley, J. B.; Yanai, R. D.; Bae, K.; Wild, A.; Yang, Y.; Dong, Y.
2013-12-01
There are many sources of uncertainty in estimates of streamwater solute flux. Flux is the product of discharge and concentration (summed over time), each of which has measurement uncertainty of its own. Discharge can be measured almost continuously, but concentrations are usually determined from discrete samples, which increases uncertainty depending on sampling frequency and how concentrations are assigned for the periods between samples. Gaps between samples can be estimated by linear interpolation or by models that use the relations between concentration and continuously measured or known variables such as discharge, season, temperature, and time. For this project, developed in cooperation with QUEST (Quantifying Uncertainty in Ecosystem Studies), we evaluated uncertainty for three flux estimation methods and three different sampling frequencies (monthly, weekly, and weekly plus event). The constituents investigated were dissolved NO3, Si, SO4, and dissolved organic carbon (DOC), solutes whose concentration dynamics exhibit strongly contrasting behavior. The evaluation was completed for a 10-year period at five small, forested watersheds in Georgia, New Hampshire, New York, Puerto Rico, and Vermont. Concentration regression models were developed for each solute at each of the three sampling frequencies for all five watersheds. Fluxes were then calculated using (1) a linear interpolation approach, (2) a regression-model method, and (3) the composite method - which combines the regression-model method for estimating concentrations and the linear interpolation method for correcting model residuals to the observed sample concentrations. We considered the best estimates of flux to be derived using the composite method at the highest sampling frequencies.
We also evaluated the importance of sampling frequency and estimation method for flux estimate uncertainty; flux uncertainty depended on the variability characteristics of each solute and varied for different reporting periods (e.g., 10-year study period vs. annual vs. monthly). The usefulness of the two regression-model-based flux estimation approaches depended on the amount of variance in concentrations the regression models could explain. Our results can guide the development of optimal sampling strategies by weighing sampling frequency against improvements in uncertainty in stream flux estimates for solutes with particular characteristics of variability. The appropriate flux estimation method depends on a combination of sampling frequency and the strength of the concentration regression models. Sites: Biscuit Brook (Frost Valley, NY), Hubbard Brook Experimental Forest and LTER (West Thornton, NH), Luquillo Experimental Forest and LTER (Luquillo, Puerto Rico), Panola Mountain (Stockbridge, GA), Sleepers River Research Watershed (Danville, VT)
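The three flux estimation approaches described above can be sketched in a few lines. This is an illustrative Python sketch on synthetic data, not the authors' implementation; the `model` argument stands in for any fitted concentration regression, and the usage example assumes a perfect model so that the composite and regression estimates coincide.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (flux = time integral of concentration x discharge)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def interp_flux(times, q, sample_times, sample_conc):
    """Method 1: linearly interpolate concentration between discrete samples."""
    return _trapz(np.interp(times, sample_times, sample_conc) * q, times)

def regression_flux(times, q, model):
    """Method 2: concentrations from a regression model c = model(Q)."""
    return _trapz(model(q) * q, times)

def composite_flux(times, q, sample_times, sample_conc, model):
    """Method 3 (composite): regression estimate corrected by model residuals
    linearly interpolated between the sampled points."""
    resid = sample_conc - model(np.interp(sample_times, times, q))
    return _trapz((model(q) + np.interp(times, sample_times, resid)) * q, times)
```

With a perfect regression model the residuals vanish, so the composite estimate reduces to the regression estimate, while the interpolation estimate differs only by the coarseness of the sampling.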
Disequilibrium dynamics in a Keynesian model with time delays
NASA Astrophysics Data System (ADS)
Gori, Luca; Guerrini, Luca; Sodini, Mauro
2018-05-01
The aim of this research is to analyse a Keynesian goods market closed economy by considering a continuous-time setup with fixed delays. The work compares dynamic results based on linear and nonlinear adjustment mechanisms through which the aggregate supply (production) reacts to a disequilibrium in the goods market and consumption depends on income at a preceding date. Both analytical and geometrical (stability switching curves) techniques are used to characterise the stability properties of the stationary equilibrium.
Infrastructure Tsunami Could Easily Dwarf Climate Change
NASA Astrophysics Data System (ADS)
Lansing, Stephen
Compared to the physical and biological sciences, complexity has so far had much less impact on mainstream social science. This is not surprising, but it is alarming because we find ourselves in the midst of a planetary-scale transition from the Holocene to the Anthropocene. We have already breached some planetary boundaries for sustainability, but those tipping points are nearly invisible from the perspective of the linear equilibrium models that continue to hold sway in social science...
LMI design method for network-based PID control
NASA Astrophysics Data System (ADS)
Souza, Fernando de Oliveira; Mozelli, Leonardo Amaral; de Oliveira, Maurício Carvalho; Palhares, Reinaldo Martinez
2016-10-01
In this paper, we propose a methodology for the design of networked PID controllers for second-order delayed processes using linear matrix inequalities. The proposed procedure takes into account time-varying delay on the plant, time-varying delays induced by the network, and packet dropouts. The design is carried out entirely using a continuous-time model of the closed-loop system, where time-varying delays are used to represent the sampling and holding occurring in a discrete-time digital PID controller.
Finite Element Study on Continuous Rotating versus Reciprocating Nickel-Titanium Instruments.
El-Anwar, Mohamed I; Yousief, Salah A; Kataia, Engy M; El-Wahab, Tarek M Abd
2016-01-01
In the present study, the GTX and ProTaper continuous rotating endodontic files were numerically compared with the WaveOne reciprocating file using finite element analysis, aiming to provide a low-cost, accurate, and trustworthy comparison and to determine the effect of instrument design and manufacturing material on instrument lifespan. Two 3D finite element models were specially prepared for this comparison. A commercial engineering CAD/CAM package was used to model the full detailed flute geometries of the instruments. Multi-linear materials were defined in the analysis by using real stress-strain data for NiTi and M-Wire. Non-linear static analysis was performed to simulate the instrument inside a root canal with a 45° angle in the apical portion and subjected to 0.3 N·cm torsion. The three simulations in this study showed that M-Wire is slightly more resistant to failure than conventional NiTi. On the other hand, both materials behave similarly under severe locking conditions. For the same instrument geometry, M-Wire instruments may have a longer lifespan than conventional NiTi ones; under severe locking conditions both materials will fail similarly. A larger cross-sectional area (a function of instrument taper) resisted failure better than a smaller one, while the cross-sectional shape and its cutting angles could affect instrument cutting efficiency.
The use of auxiliary variables in capture-recapture and removal experiments
Pollock, K.H.; Hines, J.E.; Nichols, J.D.
1984-01-01
The dependence of animal capture probabilities on auxiliary variables is an important practical problem which has not been considered in the development of estimation procedures for capture-recapture and removal experiments. In this paper the linear logistic binary regression model is used to relate the probability of capture to continuous auxiliary variables. The auxiliary variables could be environmental quantities such as air or water temperature, or characteristics of individual animals, such as body length or weight. Maximum likelihood estimators of the population parameters are considered for a variety of models which all assume a closed population. Testing between models is also considered. The models can also be used when one auxiliary variable is a measure of the effort expended in obtaining the sample.
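The linear logistic binary regression relation named above can be sketched directly: the logit of the capture probability is linear in a continuous auxiliary variable. The covariate, coefficients, and plain gradient-ascent fit below are illustrative assumptions on simulated data, not the authors' maximum-likelihood machinery for closed-population models.

```python
import numpy as np

def capture_prob(x, beta0, beta1):
    """Linear-logistic model: logit of capture probability is linear in covariate x
    (x could be water temperature, body length, weight, ...)."""
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))

def fit_logistic(x, y, lr=0.5, steps=5000):
    """Maximum-likelihood fit by plain gradient ascent (illustrative, not robust)."""
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        p = capture_prob(x, b0, b1)
        b0 += lr * np.sum(y - p) / n        # score for the intercept
        b1 += lr * np.sum((y - p) * x) / n  # score for the slope
    return b0, b1
```

On simulated capture histories the fitted coefficients recover the generating values to within sampling error.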
Random center vortex lines in continuous 3D space-time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Höllwieser, Roman; Institute of Atomic and Subatomic Physics, Vienna University of Technology, Operngasse 9, 1040 Vienna; Altarawneh, Derar
2016-01-22
We present a model of center vortices, represented by closed random lines in continuous 2+1-dimensional space-time. These random lines are modeled as being piece-wise linear and an ensemble is generated by Monte Carlo methods. The physical space in which the vortex lines are defined is a cuboid with periodic boundary conditions. Besides moving, growing and shrinking of the vortex configuration, also reconnections are allowed. Our ensemble therefore contains not a fixed, but a variable number of closed vortex lines. This is expected to be important for realizing the deconfining phase transition. Using the model, we study both vortex percolation and the potential V(R) between quark and anti-quark as a function of distance R at different vortex densities, vortex segment lengths, reconnection conditions and at different temperatures. We have found three deconfinement phase transitions, as a function of density, as a function of vortex segment length, and as a function of temperature. The model reproduces the qualitative features of confinement physics seen in SU(2) Yang-Mills theory.
Instability of Turing patterns in reaction-diffusion-ODE systems.
Marciniak-Czochra, Anna; Karch, Grzegorz; Suzuki, Kanako
2017-02-01
The aim of this paper is to contribute to the understanding of the pattern formation phenomenon in reaction-diffusion equations coupled with ordinary differential equations. Such systems of equations arise, for example, from modeling of interactions between cellular processes such as cell growth, differentiation or transformation and diffusing signaling factors. We focus on stability analysis of solutions of a prototype model consisting of a single reaction-diffusion equation coupled to an ordinary differential equation. We show that such systems are very different from classical reaction-diffusion models. They exhibit diffusion-driven instability (Turing instability) under a condition of autocatalysis of the non-diffusing component. However, the same mechanism which destabilizes constant solutions of such models also destabilizes all continuous spatially heterogeneous stationary solutions, and consequently, there exist no stable Turing patterns in such reaction-diffusion-ODE systems. We provide a rigorous result on the nonlinear instability, which involves the analysis of a continuous spectrum of a linear operator induced by the lack of diffusion in the destabilizing equation. These results are extended to discontinuous patterns for a class of nonlinearities.
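The destabilization mechanism can be seen in a toy linearization. For a kinetics Jacobian J at a constant steady state, with diffusion acting only on the first component, the growth rate of a spatial mode with wavenumber k is the largest real part of the eigenvalues of J with D·k² subtracted from the diffusing component's self-term. The numbers below are illustrative assumptions, with J[1][1] > 0 encoding autocatalysis of the non-diffusing component.

```python
import numpy as np

def growth_rate(k, D, J):
    """Largest real part of the eigenvalues of the mode-k linearization;
    diffusion (coefficient D) acts only on the first component."""
    A = np.array(J, dtype=float)
    A[0, 0] -= D * k**2
    return max(np.linalg.eigvals(A).real)

# Kinetics Jacobian at the steady state (illustrative): stable without
# diffusion, but the non-diffusing component is autocatalytic (J[1][1] > 0).
J = [[-2.0, 1.0], [-2.0, 0.5]]
```

The steady state is stable to homogeneous perturbations (k = 0), yet the growth rate stays positive as k grows without bound, illustrating why arbitrarily fine-grained modes destabilize any candidate pattern.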
LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL
NASA Technical Reports Server (NTRS)
Duke, E. L.
1994-01-01
The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of both extracting linearized engine effects, such as net thrust, torque, and gyroscopic effects, and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case are input to the program through a terminal or formatted data files. All data can be modified interactively from case to case.
The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of interest, or a full non-linear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8 bit bytes. Both an interactive and batch version are included. LINEAR was developed in 1988.
Spatial effects in discrete generation population models.
Carrillo, C; Fife, P
2005-02-01
A framework is developed for constructing a large class of discrete generation, continuous space models of evolving single species populations and finding their bifurcating patterned spatial distributions. Our models involve, in separate stages, the spatial redistribution (through movement laws) and local regulation of the population; and the fundamental properties of these events in a homogeneous environment are found. Emphasis is placed on the interaction of migrating individuals with the existing population through conspecific attraction (or repulsion), as well as on random dispersion. The nature of the competition of these two effects in a linearized scenario is clarified. The bifurcation of stationary spatially patterned population distributions is studied, with special attention given to the role played by that competition.
Spontaneous density fluctuations in granular flow and traffic
NASA Astrophysics Data System (ADS)
Herrmann, Hans J.
It is known that spontaneous density waves appear in granular material flowing through pipes or hoppers. A similar phenomenon is known from traffic jams on highways. Using numerical simulations we show that several types of waves exist and find that the density fluctuations follow a power law spectrum. We also investigate one-dimensional traffic models. If positions and velocities are continuous variables the model shows self-organized criticality driven by the slowest car. Lattice gas and lattice Boltzmann models reproduce the experimentally observed effects. Density waves are spontaneously generated when the viscosity has a non-linear dependence on density or shear rate, as is the case in traffic or granular flow.
Temporal BYY encoding, Markovian state spaces, and space dimension determination.
Xu, Lei
2004-09-01
As a complement to the mainstream temporal coding approaches, this paper examines Markovian state-space temporal models from the perspective of temporal Bayesian Ying-Yang (BYY) learning, offering new insights and results not only for the discrete-state hidden Markov model and its extensions but also for continuous-state linear state-space models and their extensions. In particular, a new learning mechanism makes it possible to select the state number or the dimension of the state space either automatically during adaptive learning or afterwards via model selection criteria obtained from this mechanism. Experiments demonstrate how the proposed approach works.
Transport phenomena in the micropores of plug-type phase separators
NASA Technical Reports Server (NTRS)
Fazah, M. M.
1995-01-01
This study numerically investigates the transport phenomena within and across a porous-plug phase separator. The effects of the temperature differential across a single pore and of the sidewall boundary conditions, i.e., isothermal or linear thermal gradient, are presented and discussed. The effects are quantified in terms of the evaporation mass flux across the boundary and the mean surface temperature. A two-dimensional finite element model is used to solve the continuity, momentum, and energy equations for the liquid. Temperature differentials across the pore interface of 1.0 and 1.5 K are examined and their effect on evaporation flux and mean surface temperature is shown. For isothermal side boundary conditions, the evaporation flux across the pore increases linearly with Delta T. For the case of an imposed linear thermal gradient on the side boundaries, Biot numbers of 0.0, 0.15, and 0.5 are examined. The most significant effect of the Biot number is to lower the overall surface temperature and evaporation flux.
Xu, Xiaole; Chen, Shengyong
2014-01-01
This paper investigates the finite-time consensus problem of leader-following multiagent systems. The dynamical models for all following agents and the leader are assumed to have the same general linear form, and the interconnection topology among the agents is assumed to be switching and undirected. We mostly consider the continuous-time case. By assuming that the states of neighbouring agents are known to each agent, a sufficient condition is established for finite-time consensus via a neighbor-based state feedback protocol. When the states of neighbouring agents are not available and only the outputs of neighbouring agents can be accessed, a distributed observer-based consensus protocol is proposed for each following agent. A sufficient condition is provided in terms of linear matrix inequalities to design the observer-based consensus protocol, which makes the multiagent systems achieve finite-time consensus under switching topologies. Then, we discuss the counterparts for the discrete-time case. Finally, we provide an illustrative example to show the effectiveness of the design approach. PMID:24883367
Input-output characterization of an ultrasonic testing system by digital signal analysis
NASA Technical Reports Server (NTRS)
Karaguelle, H.; Lee, S. S.; Williams, J., Jr.
1984-01-01
The input/output characteristics of an ultrasonic testing system used for stress wave factor measurements were studied. The fundamentals of digital signal processing are summarized. The inputs and outputs are digitized and processed in a microcomputer using digital signal processing techniques. The entire ultrasonic test system, including transducers and all electronic components, is modeled as a discrete-time linear shift-invariant system. Then the impulse response and frequency response of the continuous time ultrasonic test system are estimated by interpolating the defining points in the unit sample response and frequency response of the discrete time system. It is found that the ultrasonic test system behaves as a linear phase bandpass filter. Good results were obtained for rectangular pulse inputs of various amplitudes and durations and for tone burst inputs whose center frequencies are within the passband of the test system and for single cycle inputs of various amplitudes. The input/output limits on the linearity of the system are determined.
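The discrete-time linear shift-invariant modeling step can be illustrated with frequency-domain deconvolution: divide the output spectrum by the input spectrum to estimate the frequency response, then invert to recover the unit sample response. This is a hedged sketch on synthetic data, not the paper's estimation procedure; the regularization constant is an assumption to guard against small spectral bins.

```python
import numpy as np

def estimate_impulse_response(x, y, n_taps):
    """Treat the test system as discrete-time LSI and recover its unit sample
    response by regularized frequency-domain deconvolution."""
    n = len(x) + len(y)  # generous zero-padding so circular == linear convolution
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    H = Y * np.conj(X) / (np.abs(X) ** 2 + 1e-12)  # regularized spectral division
    return np.fft.irfft(H, n)[:n_taps]
```

For a synthetic FIR system driven by a random input, the recovered taps match the true ones essentially to machine precision.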
Reference governors for controlled belt restraint systems
NASA Astrophysics Data System (ADS)
van der Laan, E. P.; Heemels, W. P. M. H.; Luijten, H.; Veldpaus, F. E.; Steinbuch, M.
2010-07-01
Today's restraint systems typically include a number of airbags, and a three-point seat belt with load limiter and pretensioner. For the class of real-time controlled restraint systems, the restraint actuator settings are continuously manipulated during the crash. This paper presents a novel control strategy for these systems. The control strategy developed here is based on a combination of model predictive control and reference management, in which a non-linear device - a reference governor (RG) - is added to a primal closed-loop controlled system. This RG determines an optimal setpoint in terms of injury reduction and constraint satisfaction by solving a constrained optimisation problem. Prediction of the vehicle motion, required to predict future constraint violation, is included in the design and is based on past crash data, using linear regression techniques. Simulation results with MADYMO models show that, with ideal sensors and actuators, a significant reduction (45%) of the peak chest acceleration can be achieved, without prior knowledge of the crash. Furthermore, it is shown that the algorithms are sufficiently fast to be implemented online.
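The reference-governor idea can be sketched for a toy scalar closed loop: at each step, move the applied reference as far toward the desired setpoint as possible while the predicted constrained output stays admissible over a horizon. The system, constraint, and grid search below are illustrative assumptions, far simpler than the crash-specific MPC formulation in the paper.

```python
def reference_governor(x, v_prev, r, a, b, x_max, horizon=50, grid=101):
    """One step of a scalar reference governor: pick the largest kappa in [0, 1]
    such that the governed reference v = v_prev + kappa*(r - v_prev) keeps the
    predicted closed-loop state within |x| <= x_max over the horizon."""
    best = v_prev  # kappa = 0 (hold the previous reference) is assumed feasible
    for i in range(grid):
        kappa = i / (grid - 1)
        v = v_prev + kappa * (r - v_prev)
        xk, ok = x, True
        for _ in range(horizon):
            xk = a * xk + b * v  # predicted closed-loop dynamics
            if abs(xk) > x_max:
                ok = False
                break
        if ok:
            best = v
    return best
```

For a stable loop x⁺ = 0.8·x + 0.2·v with unit steady-state gain and the constraint |x| ≤ 1, a setpoint of 5 is clipped to the largest admissible reference, while an already-admissible setpoint passes through unchanged.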
NASA Astrophysics Data System (ADS)
Vitelli, Vincenzo
2012-02-01
Non-linear sound is an extreme phenomenon typically observed in solids after violent explosions. But granular media are different. Right when they unjam, these fragile and disordered solids exhibit vanishing elastic moduli and sound speed, so that even tiny mechanical perturbations form supersonic shocks. Here, we perform simulations in which two-dimensional jammed granular packings are continuously compressed, and demonstrate that the resulting excitations are strongly nonlinear shocks, rather than linear waves. We capture the full dependence of the shock speed on pressure and compression speed by a surprisingly simple analytical model. We also treat shear shocks within a simplified viscoelastic model of nearly isostatic random networks composed of harmonic springs. In this case, anharmonicity does not originate locally from nonlinear interactions between particles, as in granular media; instead, it emerges from the global architecture of the network. As a result, the diverging width of the shear shocks bears a nonlinear signature of the diverging isostatic length associated with the loss of rigidity in these floppy networks.
NASA Astrophysics Data System (ADS)
Tutcuoglu, A.; Majidi, C.
2014-12-01
Using principles of damped harmonic oscillation with continuous media, we examine electrostatic energy harvesting with a "soft-matter" array of dielectric elastomer (DE) transducers. The array is composed of infinitely thin and deformable electrodes separated by layers of insulating elastomer. During vibration, it deforms longitudinally, resulting in a change in the capacitance and electrical enthalpy of the charged electrodes. Depending on the phase of electrostatic loading, the DE array can function as either an actuator that amplifies small vibrations or a generator that converts these external excitations into electrical power. Both cases are addressed with a comprehensive theory that accounts for the influence of viscoelasticity, dielectric breakdown, and electromechanical coupling induced by Maxwell stress. In the case of a linearized Kelvin-Voigt model of the dielectric, we obtain a closed-form estimate for the electrical power output and a scaling law for DE generator design. For the complete nonlinear model, we obtain the optimal electrostatic voltage input for maximum electrical power output.
On the membrane approximation in isothermal film casting
NASA Astrophysics Data System (ADS)
Hagen, Thomas
2014-08-01
In this work, a one-dimensional model for isothermal film casting is studied. Film casting is an important engineering process to manufacture thin films and sheets from a highly viscous polymer melt. The model equations account for variations in film width and film thickness, and arise from thinness and kinematic assumptions for the free liquid film. The first aspect of our study is a rigorous discussion of the existence and uniqueness of stationary solutions. This objective is approached via the argument principle, exploiting the homotopy invariance of a family of analytic functions. As our second objective, we analyze the linearization of the governing equations about stationary solutions. It is shown that solutions for the associated boundary-initial value problem are given by a strongly continuous semigroup of bounded linear operators. To reach this result, we cast the relevant Cauchy problem in a more accessible form. These transformed equations allow us insight into the regularity of the semigroup, thus yielding the validity of the spectral mapping theorem for the semigroup and the spectrally determined growth property.
Diaz, Francisco J
2016-10-15
We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.
Stone, Mandy L.; Graham, Jennifer L.; Gatotho, Jackline W.
2013-01-01
Cheney Reservoir, located in south-central Kansas, is one of the primary water supplies for the city of Wichita, Kansas. The U.S. Geological Survey has operated a continuous real-time water-quality monitoring station in Cheney Reservoir since 2001; continuously measured physicochemical properties include specific conductance, pH, water temperature, dissolved oxygen, turbidity, fluorescence (wavelength range 650 to 700 nanometers; estimate of total chlorophyll), and reservoir elevation. Discrete water-quality samples were collected during 2001 through 2009 and analyzed for sediment, nutrients, taste-and-odor compounds, cyanotoxins, phytoplankton community composition, actinomycetes bacteria, and other water-quality measures. Regression models were developed to establish relations between discretely sampled constituent concentrations and continuously measured physicochemical properties to compute concentrations of constituents that are not easily measured in real time. The water-quality information in this report is important to the city of Wichita because it allows quantification and characterization of potential constituents of concern in Cheney Reservoir. This report updates linear regression models published in 2006 that were based on data collected during 2001 through 2003. The update uses discrete and continuous data collected during May 2001 through December 2009. Updated models to compute dissolved solids, sodium, chloride, and suspended solids were similar to previously published models. However, several other updated models changed substantially from previously published models. In addition to updating relations that were previously developed, models also were developed for four new constituents, including magnesium, dissolved phosphorus, actinomycetes bacteria, and the cyanotoxin microcystin. 
In addition, a conversion factor of 0.74 was established to convert the Yellow Springs Instruments (YSI) model 6026 turbidity sensor measurements to the newer YSI model 6136 sensor at the Cheney Reservoir site. Because a high percentage of geosmin and microcystin data were below analytical detection thresholds (censored data), multiple logistic regression was used to develop models that best explained the probability of geosmin and microcystin concentrations exceeding relevant thresholds. The geosmin and microcystin models are particularly important because geosmin is a taste-and-odor compound and microcystin is a cyanotoxin.
Chen, Xiaofeng; Song, Qiankun; Li, Zhongshan; Zhao, Zhenjiang; Liu, Yurong
2018-07-01
This paper addresses the problem of stability for continuous-time and discrete-time quaternion-valued neural networks (QVNNs) with linear threshold neurons. Applying the semidiscretization technique to the continuous-time QVNNs, the discrete-time analogs are obtained, which preserve the dynamical characteristics of their continuous-time counterparts. Via the plural decomposition method of quaternions, the homeomorphic mapping theorem, as well as the Lyapunov theorem, some sufficient conditions on the existence, uniqueness, and global asymptotical stability of the equilibrium point are derived for the continuous-time QVNNs and their discrete-time analogs, respectively. Furthermore, a uniform sufficient condition on the existence, uniqueness, and global asymptotical stability of the equilibrium point is obtained for both continuous-time QVNNs and their discrete-time version. Finally, two numerical examples are provided to substantiate the effectiveness of the proposed results.
Structure and Reversibility of 2D von Neumann Cellular Automata Over Triangular Lattice
NASA Astrophysics Data System (ADS)
Uguz, Selman; Redjepov, Shovkat; Acar, Ecem; Akin, Hasan
2017-06-01
Even though the fundamental structure of cellular automata (CA) is a discrete model, their global behavior over many iterations and at large scales can approach that of a continuous model system. CA theory is a rich and useful class of dynamical models in which local information is relayed to neighboring cells to produce global behavior. The mathematical features of the basic model determine the computable values of the mathematical structure of the CA. After modeling the CA structure, an important problem is being able to move forwards and backwards on the CA to understand its behavior in more elegant ways; a notable case is when the CA is reversible. In this paper, we investigate the structure and reversibility of two-dimensional (2D) finite, linear, triangular von Neumann CA with null boundary conditions, considered over the ternary field ℤ3 (i.e. 3-state). We obtain the transition rule matrices for each special case. For given triangular information (transition) rule matrices, we prove which triangular linear 2D von Neumann CA are reversible and which are not. The reversibility of 2D CA is in general a challenging problem. In the present study, the reversibility problem of 2D triangular, linear von Neumann CA with null boundary is resolved completely over the ternary field. As far as we know, there is no study in the literature of the structure and reversibility of 2D linear von Neumann CA on a triangular lattice. Because the basic CA structures are simple enough to investigate mathematically, yet complex enough to produce chaotic systems, we believe the present construction can be applied to many related areas using other transition rules.
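For a linear CA over ℤ3, reversibility reduces to invertibility of the transition rule matrix over the field ℤ3, which can be checked by Gaussian elimination mod 3. This is a generic sketch of that check, not the paper's construction of the triangular von Neumann rule matrices.

```python
def is_invertible_mod3(M):
    """A linear CA over Z_3 is reversible iff its transition rule matrix is
    invertible mod 3; check by Gaussian elimination over Z_3."""
    A = [row[:] for row in M]
    n = len(A)
    for col in range(n):
        # find a row with a nonzero pivot in this column
        pivot = next((r for r in range(col, n) if A[r][col] % 3 != 0), None)
        if pivot is None:
            return False  # rank-deficient mod 3: the CA is not reversible
        A[col], A[pivot] = A[pivot], A[col]
        inv = pow(A[col][col] % 3, -1, 3)  # modular inverse in Z_3 (Python 3.8+)
        A[col] = [(v * inv) % 3 for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] % 3:
                f = A[r][col]
                A[r] = [(A[r][c] - f * A[col][c]) % 3 for c in range(n)]
    return True
```

For instance, a rule matrix with determinant ±3 is singular mod 3 and therefore not reversible, even though it is invertible over the rationals.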
A Thermodynamic Theory Of Solid Viscoelasticity. Part 1: Linear Viscoelasticity.
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Leonov, Arkady I.
2002-01-01
The present series of three consecutive papers develops a general theory for linear and finite solid viscoelasticity. Because the most important objects for nonlinear studies are rubber-like materials, the general approach is specified in a form convenient for solving problems important for the many industries that involve rubber-like materials. General linear and nonlinear theories for non-isothermal deformations of viscoelastic solids are developed based on the quasi-linear approach of non-equilibrium thermodynamics. In this, the first paper of the series, we analyze non-isothermal linear viscoelasticity, which is applicable in a range of small strains not only to all synthetic polymers and bio-polymers but also to some non-polymeric materials. Although the linear case seems to be well developed, there still are reasons to implement a thermodynamic derivation of constitutive equations for solid-like, non-isothermal, linear viscoelasticity. The most important is the thermodynamic modeling of thermo-rheological complexity, i.e., different temperature dependences of relaxation parameters in various parts of the relaxation spectrum. A special structure of interaction matrices is established for the different physical mechanisms contributing to the normal relaxation modes. This structure seems to be in accord with observations, and creates a simple mathematical framework for both continuum and molecular theories of thermo-rheologically complex relaxation phenomena. Finally, a unified approach is briefly discussed that, in principle, allows combining both the long time (discrete) and short time (continuous) descriptions of relaxation behaviors for polymers in the rubbery and glassy regions.
De Vendittis, Emmanuele; Castellano, Immacolata; Cotugno, Roberta; Ruocco, Maria Rosaria; Raimo, Gennaro; Masullo, Mariorosario
2008-01-07
The growth temperature adaptation of six model proteins has been studied in 42 microorganisms belonging to the eubacterial and archaeal kingdoms, covering optimum growth temperatures from 7 to 103 degrees C. The selected proteins include three elongation factors involved in translation, the enzymes glyceraldehyde-3-phosphate dehydrogenase and superoxide dismutase, and the cell division protein FtsZ. The common strategy of protein adaptation from cold to hot environments implies the occurrence of small changes in the amino acid composition, without altering the overall structure of the macromolecule. These continuous adjustments were investigated through parameters related to the amino acid composition of each protein. The average value per residue of mass, volume and accessible surface area allowed an evaluation of the usage of bulky residues, whereas the average hydrophobicity reflected that of hydrophobic residues. The specific proportion of bulky and hydrophobic residues in each protein increased almost linearly with the optimum growth temperature of the host microorganism. This finding agrees with the structural and functional properties exhibited by proteins from differently adapted sources, thus explaining the great compactness or the high flexibility exhibited by (hyper)thermophilic or psychrophilic proteins, respectively. Indeed, heat-adapted proteins incline toward heavier and more hydrophobic residues with respect to their mesophilic counterparts, whereas cold-adapted macromolecules show the opposite behavior, with a certain preference for smaller and less hydrophobic residues. An investigation of the different increase of bulky residues with growth temperature observed across the six model proteins suggests that the role and/or structural organization of protein domains may be relevant.
The significance of the linear correlations between growth temperature and parameters related to the amino acid composition improved when the analysis was collectively carried out on all model proteins.
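Composition parameters of the kind correlated with growth temperature above can be computed directly from a sequence. A sketch using two standard published scales (average residue masses and the Kyte-Doolittle hydropathy scale); the example sequence is arbitrary:

```python
# Sketch: per-residue composition parameters of the kind used in the study --
# average residue mass and average hydropathy -- computed from an amino acid
# sequence. MASS holds average residue masses in Da (water subtracted); KD is
# the Kyte-Doolittle hydropathy scale.

MASS = {
    'A': 71.08, 'R': 156.19, 'N': 114.10, 'D': 115.09, 'C': 103.14,
    'E': 129.12, 'Q': 128.13, 'G': 57.05, 'H': 137.14, 'I': 113.16,
    'L': 113.16, 'K': 128.17, 'M': 131.19, 'F': 147.18, 'P': 97.12,
    'S': 87.08, 'T': 101.10, 'W': 186.21, 'Y': 163.18, 'V': 99.13}

KD = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'E': -3.5, 'Q': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2}

def composition_parameters(seq):
    """Return (mean residue mass, mean hydropathy) for a protein sequence."""
    n = len(seq)
    return (sum(MASS[a] for a in seq) / n, sum(KD[a] for a in seq) / n)
```

Regressing these two quantities against optimum growth temperature across a set of orthologues reproduces the kind of near-linear trend the study reports.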
Dynamics of Transformation from Segregation to Mixed Wealth Cities
Sahasranaman, Anand; Jensen, Henrik Jeldtoft
2016-01-01
We model the dynamics of a variation of the Schelling model for agents described simply by a continuously distributed variable: wealth. Agent movement is not dictated by agent choice as in the classic Schelling model, but by wealth status. Agents move to neighborhoods where their wealth is not less than that of some proportion of their neighbors, the threshold level. As in the classic Schelling model, we find that wealth-based segregation occurs and persists. However, when we introduce uncertainty into the decision to move, that is, when agents are allowed with some probability to move even though the threshold condition is contravened, we find that even for small proportions of such disallowed moves the dynamics no longer yield segregation but instead sharply transition into a persistent mixed wealth distribution, consistent with the empirical findings of Benenson, Hatna, and Or. We investigate the nature of this sharp transformation and find that it is caused by a non-linear relationship between allowed moves (moves where the threshold condition is satisfied) and disallowed moves (moves where it is not). For small increases in disallowed moves, there is a rapid corresponding increase in allowed moves (before the rate of increase tapers off and tends to zero), and it is the effect of this non-linearity on the dynamics of the system that causes the rapid transition from a segregated to a mixed wealth state. The contravention of the tolerance condition, sanctioning disallowed moves, could be interpreted as public policy intervention to drive de-segregation. Our finding therefore suggests that it might require limited, but continually implemented, public intervention, just sufficient to enable a small, persistently sustained fraction of disallowed moves, to trigger the dynamics that drive the transformation from a segregated to a mixed equilibrium. PMID:27861578
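The mechanism described above can be sketched as a toy simulation. This is only an illustrative variant, with an arbitrary neighbourhood size, swap-based movement, and a simple adjacent-difference segregation measure, not the paper's model:

```python
import random

# Toy sketch of a wealth-based Schelling variant: agents on a ring swap
# positions only if, at the destination, each agent's wealth is at least that
# of a threshold fraction of its new neighbors; with probability p_disallowed
# the move is permitted even when the condition fails. All parameters and the
# segregation measure are illustrative, not the paper's.

def satisfied(wealth, i, threshold, k=2):
    n = len(wealth)
    nbrs = [wealth[(i + d) % n] for d in range(-k, k + 1) if d != 0]
    richer_or_equal = sum(wealth[i] >= w for w in nbrs)
    return richer_or_equal / len(nbrs) >= threshold

def segregation(wealth):
    """Mean absolute wealth gap between adjacent agents (low = segregated)."""
    n = len(wealth)
    return sum(abs(wealth[i] - wealth[(i + 1) % n]) for i in range(n)) / n

def simulate(n=200, steps=20000, threshold=0.5, p_disallowed=0.0, seed=1):
    rng = random.Random(seed)
    wealth = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        trial = wealth[:]
        trial[i], trial[j] = trial[j], trial[i]
        ok = satisfied(trial, i, threshold) and satisfied(trial, j, threshold)
        if ok or rng.random() < p_disallowed:
            wealth = trial
    return segregation(wealth)
```

Sweeping p_disallowed from 0 upward and plotting the final segregation measure is the natural way to probe the sharp transition the abstract describes.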
Anomalous dielectric relaxation with linear reaction dynamics in space-dependent force fields.
Hong, Tao; Tang, Zhengming; Zhu, Huacheng
2016-12-28
The anomalous dielectric relaxation of disordered systems with linear reaction dynamics is studied via the continuous time random walk model in the presence of a space-dependent electric field. Two kinds of modified reaction-subdiffusion equations are derived from the master equation for different linear reaction processes: the instantaneous annihilation reaction and the non-instantaneous annihilation reaction. If a constant proportion of walkers is added or removed instantaneously at the end of each step, the result is a modified reaction-subdiffusion equation with a fractional-order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps, the result is a standard linear reaction kinetics term but a fractional-order temporal derivative operating on an anomalous diffusion term. The dielectric polarization is analyzed using Legendre polynomials, and the dielectric properties of both reactions can be expressed through an effective rotational diffusion function and a component concentration function, similar to the standard reaction-diffusion process. The results show that the effective permittivity can describe the dielectric properties in these reactions if the chemical reaction time is much longer than the relaxation time.
Parameter Identification of Static Friction Based on An Optimal Exciting Trajectory
NASA Astrophysics Data System (ADS)
Tu, X.; Zhao, P.; Zhou, Y. F.
2017-12-01
In this paper, we focus on how to improve the identification efficiency of friction parameters in a robot joint. First, a static friction model that is linear in its parameters is adopted so that the servomotor dynamics can be linearized. The traditional exciting trajectory based on a Fourier series is then modified by replacing the constant term with a quintic polynomial to ensure boundary continuity of speed and acceleration. Next, the Fourier-related parameters are optimized by a genetic algorithm (GA) in which the condition number of the regression matrix is set as the fitness function. Finally, compared with a constant-velocity tracking experiment, the friction parameters obtained from the exciting-trajectory experiment give similar results, with the advantage of reduced experiment time.
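The role of the condition number as a fitness function can be made concrete. A sketch, assuming a simple Coulomb-plus-viscous friction model tau = Fc*sign(v) + Fv*v (the linear-in-parameters form; velocity profiles here are illustrative, not the paper's optimized trajectory):

```python
import numpy as np

# Sketch: why the condition number of the regression matrix is a sensible GA
# fitness for friction identification. For a model linear in its parameters,
# tau = Fc*sign(v) + Fv*v, the velocity profile v(t) determines how
# well-conditioned the least-squares problem is.

def regression_matrix(v):
    """One row [sign(v), v] per sample of the linear-in-parameters model."""
    v = np.asarray(v, dtype=float)
    return np.column_stack([np.sign(v), v])

def fitness(v):
    """GA fitness: condition number of the regressor (lower = better excitation)."""
    return np.linalg.cond(regression_matrix(v))

# A trajectory visiting both low and high speeds in both directions excites the
# Coulomb and viscous terms far more independently than a near-constant sweep.
t = np.linspace(0.01, 2 * np.pi, 200)
v_rich = np.sin(t) + 0.5 * np.sin(3 * t)   # varied, sign-changing profile
v_poor = 1.0 + 0.01 * np.sin(t)            # nearly constant velocity
```

For v_poor, the sign and velocity columns are almost collinear, so the condition number explodes and the parameter estimates become very sensitive to noise, which is exactly what the GA-optimized trajectory avoids.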
Eigensensitivity analysis of rotating clamped uniform beams with the asymptotic numerical method
NASA Astrophysics Data System (ADS)
Bekhoucha, F.; Rechak, S.; Cadou, J. M.
2016-12-01
In this paper, free vibrations of rotating clamped Euler-Bernoulli beams with uniform cross section are studied using a continuation method, namely the asymptotic numerical method. The governing equations of motion are derived using Lagrange's method. The kinetic and strain energy expressions are derived via the Rayleigh-Ritz method using a set of hybrid variables and based on a linear deflection assumption. The derived equations are transformed into two eigenvalue problems: the first is a linear gyroscopic eigenvalue problem that couples the lagging and stretching motions through gyroscopic terms, while the second is a standard eigenvalue problem corresponding to the flapping motion. These two eigenvalue problems are transformed into two functionals treated by the continuation method, the asymptotic numerical method. A new method is proposed for the solution of the linear gyroscopic system, based on an augmented system that transforms the original problem into a standard form with real symmetric matrices. Using techniques to resolve the singular problems encountered by the continuation method, evolution curves of the natural frequencies against dimensionless angular velocity are determined. At high angular velocity, singular points due to the linear elastic assumption are computed. Numerical convergence tests are conducted, and the obtained results are compared with exact values; results obtained by continuation are also compared with those computed from the discrete eigenvalue problem.
Quantum Kramers model: Corrections to the linear response theory for continuous bath spectrum
NASA Astrophysics Data System (ADS)
Rips, Ilya
2017-01-01
Decay of the metastable state is analyzed within the quantum Kramers model in the weak-to-intermediate dissipation regime. The decay kinetics in this regime is determined by energy exchange between the unstable mode and the stable modes of the thermal bath. In our previous paper [Phys. Rev. A 42, 4427 (1990), 10.1103/PhysRevA.42.4427], Grabert's perturbative approach to well dynamics in the case of the discrete bath [Phys. Rev. Lett. 61, 1683 (1988), 10.1103/PhysRevLett.61.1683] was extended to account for the second-order terms in the classical equations of motion (EOM) for the stable modes. Accounting for the secular terms reduces the EOM for the stable modes to those of a forced oscillator with time-dependent frequency (TDF oscillator). An analytic expression for the characteristic function of the energy loss of the unstable mode was derived in terms of the generating function of the transition probabilities for the quantum forced TDF oscillator. In this paper, the approach is further developed and applied to the case of a continuous frequency spectrum of the bath. The spectral density functions of the bath of stable modes are expressed in terms of the dissipative properties (the friction function) of the original bath. They simplify considerably for one-dimensional systems, where the density of phonon states is constant. Explicit expressions for the fourth-order corrections to the linear response theory result for the characteristic function of the energy loss and its cumulants are obtained for the particular case of the cubic potential with Ohmic (Markovian) dissipation. The range of validity of the perturbative approach in this case is determined (γ/ω_b < 0.26), which includes the turnover region. The dominant correction to the linear response theory result is associated with the "work function" and leads to a reduction of the average energy loss and its dispersion.
This reduction increases with increasing dissipation strength (up to ~10%) within the range of validity of the approach. We have also calculated corrections to the depopulation factor and the escape rate for the quantum and classical Kramers models. Results for the classical escape rate are in very good agreement with numerical simulations for high barriers. The results can serve as an additional proof of the robustness and accuracy of the linear response theory.
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
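The comparison at the heart of this work, direct factorization versus an iterative, preconditioned solver on a symmetric, diagonally dominant system, can be sketched in a few lines. Here a plain Jacobi (diagonal) preconditioner stands in for the low-stretch spanning-tree preconditioner of the text, and the system is a generic 1-D grid Laplacian with shunt terms, not a power-flow Jacobian:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu, cg, LinearOperator

# Sketch: direct LU solve (the power-flow standard) vs. preconditioned
# conjugate gradients on a symmetric, diagonally dominant (Laplacian-like)
# system. A Jacobi preconditioner stands in for a spanning-tree preconditioner.

n = 500
A = diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_lu = splu(A).solve(b)                       # direct factorization

d_inv = 1.0 / A.diagonal()                    # Jacobi preconditioner M ~ A
M = LinearOperator((n, n), matvec=lambda r: d_inv * r)
x_cg, info = cg(A, b, M=M)                    # iterative, preconditioned solve
```

For systems of this size the two approaches are comparable, but the iterative path scales to much larger networks, where the quality of the preconditioner (the point of the tree-based construction) dominates the iteration count.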
Arnason, T; Albertsdóttir, E; Fikse, W F; Eriksson, S; Sigurdsson, A
2012-02-01
The consequences of assuming a zero environmental covariance between a binary trait 'test-status' and a continuous trait on the estimates of genetic parameters by restricted maximum likelihood and Gibbs sampling and on response from genetic selection when the true environmental covariance deviates from zero were studied. Data were simulated for two traits (one that culling was based on and a continuous trait) using the following true parameters, on the underlying scale: h² = 0.4; r(A) = 0.5; r(E) = 0.5, 0.0 or -0.5. The selection on the continuous trait was applied to five subsequent generations where 25 sires and 500 dams produced 1500 offspring per generation. Mass selection was applied in the analysis of the effect on estimation of genetic parameters. Estimated breeding values were used in the study of the effect of genetic selection on response and accuracy. The culling frequency was either 0.5 or 0.8 within each generation. Each of 10 replicates included 7500 records on 'test-status' and 9600 animals in the pedigree file. Results from bivariate analysis showed unbiased estimates of variance components and genetic parameters when true r(E) = 0.0. For r(E) = 0.5, variance components (13-19% bias) and especially (50-80%) were underestimated for the continuous trait, while heritability estimates were unbiased. For r(E) = -0.5, heritability estimates of test-status were unbiased, while genetic variance and heritability of the continuous trait together with were overestimated (25-50%). The bias was larger for the higher culling frequency. Culling always reduced genetic progress from selection, but the genetic progress was found to be robust to the use of wrong parameter values of the true environmental correlation between test-status and the continuous trait. Use of a bivariate linear-linear model reduced bias in genetic evaluations, when data were subject to culling. © 2011 Blackwell Verlag GmbH.
NASA Astrophysics Data System (ADS)
Dong, Tianyu; Shi, Yi; Liu, Hui; Chen, Feng; Ma, Xikui; Mittra, Raj
2017-12-01
In this work, we present a rigorous approach for analyzing the optical response of multilayered spherical nanoparticles composed of either plasmonic metal or dielectric, when radial symmetry is lost and nonlocality is included. The Lorenz-Mie theory is applied, and a linearized hydrodynamic Drude model as well as the general nonlocal optical response model are employed for the metals. Additional boundary conditions, viz., the continuity of the normal component of the polarization current density and the continuity of the first-order pressure of the free electron density, are incorporated when handling interfaces involving metals. The application of spherical addition theorems enables us to express a spherical harmonic about one origin in terms of spherical harmonics about a different origin, and leads to a linear system of equations for the inward- and outward-field modal coefficients for all the layers in the nanoparticle. Scattering matrices at the interfaces are obtained and cascaded to obtain the expansion coefficients, yielding the final solution. Through extensive modelling of stratified concentric and eccentric metal-containing spherical nanoshells illuminated by a plane wave, we show that, within a nonlocal description, significant modifications of the plasmonic response appear, e.g. a blue-shift in the extinction/scattering spectrum and a broadening of the resonance. In addition, it is demonstrated that core-shell nanostructures provide an option for tunable Fano-resonance generators. The proposed method is capable and flexible enough to analyze the nonlocal response of eccentric hybrid metal-dielectric multilayer structures as well as adjoined metal-containing nanoparticles, even when the number of layers is large.
A powerful and flexible approach to the analysis of RNA sequence count data.
Zhou, Yi-Hui; Xia, Kai; Wright, Fred A
2011-10-01
A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean-variance relationships provides a flexible testing regimen that 'borrows' information across genes, while easily incorporating design effects and additional covariates. We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean-variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. Availability: An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq. Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu. Supplementary data are available at Bioinformatics online.
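The beta-binomial ingredient can be sketched generically: maximize a beta-binomial likelihood for one gene's counts, parameterized by a mean proportion and a precision. This is an illustration of the distributional model only, not the package's actual fitting code:

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

# Sketch: beta-binomial likelihood for one gene's counts y out of totals n,
# with mean proportion mu and precision s (overdispersion shrinks as s grows).
# Generic illustration, not BBSeq's implementation.

def bb_negloglik(params, y, n):
    logit_mu, log_s = params
    mu = 1.0 / (1.0 + np.exp(-logit_mu))
    s = np.exp(log_s)
    a, b = mu * s, (1.0 - mu) * s
    ll = (gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
          + betaln(y + a, n - y + b) - betaln(a, b))
    return -np.sum(ll)

def fit_bb(y, n):
    y, n = np.asarray(y, float), np.asarray(n, float)
    res = minimize(bb_negloglik, x0=[0.0, 1.0], args=(y, n),
                   method="Nelder-Mead")
    logit_mu, log_s = res.x
    return 1.0 / (1.0 + np.exp(-logit_mu)), np.exp(log_s)

mu_hat, s_hat = fit_bb([3, 5, 4, 6], [10, 10, 10, 10])
```

In a GLM setting, logit(mu) would itself be a linear function of design covariates, which is what lets the model handle discrete factors and continuous covariates in one framework.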
Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A
2017-02-01
This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models. 
Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers and may be viable alternative modeling techniques for EE prediction for hip- or thigh-worn accelerometers.
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Petersen, Brian J.; Scott, David D.
1996-01-01
This paper develops a dynamic model for pressure sensors in continuum and rarefied flows with longitudinal temperature gradients. The model was developed from the unsteady Navier-Stokes momentum, energy, and continuity equations and was linearized using small perturbations. The energy equation was decoupled from momentum and continuity assuming a polytropic flow process. Rarefied flow conditions were accounted for using a slip flow boundary condition at the tubing wall. The equations were radially averaged and solved assuming gas properties remain constant along a small tubing element. This fundamental solution was used as a building block for arbitrary geometries where fluid properties may also vary longitudinally in the tube. The problem was solved recursively starting at the transducer and working upstream in the tube. Dynamic frequency response tests were performed for continuum flow conditions in the presence of temperature gradients. These tests validated the recursive formulation of the model. Model steady-state behavior was analyzed using the final value theorem. Tests were performed for rarefied flow conditions and compared to the model steady-state response to evaluate the regime of applicability. Model comparisons were excellent for Knudsen numbers up to 0.6. Beyond this point, molecular effects caused the model analyses to become inaccurate.
NASA Astrophysics Data System (ADS)
Tariqul Islam, Md.; Sturkell, Erik; Sigmundsson, Freysteinn; Drouin, Vincent Jean Paul B.; Ófeigsson, Benedikt G.
2014-05-01
Iceland is located on the Mid-Atlantic Ridge, where the spreading rate is nearly 2 cm/yr. The high rate of magmatism in Iceland is caused by the interaction between the Iceland hotspot and the divergent mid-Atlantic plate boundary. Iceland hosts about 35 active volcanoes or volcanic systems, most of them aligned along the plate boundary. Studies of the best-studied central volcanoes (e.g., Askja, Krafla, Grímsvötn, Katla) suggest shallow magma chambers (< 5 km), which have been modeled successfully as a Mogi source in an elastic and/or elastic-viscoelastic half-space; the viscoelastic half-space is usually assigned a Maxwell rheology with Newtonian viscosity, so the rheology may be oversimplified. Our aim is to study the deformation of the Askja volcano, together with plate spreading in Iceland, using a temperature-dependent non-linear rheology. This allows a continuous variation of rheology, laterally and vertically away from the rift axis and the surface. To implement it, we consider thermo-mechanically coupled models in which the rheology follows dry dislocation creep governed by the temperature distribution. Continuous deflation of the Askja volcanic system is associated with solidification of magma in the magma chamber and post-eruption relaxation; a long time series of levelling data shows that its subsidence decays exponentially. In our preliminary models, a magma chamber at 2.8 km depth with a 0.5 km radius is introduced at the ridge axis as a Mogi source. Simultaneously, far-field stretching of 18.4 mm/yr across the rift axis (measured from 2007 to 2013) is applied to reproduce plate spreading. The predicted surface deformation caused by the combined effect of tectonic and volcanic activity is evaluated against GPS data from 2003-2009 and RADARSAT InSAR data from 2000-2010. During 2003-2009, data from the GPS site OLAF (close to the centre of subsidence) show an average subsidence rate of 19±1 mm/yr relative to the ITRF2005 reference frame.
The MASK (Mid ASKJA) site, another GPS station located above the predicted centre of the magma chamber, correlates well with the OLAF site, about 500 m away. Average subsidence rates derived from the GPS measurements are comparable to the rates derived from the InSAR data. Velocities derived from InSAR show that the yearly maximum subsidence rates in the Askja caldera decrease linearly. The optimized pressure decrease in the magma chamber from the model follows an exponential decay, with P (MPa) = 2.0177 exp(-0.0176x), where x is the number of years (1, 2, 3, ..., 10). The total pressure drop during this 10-year period is 4 MPa, and an additional 4.68 MPa pressure drop may be caused by rheological relaxation.
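An exponential decay law of the kind quoted above is usually estimated by a log-linear least-squares fit. A sketch using the published coefficients to generate noise-free yearly samples and recover them:

```python
import numpy as np

# Sketch: recovering the exponential pressure-decay law quoted above,
# P (MPa) = 2.0177*exp(-0.0176*x), from yearly samples via a log-linear
# least-squares fit -- the standard way such a decay constant is estimated.

x = np.arange(1, 11)                  # years 1..10
P = 2.0177 * np.exp(-0.0176 * x)      # noise-free synthetic samples

slope, intercept = np.polyfit(x, np.log(P), 1)
amplitude, rate = np.exp(intercept), slope
print(amplitude, rate)                # recovers 2.0177 and -0.0176
```

With real geodetic data, noise on P would propagate asymmetrically through the log transform, and a weighted or non-linear fit would be preferable; the log-linear fit is the simple first pass.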
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
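The bias-corrected, transformed-linear rating curve can be sketched on synthetic data: fit log concentration against log discharge, then correct the back-transformation bias with a smearing-type factor (here Duan's estimator; the study's exact correction and data differ):

```python
import numpy as np

# Sketch: a bias-corrected, transformed-linear sediment rating curve. Fit
# log(C) = a + b*log(Q) by least squares, then correct the retransformation
# bias with Duan's smearing factor (mean of exponentiated residuals).
# Synthetic data with illustrative coefficients.

rng = np.random.default_rng(0)
Q = rng.uniform(1.0, 100.0, 200)                       # discharge
logC = 0.5 + 1.5 * np.log(Q) + rng.normal(0, 0.4, Q.size)

b, a = np.polyfit(np.log(Q), logC, 1)
resid = logC - (a + b * np.log(Q))
smear = np.mean(np.exp(resid))                         # Duan smearing factor

def predict_C(q, corrected=True):
    """Concentration from the rating curve, with or without bias correction."""
    c = np.exp(a + b * np.log(q))
    return c * smear if corrected else c
```

Because the residuals in log space have zero mean, Jensen's inequality makes the naive back-transform exp(a + b log Q) systematically underestimate the mean concentration; the smearing factor (always > 1 here) repairs exactly that bias.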
A Parametric Computational Model of the Action Potential of Pacemaker Cells.
Ai, Weiwei; Patel, Nitish D; Roop, Partha S; Malik, Avinash; Andalam, Sidharta; Yip, Eugene; Allen, Nathan; Trew, Mark L
2018-01-01
A flexible, efficient, and verifiable pacemaker cell model is essential to the design of real-time virtual hearts that can be used for closed-loop validation of cardiac devices. A new parametric model of pacemaker action potential is developed to address this need. The action potential phases are modeled using hybrid automaton with one piecewise-linear continuous variable. The model can capture rate-dependent dynamics, such as action potential duration restitution, conduction velocity restitution, and overdrive suppression by incorporating nonlinear update functions. Simulated dynamics of the model compared well with previous models and clinical data. The results show that the parametric model can reproduce the electrophysiological dynamics of a variety of pacemaker cells, such as sinoatrial node, atrioventricular node, and the His-Purkinje system, under varying cardiac conditions. This is an important contribution toward closed-loop validation of cardiac devices using real-time heart models.
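The core idea, a hybrid automaton with one piecewise-linear continuous variable, can be sketched as a small state machine. Rates and thresholds below are illustrative placeholders, not the paper's fitted parameters, and the nonlinear rate-dependent update functions are omitted:

```python
# Minimal sketch of a hybrid-automaton pacemaker cell: one continuous variable
# v advances at a constant (phase-specific) rate, and discrete transitions fire
# on threshold crossings. Rates/thresholds are illustrative, not fitted values.

def simulate(dt=0.001, t_end=3.0):
    v, phase = 0.0, "diastolic"
    rates = {"diastolic": 0.5,          # slow diastolic depolarization
             "upstroke": 200.0,         # fast upstroke
             "repolarization": -2.0}    # linear repolarization
    beats, trace, t = 0, [], 0.0
    while t < t_end:
        v += rates[phase] * dt
        if phase == "diastolic" and v >= 0.3:        # threshold potential
            phase = "upstroke"
        elif phase == "upstroke" and v >= 1.0:       # peak
            phase = "repolarization"
        elif phase == "repolarization" and v <= 0.0:
            v, phase = 0.0, "diastolic"              # cycle complete
            beats += 1
        trace.append(v)
        t += dt
    return beats, trace
```

Rate dependence (restitution, overdrive suppression) would enter by making the rates and thresholds functions of the preceding cycle length at each discrete transition, which is precisely the role of the update functions in the paper's automaton.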
Some estimation formulae for continuous time-invariant linear systems
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Sidhu, G. S.
1975-01-01
In this brief paper we examine a Riccati equation decomposition due to Reid and Lainiotis and apply the result to the continuous time-invariant linear filtering problem. Exploitation of the time-invariant structure leads to integration-free covariance recursions which are of use in covariance analyses and in filter implementations. A super-linearly convergent iterative solution to the algebraic Riccati equation (ARE) is developed. The resulting algorithm, arranged in a square-root form, is thought to be numerically stable and competitive with other ARE solution methods. Certain covariance relations that are relevant to the fixed-point and fixed-lag smoothing problems are also discussed.
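As a point of comparison for super-linearly convergent ARE solvers, the generic Newton-Kleinman iteration (not the authors' square-root, Riccati-decomposition algorithm) solves one Lyapunov equation per step. A sketch with illustrative system matrices, where A is chosen stable so the zero gain is a valid starting point:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch: Newton-Kleinman iteration for the algebraic Riccati equation
#   A'P + P A - P B R^{-1} B' P + Q = 0.
# Each step solves the Lyapunov equation
#   (A - B K)' P + P (A - B K) = -(Q + K' R K),  then updates K = R^{-1} B' P.
# Illustrative matrices; A is stable, so K = 0 is a stabilizing initial gain.

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))
for _ in range(20):
    Acl = A - B @ K
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    K = np.linalg.solve(R, B.T @ P)

residual = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
```

The iteration converges quadratically from any stabilizing gain, which is the same flavor of super-linear convergence the abstract claims for its square-root formulation; the square-root arrangement additionally protects numerical symmetry and positive definiteness of P.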
On the Impact of a Quadratic Acceleration Term in the Analysis of Position Time Series
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Klos, Anna; Bos, Machiel Simon; Hunegnaw, Addisu; Teferle, Felix Norman
2016-04-01
The analysis of Global Navigation Satellite System (GNSS) position time series generally assumes that each of the coordinate component series is described by the sum of a linear rate (velocity) and various periodic terms. The residuals, the deviations between the fitted model and the observations, are then a measure of the epoch-to-epoch scatter and have been used for the analysis of the stochastic character (noise) of the time series. Often the parameters of interest in GNSS position time series are the velocities and their associated uncertainties, which have to be determined with the highest reliability. It is clear that not all GNSS position time series follow this simple linear behaviour. Therefore, we have added an acceleration term in the form of a quadratic polynomial function to the model in order to better describe the non-linear motion in the position time series. This non-linear motion could be a response to purely geophysical processes, for example, elastic rebound of the Earth's crust due to ice mass loss in Greenland, artefacts due to deficiencies in bias mitigation models, for example, of the GNSS satellite and receiver antenna phase centres, or any combination thereof. In this study we have simulated 20 time series with different stochastic characteristics such as white, flicker or random walk noise of length of 23 years. The noise amplitude was assumed at 1 mm/y-/4. Then, we added the deterministic part consisting of a linear trend of 20 mm/y (that represents the averaged horizontal velocity) and accelerations ranging from minus 0.6 to plus 0.6 mm/y2. For all these data we estimated the noise parameters with Maximum Likelihood Estimation (MLE) using the Hector software package without taken into account the non-linear term. In this way we set the benchmark to then investigate how the noise properties and velocity uncertainty may be affected by any un-modelled, non-linear term. 
We determined the velocities and their uncertainties as a function of the acceleration for the different types of noise. Furthermore, we have selected 40 globally distributed stations that have a clear non-linear behaviour from two different International GNSS Service (IGS) analysis centers: JPL (Jet Propulsion Laboratory) and BLT (British Isles continuous GNSS Facility and University of Luxembourg Tide Gauge Benchmark Monitoring (TIGA) Analysis Center). We obtained maximum accelerations of -1.8±1.2 mm/y2 and -4.5±3.3 mm/y2 for the horizontal and vertical components, respectively. The noise analysis tests have shown that the addition of the non-linear term significantly whitened the power spectra of the position time series, i.e. shifted the spectral index from flicker towards white noise.
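The benchmark setup can be sketched in a few lines (not the authors' code; the acceleration value and white-only noise are invented for illustration): fit the deterministic model x(t) = x0 + v·t + (a/2)·t² by least squares and compare it with the linear-only fit.

```python
import numpy as np

# Hedged sketch: synthetic daily series over 23 years with a 20 mm/yr rate
# (from the abstract) and an assumed acceleration of 0.4 mm/yr^2.
rng = np.random.default_rng(0)
t = np.arange(0.0, 23.0, 1.0 / 365.25)             # 23 years of daily epochs
obs = 20.0 * t + 0.5 * 0.4 * t**2 + rng.normal(0.0, 1.0, t.size)

A_lin = np.column_stack([np.ones_like(t), t])              # x0 + v*t
A_acc = np.column_stack([np.ones_like(t), t, 0.5 * t**2])  # ... + (a/2)*t^2

coef_lin, *_ = np.linalg.lstsq(A_lin, obs, rcond=None)
coef_acc, *_ = np.linalg.lstsq(A_acc, obs, rcond=None)

# coef_acc[1] ~ velocity (mm/yr), coef_acc[2] ~ acceleration (mm/yr^2);
# omitting the quadratic term biases the linear-only velocity estimate.
print(round(float(coef_acc[1]), 2), round(float(coef_acc[2]), 2))
```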
User's manual for LINEAR, a FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Patterson, Brian P.; Antoniewicz, Robert F.
1987-01-01
This report documents a FORTRAN program that provides a powerful and flexible tool for the linearization of aircraft models. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
NASA Astrophysics Data System (ADS)
Kamaruddin, Ainur Amira; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Ahmad, Wan Muhamad Amir W.
2014-07-01
Logistic regression analysis examines the influence of various factors on a dichotomous outcome by estimating the probability of the event's occurrence. Logistic regression, also called a logit model, is a statistical procedure used to model dichotomous outcomes. In the logit model, the log odds of the dichotomous outcome are modeled as a linear combination of the predictor variables. The log odds ratio in logistic regression provides a description of the probabilistic relationship between the variables and the outcome. In conducting logistic regression, selection procedures are used to select important predictor variables; diagnostics are used to check that assumptions are valid, including independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers; and a test statistic is calculated to determine the aptness of the model. This study used the binary logistic regression model to investigate overweight and obesity among rural secondary school students on the basis of their demographic profile, medical history, diet and lifestyle. The results indicate that overweight and obesity of students are influenced by obesity in the family and by the interaction between a student's ethnicity and routine meals intake. The odds of a student being overweight and obese are higher for a student with a family history of obesity and for a non-Malay student who frequently takes routine meals as compared to a Malay student.
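The logit relationship described above can be sketched as follows; the coefficients are invented for illustration only and are not the study's estimates.

```python
import numpy as np

# Minimal sketch of the logit model: log-odds as a linear combination of
# two binary predictors (family obesity, routine meals). Invented betas.
beta0, beta_family, beta_meals = -2.0, 1.1, 0.7

def p_overweight(family_obesity, routine_meals):
    """P(outcome = 1) via the logistic function of the linear predictor."""
    log_odds = beta0 + beta_family * family_obesity + beta_meals * routine_meals
    return 1.0 / (1.0 + np.exp(-log_odds))

# exp(beta) is the odds ratio per unit change in the predictor.
odds_ratio_family = float(np.exp(beta_family))
print(round(p_overweight(1, 1), 3), round(odds_ratio_family, 2))
```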
NASA Astrophysics Data System (ADS)
Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.
2016-12-01
Magnetotellurics (MT) is an electromagnetic technique used to model the inner Earth's electrical conductivity structure. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting sea floor stations in particular, due to the strong, nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is thought to be partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions for interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, where the eight nearest electric-field estimates are each given weights determined by the technique, a kind of weighted average. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity.
We are also adapting some of the techniques discussed in Shantsev et al (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian that are used to generate a new forward model during each iteration of the inversion.
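The underlying eight-corner scheme can be sketched as plain tri-linear interpolation (the proposed conductivity-ratio weighting is omitted here; grid and field are synthetic):

```python
import numpy as np

# Sketch of plain tri-linear interpolation on a unit-spaced grid. The
# modification described above would further scale each corner weight by
# cross-boundary conductivity ratios; that factor is left out.
def trilinear(f, x, y, z):
    i, j, k = int(x), int(y), int(z)
    u, v, w = x - i, y - j, z - k
    out = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                weight = ((u if di else 1.0 - u) *
                          (v if dj else 1.0 - v) *
                          (w if dk else 1.0 - w))
                out += weight * f[i + di, j + dj, k + dk]
    return out

# A linear field is reproduced exactly (a basic consistency check).
f = np.fromfunction(lambda i, j, k: 2*i + 3*j + 4*k, (4, 4, 4))
print(trilinear(f, 1.5, 2.25, 0.5))
```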
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
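The base log-linear model that the latent variants extend can be sketched as a softmax over linear scores; the weights below are random placeholders, not trained parameters.

```python
import numpy as np

# Sketch of a plain log-linear (maximum-entropy) classifier:
# class posteriors proportional to exp(W_c . x + b_c).
rng = np.random.default_rng(5)
n_classes, n_features = 10, 64          # e.g. ten digit classes
W = rng.normal(0.0, 0.1, (n_classes, n_features))
b = np.zeros(n_classes)

def posterior(x):
    scores = W @ x + b
    scores -= scores.max()              # numerical stability
    p = np.exp(scores)
    return p / p.sum()

p = posterior(rng.normal(size=n_features))
print(p.shape, round(float(p.sum()), 6))
```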
Vehicular traffic noise prediction using soft computing approach.
Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek
2016-12-01
A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in Patiala city, India. The input variables include the traffic volume per hour, percentage of heavy vehicles and average speed of vehicles. The performance of the four models is compared on the basis of the performance criteria of coefficient of determination, mean square error and accuracy. 10-fold cross-validation is done to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
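The evaluation protocol can be sketched on synthetic data; a plain linear model stands in for the four methods (the study's best-performing Random Forest would slot into the same 10-fold loop), and all data-generating coefficients are invented.

```python
import numpy as np

# Hedged sketch: 10-fold cross-validation of an Leq predictor on synthetic
# traffic data (volume, % heavy vehicles, average speed).
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([rng.uniform(100, 2000, n),   # vehicles per hour
                     rng.uniform(0, 30, n),       # % heavy vehicles
                     rng.uniform(20, 60, n)])     # average speed (km/h)
y = 50 + 0.005*X[:, 0] + 0.2*X[:, 1] + 0.1*X[:, 2] + rng.normal(0, 1, n)

folds = np.array_split(rng.permutation(n), 10)
mse = []
for hold in folds:
    train = np.setdiff1d(np.arange(n), hold)
    A = np.column_stack([np.ones(train.size), X[train]])
    coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    pred = np.column_stack([np.ones(hold.size), X[hold]]) @ coef
    mse.append(float(np.mean((pred - y[hold])**2)))
print(round(float(np.mean(mse)), 2))   # held-out mean squared error
```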
Heteroscedastic Latent Trait Models for Dichotomous Data.
Molenaar, Dylan
2015-09-01
Effort has been devoted to accounting for heteroscedasticity with respect to observed or latent moderator variables in item or test scores. For instance, in the multi-group generalized linear latent trait model, it can be tested whether the observed (polychoric) covariance matrix differs across the levels of an observed moderator variable. In the case that heteroscedasticity arises across the latent trait itself, existing models commonly distinguish between heteroscedastic residuals and a skewed trait distribution. These models have valuable applications in intelligence, personality and psychopathology research. However, existing approaches are limited to continuous and polytomous data, while dichotomous data are common in intelligence and psychopathology research. Therefore, in the present paper, a heteroscedastic latent trait model is presented for dichotomous data. The model is studied in a simulation study and applied to data pertaining to alcohol use and cognitive ability.
Bayesian integration and non-linear feedback control in a full-body motor task.
Stevenson, Ian H; Fernandes, Hugo L; Vilares, Iris; Wei, Kunlin; Körding, Konrad P
2009-12-01
A large number of experiments have asked to what degree human reaching movements can be understood as being close to optimal in a statistical sense. However, little is known about whether these principles are relevant for other classes of movements. Here we analyzed movement in a task that is similar to surfing or snowboarding. Human subjects stand on a force plate that measures their center of pressure. This center of pressure affects the acceleration of a cursor that is displayed in a noisy fashion (as a cloud of dots) on a projection screen while the subject is incentivized to keep the cursor close to a fixed position. We find that salient aspects of observed behavior are well-described by optimal control models where a Bayesian estimation model (Kalman filter) is combined with an optimal controller (either a Linear-Quadratic-Regulator or Bang-bang controller). We find evidence that subjects integrate information over time taking into account uncertainty. However, behavior in this continuous steering task appears to be a highly non-linear function of the visual feedback. While the nervous system appears to implement Bayes-like mechanisms for a full-body, dynamic task, it may additionally take into account the specific costs and constraints of the task.
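The model class can be sketched in miniature: a 1-D double integrator stands in for the cursor dynamics, a Kalman filter estimates the state from noisy position readings, and an LQR acts on the estimate. All noise levels and costs are invented for illustration; this is not the paper's fitted model.

```python
import numpy as np

# Sketch: Kalman estimation + LQR control of a 1-D cursor (dt = 0.1 s).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
C = np.array([[1.0, 0.0]])                 # only noisy position is seen
Q, R = np.diag([1.0, 0.1]), np.array([[0.01]])

P = Q.copy()                               # LQR gain via Riccati iteration
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

rng = np.random.default_rng(2)
x, x_hat, S = np.array([1.0, 0.0]), np.zeros(2), np.eye(2)
W, V = 1e-4 * np.eye(2), np.array([[0.01]])
for _ in range(100):
    u = -(K @ x_hat)[0]                    # act on the *estimated* state
    x = A @ x + B[:, 0] * u + rng.multivariate_normal([0.0, 0.0], W)
    z = C @ x + rng.normal(0.0, np.sqrt(V[0, 0]))
    x_hat = A @ x_hat + B[:, 0] * u        # Kalman predict ...
    S = A @ S @ A.T + W
    G = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)
    x_hat = x_hat + G[:, 0] * (z - C @ x_hat)[0]   # ... and update
    S = (np.eye(2) - G @ C) @ S
print(round(float(x[0]), 3))               # cursor regulated toward target
```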
Gonzales, Matthew J.; Sturgeon, Gregory; Segars, W. Paul; McCulloch, Andrew D.
2016-01-01
Cubic Hermite hexahedral finite element meshes have some well-known advantages over linear tetrahedral finite element meshes in biomechanical and anatomic modeling using isogeometric analysis. These include faster convergence rates as well as the ability to easily model rule-based anatomic features such as cardiac fiber directions. However, it is not possible to create closed complex objects with only regular nodes; these objects require the presence of extraordinary nodes (nodes with 3 or >= 5 adjacent elements in 2D) in the mesh. The presence of extraordinary nodes requires new constraints on the derivatives of adjacent elements to maintain continuity. We have developed a new method that uses an ensemble coordinate frame at the nodes and a local-to-global mapping to maintain continuity. In this paper, we make use of this mapping to create cubic Hermite models of the human ventricles and a four-chamber heart. We also extend the methods to the finite element equations to perform biomechanics simulations using these meshes. The new methods are validated using simple test models and applied to anatomically accurate ventricular meshes with valve annuli to perform complete cardiac cycle simulations. PMID:27182096
Quantum simulation of the spin-boson model with a microwave circuit
NASA Astrophysics Data System (ADS)
Leppäkangas, Juha; Braumüller, Jochen; Hauck, Melanie; Reiner, Jan-Michael; Schwenk, Iris; Zanker, Sebastian; Fritz, Lukas; Ustinov, Alexey V.; Weides, Martin; Marthaler, Michael
2018-05-01
We consider superconducting circuits for the purpose of simulating the spin-boson model. The spin-boson model consists of a single two-level system coupled to bosonic modes. In most cases, the model is considered in a limit where the bosonic modes are sufficiently dense to form a continuous spectral bath. A very well known case is the Ohmic bath, where the density of states grows linearly with the frequency. In the limit of weak coupling or large temperature, this problem can be solved numerically. If the coupling is strong, the bosonic modes can become sufficiently excited to make a classical simulation impossible. Here we discuss how a quantum simulation of this problem can be performed by coupling a superconducting qubit to a set of microwave resonators. We demonstrate a possible implementation of a continuous spectral bath with individual bath resonators coupling strongly to the qubit. Applying a microwave drive scheme potentially allows us to access the strong-coupling regime of the spin-boson model. We discuss how the resulting spin relaxation dynamics with different initialization conditions can be probed by standard qubit-readout techniques from circuit quantum electrodynamics.
Chaotic Motions in the Real Fuzzy Electronic Circuits (Preprint)
2012-12-01
In the research field of secure communications, the original source should be blended with other complex signals; chaotic signals are one of the good candidates for this purpose. The model is based on fuzzy blending of linear system models. Consider a continuous-time nonlinear dynamic system represented by rules of the form: Rule i: IF x1(t) is Mi1 ... and xn(t) is Min ...
A transverse Kelvin-Helmholtz instability in a magnetized plasma
NASA Technical Reports Server (NTRS)
Kintner, P.; Dangelo, N.
1977-01-01
An analysis is conducted of the transverse Kelvin-Helmholtz instability in a magnetized plasma for unstable flute modes. The analysis makes use of a two-fluid model. Details regarding the instability calculation are discussed, taking into account the ion continuity and momentum equations, the solution of a zero-order and a first-order component, and the properties of the solution. It is expected that the linear calculation conducted will apply to situations in which the plasma has experienced no more than a few growth periods.
Pathwise upper semi-continuity of random pullback attractors along the time axis
NASA Astrophysics Data System (ADS)
Cui, Hongyong; Kloeden, Peter E.; Wu, Fuke
2018-07-01
The pullback attractor of a non-autonomous random dynamical system is a time-indexed family of random sets, typically having the form {A_t(·)}_{t∈R}, with each A_t(·) a random set. This paper is concerned with the nature of such time-dependence. It is shown that the upper semi-continuity of the mapping t ↦ A_t(ω) for each fixed ω is equivalent to the uniform compactness of the local union ∪_{s∈I} A_s(ω), where I ⊂ R is compact. Applied to a semi-linear degenerate parabolic equation with additive noise and a wave equation with multiplicative noise, we show that no additional conditions are required to prove the above locally uniform compactness and upper semi-continuity, in which sense the two properties appear to be general properties satisfied by a large number of real models.
A new model integrating short- and long-term aging of copper added to soils
Zeng, Saiqi; Li, Jumei; Wei, Dongpu
2017-01-01
Aging refers to the processes by which the bioavailability/toxicity, isotopic exchangeability, and extractability of metals added to soils decline over time. We studied the characteristics of the aging process of copper (Cu) added to soils and the factors that affect this process. We then developed a semi-mechanistic model to predict the lability of Cu during the aging process, describing the diffusion process with the complementary error function. In previous studies, two semi-mechanistic models predicting the short-term and long-term aging of Cu added to soils were developed separately, with individual descriptions of the diffusion process. In the short-term model, the diffusion process was linearly related to the square root of incubation time (t1/2), and in the long-term model, the diffusion process was linearly related to the natural logarithm of incubation time (ln t). Each model could predict the short-term or the long-term aging process separately, but neither could cover both within a single model. By analyzing and combining the two models, we found that the short- and long-term behaviors of the diffusion process could be described adequately using the complementary error function. The effect of temperature on the diffusion process was incorporated in this model as well. The model can predict the aging process continuously based on four factors: soil pH, incubation time, soil organic matter content and temperature. PMID:28820888
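An illustrative functional form (all parameters invented, not the paper's fit) shows how a single erfc term can span both regimes: erfc(x) ≈ 1 − 2x/√π for small x reproduces the √t short-term behavior, while its rapid decay for large x gives the slow long-term approach to a residual fraction.

```python
import math

# Invented-parameter sketch: labile Cu fraction with the diffusion term
# carried by the complementary error function, as in the combined model.
def labile_fraction(t_days, a=0.35, b=0.60, c=0.08):
    """Isotopically exchangeable fraction after t_days of incubation."""
    return a + b * math.erfc(c * math.sqrt(t_days))

series = [round(labile_fraction(t), 3) for t in (0, 1, 10, 100, 1000)]
print(series)   # monotone decline toward the residual fraction `a`
```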
Left-handed and right-handed U(1) gauge symmetry
NASA Astrophysics Data System (ADS)
Nomura, Takaaki; Okada, Hiroshi
2018-01-01
We propose a model with left-handed and right-handed continuous Abelian gauge symmetry, U(1)L × U(1)R. Three right-handed neutrinos are then naturally required to achieve U(1)R anomaly cancellation, while several mirror fermions are also needed for U(1)L anomaly cancellation. We then formulate the model and discuss the testability of the new gauge interactions at colliders such as the Large Hadron Collider (LHC) and the International Linear Collider (ILC). In particular, the chiral structure of the interactions can be investigated through an analysis of the forward-backward asymmetry based on polarized beams at the ILC.
Simplified large African carnivore density estimators from track indices.
Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J
2016-01-01
The range, population size and trend of large carnivores are important parameters for assessing their status globally and for planning conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. A conventional linear model with intercept may not pass through zero, yet may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept for modelling large African carnivore densities and track indices. We performed simple linear regression with intercept and simple linear regression through the origin, and used the confidence interval for ß in the linear model y = αx + ß, the Standard Error of Estimate, the Mean Squares Residual and the Akaike Information Criterion to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for ß, and the null hypothesis that ß = 0 could not be rejected. All comparisons showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and the Mean Square Residuals. The Akaike Information Criterion showed that the linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results show that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas.
The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km 2 or higher. To improve the current models, we need independent data to validate the models and data to test for non-linear relationship between track indices and true density at low densities.
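The two competing fits can be sketched on synthetic data built around the reported slope of 3.26; the noise level and sample size are invented.

```python
import numpy as np

# Sketch: regression through the origin vs. regression with intercept on
# synthetic track-count data (track density ~ 3.26 x carnivore density).
rng = np.random.default_rng(3)
carnivore = rng.uniform(0.3, 5.0, 30)              # carnivores / 100 km^2
tracks = 3.26 * carnivore + rng.normal(0.0, 0.4, 30)

# Through the origin: slope = sum(x*y) / sum(x^2).
slope_origin = float(carnivore @ tracks / (carnivore @ carnivore))

# Conventional ordinary least squares with intercept, for comparison.
A = np.column_stack([np.ones_like(carnivore), carnivore])
intercept, slope = np.linalg.lstsq(A, tracks, rcond=None)[0]

# When the true line passes through zero, the intercept is redundant and
# the origin model recovers the slope with one fewer parameter.
print(round(slope_origin, 2), round(float(slope), 2), round(float(intercept), 2))
```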
Quality tracing in meat supply chains
Mack, Miriam; Dittmer, Patrick; Veigt, Marius; Kus, Mehmet; Nehmiz, Ulfert; Kreyenschmidt, Judith
2014-01-01
The aim of this study was the development of a quality tracing model for vacuum-packed lamb that is applicable in different meat supply chains. Based on the development of relevant sensory parameters, the predictive model was developed by combining a linear primary model with the Arrhenius model as the secondary model. A process analysis was then conducted to define general requirements for the implementation of the temperature-based model in a meat supply chain. The required hardware and software for continuous temperature monitoring were developed in order to use the model under practical conditions. Furthermore, a decision support tool was elaborated so that the model, in combination with the temperature monitoring equipment, can serve as an effective tool for improving quality and storage management within the meat logistics network. Over the long term, this overall procedure will support the reduction of food waste and improve the resource efficiency of food production. PMID:24797136
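The combined model structure can be sketched as follows; all parameter values (activation energy, reference rate, quality scale) are invented for illustration and are not the study's estimates.

```python
import math

# Hedged sketch: linear primary model for a sensory quality index, with
# the decline rate's temperature dependence given by an Arrhenius
# secondary model relative to a 4 degC reference.
R_GAS = 8.314                   # J/(mol*K)
EA = 80e3                       # assumed activation energy, J/mol
K_REF, T_REF = 0.5, 277.15      # rate (quality units/day) at 4 degC

def decline_rate(temp_c):
    """Arrhenius secondary model for the linear decline rate."""
    t_kelvin = temp_c + 273.15
    return K_REF * math.exp(-EA / R_GAS * (1.0 / t_kelvin - 1.0 / T_REF))

def quality(q0, days, temp_c):
    """Linear primary model at constant storage temperature."""
    return q0 - decline_rate(temp_c) * days

# Warmer storage -> faster linear decline, hence shorter shelf life.
print(round(decline_rate(4.0), 3), round(quality(10.0, 7, 8.0), 2))
```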
ARMA models for earthquake ground motions. Seismic safety margins research program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.
1981-02-01
Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
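The discrete time-domain idea can be sketched for the autoregressive part alone; a full ARMA fit would add moving-average terms and the maximum-likelihood estimation described above, and the parameters here are invented.

```python
import numpy as np

# Sketch: simulate an AR(2) process (the AR part of an ARMA model) and
# recover its parameters from the Yule-Walker equations.
rng = np.random.default_rng(4)
phi1, phi2, n = 0.75, -0.5, 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = phi1 * x[t-1] + phi2 * x[t-2] + rng.normal()

def autocov(series, lag):
    d = series - series.mean()
    return float(np.dot(d[:n-lag], d[lag:]) / n)

r0, r1, r2 = (autocov(x, k) for k in range(3))
phi_hat = np.linalg.solve([[r0, r1], [r1, r0]], [r1, r2])
print(np.round(phi_hat, 2))    # close to the true (0.75, -0.5)
```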
Non relativistic limit of integrable QFT and Lieb-Liniger models
NASA Astrophysics Data System (ADS)
Bastianello, Alvise; De Luca, Andrea; Mussardo, Giuseppe
2016-12-01
In this paper we study a suitable limit of integrable QFT with the aim of identifying continuous non-relativistic integrable models with local interactions. This limit amounts to sending the speed of light c to infinity while simultaneously adjusting the coupling constant g of the quantum field theories so as to keep the energies of the various excitations finite. The QFTs considered here are Toda field theories and the O(N) non-linear sigma model. In both cases the resulting non-relativistic integrable models consist only of Lieb-Liniger models, which are fully decoupled for the Toda theories while symmetrically coupled for the O(N) model. These examples provide explicit evidence of the universality and ubiquity of the Lieb-Liniger models and, at the same time, suggest that these models may exhaust the list of possible non-relativistic integrable theories of bosonic particles with local interactions.
Thermal Linear Expansion of Nine Selected AISI Stainless Steels
1978-04-01
D. Desai and C. Y. Ho, CINDAS Report 51, April 1978. Prepared for the American Iron and Steel Institute, 1000 Sixteenth Street N.W., Washington, D.C. Key words: thermal linear expansion; stainless steels; iron-nickel alloys; iron-chromium alloys. This technical report reviews the available experimental data and
Lee, Dong-Jin; Lee, Sun-Kyu
2015-01-01
This paper presents a design and control system for an XY stage driven by an ultrasonic linear motor. In this study, a hybrid bolt-clamped Langevin-type ultrasonic linear motor was manufactured and then operated at the resonance frequency of the third longitudinal and the sixth lateral modes. These two modes were matched through the preload adjustment and precisely tuned by the frequency matching method based on the impedance matching method with consideration of the different moving weights. The XY stage was evaluated in terms of position and circular motion. To achieve both fine and stable motion, the controller consisted of a nominal characteristics trajectory following (NCTF) control for continuous motion, dead zone compensation, and a switching controller based on the different NCTFs for the macro- and micro-dynamics regimes. The experimental results showed that the developed stage enables positioning and continuous motion with nanometer-level accuracy.
Population response to climate change: linear vs. non-linear modeling approaches.
Ellis, Alicia M; Post, Eric
2004-03-31
Research on the ecological consequences of global climate change has elicited a growing interest in the use of time series analysis to investigate population dynamics in a changing climate. Here, we compare linear and non-linear models describing the contribution of climate to the density fluctuations of the population of wolves on Isle Royale, Michigan, from 1959 to 1999. The non-linear self-excitatory threshold autoregressive (SETAR) model revealed that, due to differences in the strength and nature of density dependence, relatively small and large populations may be differentially affected by future changes in climate. Both linear and non-linear models predict a decrease in the population of wolves under predicted changes in climate. Because specific predictions differed between linear and non-linear models, our study highlights the importance of using non-linear methods that allow the detection of non-linearity in the strength and nature of density dependence. Failure to adopt a non-linear approach to modelling population response to climate change, either exclusively or in addition to linear approaches, may compromise efforts to quantify ecological consequences of future warming.
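A minimal SETAR sketch (with invented parameters, not the fitted wolf-population values) shows the key idea: density dependence switches at a threshold, so small and large populations obey different linear rules.

```python
# Invented-parameter SETAR sketch: a different AR(1) in each regime.
def setar_step(x_prev, threshold=30.0):
    """One-step prediction with regime-dependent density dependence."""
    if x_prev <= threshold:
        return 5.0 + 0.9 * x_prev     # weak density dependence when small
    return 20.0 + 0.4 * x_prev        # strong density dependence when large

traj = [15.0]
for _ in range(50):
    traj.append(setar_step(traj[-1]))
print(round(traj[-1], 2))             # settles at the upper-regime equilibrium
```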
Rapid calculation of acoustic fields from arbitrary continuous-wave sources.
Treeby, Bradley E; Budisky, Jakub; Wise, Elliott S; Jaros, Jiri; Cox, B T
2018-01-01
A Green's function solution is derived for calculating the acoustic field generated by phased array transducers of arbitrary shape when driven by a single frequency continuous wave excitation with spatially varying amplitude and phase. The solution is based on the Green's function for the homogeneous wave equation expressed in the spatial frequency domain or k-space. The temporal convolution integral is solved analytically, and the remaining integrals are expressed in the form of the spatial Fourier transform. This allows the acoustic pressure for all spatial positions to be calculated in a single step using two fast Fourier transforms. The model is demonstrated through several numerical examples, including single element rectangular and spherically focused bowl transducers, and multi-element linear and hemispherical arrays.
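A 1-D toy analogue of the two-FFT evaluation (the paper works in 3-D with a k-space Green's function; the 1-D free-space Helmholtz kernel i e^{ik|x|}/(2k) is a stand-in):

```python
import numpy as np

# Sketch: the steady single-frequency field is the source distribution
# convolved with a Green's function, evaluated everywhere at once with
# two FFTs (one forward, one inverse).
N, dx = 256, 1e-3
k = 2 * np.pi * 1e6 / 1500.0                    # wavenumber at 1 MHz in water
x = (np.arange(N) - N // 2) * dx
g = 1j * np.exp(1j * k * np.abs(x)) / (2 * k)   # 1-D Helmholtz Green's fn

src = np.zeros(N, dtype=complex)
src[N // 2] = 1.0                               # unit point source at x = 0

field = np.fft.ifft(np.fft.fft(src) * np.fft.fft(np.fft.ifftshift(g)))

# Self-check: a point source must reproduce the Green's function itself.
err = float(np.max(np.abs(field - g)))
print(err < 1e-12)
```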
Minimizing Higgs potentials via numerical polynomial homotopy continuation
NASA Astrophysics Data System (ADS)
Maniatis, M.; Mehta, D.
2012-08-01
The study of models with extended Higgs sectors requires minimizing the corresponding Higgs potentials, which is in general very difficult. Here, we apply a recently developed method, called numerical polynomial homotopy continuation (NPHC), which is guaranteed to find all the stationary points of Higgs potentials with polynomial-like non-linearity. The detection of all stationary points reveals the structure of the potential, with maxima, metastable minima and saddle points besides the global minimum. We apply the NPHC method to the most general Higgs potential having two complex Higgs-boson doublets and up to five real Higgs-boson singlets. Moreover, the method is applicable to even more involved potentials. Hence the NPHC method allows one to go far beyond the limits of the Gröbner basis approach.
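A one-variable analogue of "find all stationary points" (NPHC handles the multivariate polynomial systems; for a single field, companion-matrix root finding already delivers every root of V'):

```python
import numpy as np

# Toy double-well potential V(h) = -2 h^2 + h^4: every stationary point
# is a root of V', and polynomial root finding returns all of them.
V = np.polynomial.Polynomial([0.0, 0.0, -2.0, 0.0, 1.0])  # low -> high order
roots = V.deriv().roots()
stationary = np.sort(roots.real[np.abs(roots.imag) < 1e-9])
print(stationary)           # maximum at 0, degenerate global minima at +/-1
```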
NASA Astrophysics Data System (ADS)
Haoxiang, Chen; Qi, Chengzhi; Peng, Liu; Kairui, Li; Aifantis, Elias C.
2015-12-01
The occurrence of alternating damage zones surrounding underground openings (commonly known as zonal disintegration) is treated as a "far from thermodynamic equilibrium" dynamical process or a nonlinear continuous phase transition phenomenon. The approach of internal variable gradient theory with diffusive transport, which may be viewed as a subclass of Landau's phase transition theory, is adopted. The order parameter is identified with an irreversible strain quantity, the gradient of which enters into the expression for the free energy of the rock system. The gradient term stabilizes the material behavior in the post-softening regime, where zonal disintegration occurs. The results of a simplified linearized analysis are confirmed by the numerical solution of the nonlinear problem.
Construction of energy-stable Galerkin reduced order models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalashnikova, Irina; Barone, Matthew Franklin; Arunajatesan, Srinivasan
2013-05-01
This report aims to unify several approaches for building stable projection-based reduced order models (ROMs). Attention is focused on linear time-invariant (LTI) systems. The model reduction procedure consists of two steps: the computation of a reduced basis, and the projection of the governing partial differential equations (PDEs) onto this reduced basis. Two kinds of reduced bases are considered: the proper orthogonal decomposition (POD) basis and the balanced truncation basis. The projection step of the model reduction can be done in two ways: via continuous projection or via discrete projection. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of PDEs using continuous projection is proposed. The idea is to apply to the set of PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The resulting ROM will be energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special weighted L2 inner product, termed the "symmetry inner product". Attention is then turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, a weighted L2 inner product termed the "Lyapunov inner product", is derived. The weighting matrix that defines the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. It is shown that a ROM constructed via discrete projection using the Lyapunov inner product will be energy-stable for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons are also made between the symmetry inner product and the Lyapunov inner product.
The performance of ROMs constructed using these inner products is evaluated on several benchmark test cases.
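The Lyapunov inner product construction described above can be sketched numerically. In this minimal sketch the operator A, the weighting Q, and the dimensions are illustrative choices, not taken from the report; the point is that the Galerkin ROM built in the P-weighted inner product is stable for an arbitrary basis:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, r = 20, 4
M = rng.standard_normal((n, n))
A = (M - M.T) / 2 - 2.0 * np.eye(n)   # stable LTI operator: skew part plus a negative shift
Q = np.eye(n)

# Weighting matrix P of the Lyapunov inner product: A^T P + P A = -Q, with P symmetric positive definite
P = solve_continuous_lyapunov(A.T, -Q)

Phi = np.linalg.qr(rng.standard_normal((n, r)))[0]   # an arbitrary reduced basis
Mr = Phi.T @ P @ Phi                                 # reduced Gram matrix in the P-inner product
Ar = np.linalg.solve(Mr, Phi.T @ P @ A @ Phi)        # Galerkin ROM operator

# Energy stability for any basis: Ar^T Mr + Mr Ar = -Phi^T Q Phi is negative definite,
# so every ROM eigenvalue lies in the left half-plane.
print(np.max(np.linalg.eigvals(Ar).real))
```

Changing the seed or the basis dimension leaves the conclusion unchanged, which is the "for any choice of reduced basis" property claimed in the report.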
An Emerging Theoretical Model of Music Therapy Student Development.
Dvorak, Abbey L; Hernandez-Ruiz, Eugenia; Jang, Sekyung; Kim, Borin; Joseph, Megan; Wells, Kori E
2017-07-01
Music therapy students negotiate a complex relationship with music and its use in clinical work throughout their education and training. This distinct, pervasive, and evolving relationship suggests a developmental process unique to music therapy. The purpose of this grounded theory study was to create a theoretical model of music therapy students' developmental process, beginning with a study within one large Midwestern university. Participants (N = 15) were music therapy students who completed one 60-minute intensive interview, followed by a 20-minute member check meeting. Recorded interviews were transcribed, analyzed, and coded using open and axial coding. The theoretical model that emerged was a six-step sequential developmental progression that included the following themes: (a) Personal Connection, (b) Turning Point, (c) Adjusting Relationship with Music, (d) Growth and Development, (e) Evolution, and (f) Empowerment. The first three steps are linear; development continues in a cyclical process among the last three steps. As the cycle continues, music therapy students continue to grow and develop their skills, leading to increased empowerment, and more specifically, increased self-efficacy and competence. Further exploration of the model is needed to inform educators' and other key stakeholders' understanding of student needs and concerns as they progress through music therapy degree programs. © the American Music Therapy Association 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Morgan, R; Gallagher, M
2012-01-01
In this paper we extend a previously proposed randomized landscape generator in combination with a comparative experimental methodology to study the behavior of continuous metaheuristic optimization algorithms. In particular, we generate two-dimensional landscapes with parameterized, linear ridge structure, and perform pairwise comparisons of algorithms to gain insight into what kind of problems are easy and difficult for one algorithm instance relative to another. We apply this methodology to investigate the specific issue of explicit dependency modeling in simple continuous estimation of distribution algorithms. Experimental results reveal specific examples of landscapes (with certain identifiable features) where dependency modeling is useful, harmful, or has little impact on mean algorithm performance. Heat maps are used to compare algorithm performance over a large number of landscape instances and algorithm trials. Finally, we perform a meta-search in the landscape parameter space to find landscapes which maximize the performance between algorithms. The results are related to some previous intuition about the behavior of these algorithms, but at the same time lead to new insights into the relationship between dependency modeling in EDAs and the structure of the problem landscape. The landscape generator and overall methodology are quite general and extendable and can be used to examine specific features of other algorithms.
Accelerated fatigue testing of dentin-composite bond with continuously increasing load.
Li, Kai; Guo, Jiawen; Li, Yuping; Heo, Young Cheul; Chen, Jihua; Xin, Haitao; Fok, Alex
2017-06-01
The aim of this study was to evaluate an accelerated fatigue test method that used a continuously increasing load for testing the dentin-composite bond strength. Dentin-composite disks (ϕ5mm×2mm) made from bovine incisor roots were subjected to cyclic diametral compression with a continuously increasing load amplitude. Two different load profiles, linear and nonlinear with respect to the number of cycles, were considered. The data were then analyzed by using a probabilistic failure model based on the Weakest-Link Theory and the classical stress-life function, before being transformed to simulate clinical data of direct restorations. All the experimental data could be well fitted with a 2-parameter Weibull function. However, a calibration was required for the effective stress amplitude to account for the difference between static and cyclic loading. Good agreement was then obtained between theory and experiments for both load profiles. The in vitro model also successfully simulated the clinical data. The method presented will allow tooth-composite interfacial fatigue parameters to be determined more efficiently. With suitable calibration, the in vitro model can also be used to assess composite systems in a more clinically relevant manner. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
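The 2-parameter Weibull fitting step used in this record can be sketched as follows. The strength data are simulated and the shape/scale values are illustrative, not the study's measurements; the fit fixes the location at zero so that only shape and scale are estimated:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(42)
shape_true, scale_true = 8.0, 30.0     # hypothetical bond-strength parameters (MPa)
strengths = weibull_min.rvs(shape_true, loc=0, scale=scale_true,
                            size=200, random_state=rng)

# 2-parameter Weibull fit: floc=0 pins the location, leaving shape and scale free
shape_hat, _, scale_hat = weibull_min.fit(strengths, floc=0)
print(shape_hat, scale_hat)

# Weakest-Link style failure probability at a given effective stress amplitude
Pf = weibull_min.cdf(25.0, shape_hat, 0, scale_hat)
print(Pf)
```

In the actual method, the effective stress amplitude entering this distribution would first be calibrated for the static-versus-cyclic loading difference noted in the abstract.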
Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne
2012-01-01
In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882
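The RKHS idea in this record (non-linearity introduced through a kernel on markers rather than on marker effects) can be sketched with a Gaussian kernel and a ridge-type solve. The simulated marker matrix, the bandwidth, and the regularisation value below are assumptions for illustration, and the reported correlation is an in-sample fit, not the cross-validated accuracy the study measures:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 150, 300
X = rng.integers(0, 2, size=(n, p)).astype(float)    # toy biallelic marker matrix (0/1)
beta = rng.standard_normal(p) * 0.1
# phenotype with a small epistatic (non-additive) term that a linear model misses
y = X @ beta + 0.5 * (X[:, 0] * X[:, 1]) + rng.standard_normal(n) * 0.3

# RKHS regression: Gaussian kernel on squared marker distances, ridge-type solve
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-D / D.mean())                            # bandwidth = mean distance (assumed)
alpha = np.linalg.solve(K + 1.0 * np.eye(n), y)      # regularisation lambda = 1 (assumed)
r = np.corrcoef(K @ alpha, y)[0, 1]
print(r)
```

A genuine comparison against Bayesian LASSO or Bayes A/B, as in the study, would tune the bandwidth and lambda and evaluate prediction in held-out environments.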
Wang, Jye; Lin, Wender; Chang, Ling-Hui
2018-01-01
The Vulnerable Elders Survey-13 (VES-13) has been used as a screening tool to identify vulnerable community-dwelling older persons for more in-depth assessment and targeted interventions. Although many studies supported its use in different populations, few have addressed Asian populations. The optimal scaling system for the VES-13 in predicting health outcomes also has not been adequately tested. This study (1) assesses the applicability of the VES-13 to predict the mortality of community-dwelling older persons in Taiwan, (2) identifies the best scaling system for the VES-13 in predicting mortality using generalized additive models (GAMs), and (3) determines whether including covariates, such as socio-demographic factors and common geriatric syndromes, improves model fitting. This retrospective longitudinal cohort study analyzed the data of 2184 community-dwelling persons 65 years old or older from the 2003 wave of the nationwide Taiwan Longitudinal Study on Aging. Cox proportional hazards models and GAMs were used. The VES-13 significantly predicted the mortality of Taiwan's community-dwelling elders. A one-point increase in the VES-13 score raised the risk of death by 26% (hazard ratio, 1.26; 95% confidence interval, 1.21-1.32). The hazard ratio of death increased linearly with each additional VES-13 score point, suggesting that using a continuous scale is appropriate. Inclusion of socio-demographic factors and geriatric syndromes improved the model-fitting. The VES-13 is appropriate for an Asian population. VES-13 scores linearly predict the mortality of this population. Adjusting the weighting of the physical activity items may improve the performance of the VES-13. Copyright © 2017 Elsevier B.V. All rights reserved.
Using an elastic magnifier to increase power output and performance of heart-beat harvesters
NASA Astrophysics Data System (ADS)
Galbier, Antonio C.; Karami, M. Amin
2017-09-01
Embedded piezoelectric energy harvesting (PEH) systems in medical pacemakers have been a growing and innovative research area. The goal of these systems, at present, is to remove the pacemaker battery, which makes up 60%-80% of the unit, and replace it with a sustainable power source. This requires that energy harvesting systems provide sufficient power, 1-3 μW, for operating a pacemaker. The goal of this work is to develop, test, and simulate cantilevered energy harvesters with a linear elastic magnifier (LEM). This research aims to provide insight into the interaction between pacemaker energy harvesters and the heart. Introducing the elastic magnifier into linear and nonlinear systems drives the tip into high-energy orbits and large tip deflections. A continuous nonlinear model is presented for the bistable piezoelectric energy harvesting (BPEH) system and a one-degree-of-freedom linear mass-spring-damper model is presented for the elastic magnifier. Unlike most models, the damping of the elastic magnifier is not treated as negligible. A physical model of the bistable structure was built and coupled to an elastic magnifier; a hydrogel was designed to serve as the LEM in the experimental model. Experimental results show that the BPEH coupled with a LEM (BPEH + LEM) produces more power at certain input frequencies and operates over a larger bandwidth than a PEH, BPEH, and a standard piezoelectric energy harvester with the elastic magnifier (PEH + LEM). Numerical simulations are consistent with these results. It was observed that the system enters high-energy, high-orbit oscillations and that, ultimately, BPEH systems implemented in medical pacemakers can, if designed properly, have enhanced performance if positioned over the heart.
Heumann, Benjamin W.; Walsh, Stephen J.; Verdery, Ashton M.; McDaniel, Phillip M.; Rindfuss, Ronald R.
2012-01-01
Understanding the pattern-process relations of land use/land cover change is an important area of research that provides key insights into human-environment interactions. The suitability or likelihood of occurrence of land use such as agricultural crop types across a human-managed landscape is a central consideration. Recent advances in niche-based, geographic species distribution modeling (SDM) offer a novel approach to understanding land suitability and land use decisions. SDM links species presence-location data with geospatial information and uses machine learning algorithms to develop non-linear and discontinuous species-environment relationships. Here, we apply the MaxEnt (Maximum Entropy) model for land suitability modeling by adapting niche theory to a human-managed landscape. In this article, we use data from an agricultural district in Northeastern Thailand as a case study for examining the relationships between the natural, built, and social environments and the likelihood of crop choice for the commonly grown crops that occur in the Nang Rong District – cassava, heavy rice, and jasmine rice, as well as an emerging crop, fruit trees. Our results indicate that while the natural environment (e.g., elevation and soils) is often the dominant factor in crop likelihood, the likelihood is also influenced by household characteristics, such as household assets and conditions of the neighborhood or built environment. Furthermore, the shape of the land use-environment curves illustrates the non-continuous and non-linear nature of these relationships. This approach demonstrates a novel method of understanding non-linear relationships between land and people. The article concludes with a proposed method for integrating the niche-based rules of land use allocation into a dynamic land use model that can address both allocation and quantity of agricultural crops. PMID:24187378
Smooth individual level covariates adjustment in disease mapping.
Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise
2018-05-01
Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregresssive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to track such nonlinearity between individual level covariate and outcome in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregresssive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects where both individual and group level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Borah, Utpal; Aashranth, B.; Samantaray, Dipti; Kumar, Santosh; Davinci, M. Arvinth; Albert, Shaju K.; Bhaduri, A. K.
2017-10-01
Work hardening, dynamic recovery and dynamic recrystallization (DRX) occurring during hot working of austenitic steel have been extensively studied. Various empirical models describe the nature and effects of these phenomena in a typical framework. However, the typical model is sometimes violated following atypical transitions in deformation mechanisms of the material. To ascertain the nature of these atypical transitions, researchers have intentionally introduced discontinuities in the deformation process, such as interrupting the deformation as in multi-step rolling and abruptly changing the rate of deformation. In this work, we demonstrate that atypical transitions are possible even in conventional single-step, constant strain rate deformation of austenitic steel. Towards this aim, isothermal, constant true strain rate deformation of austenitic steel has been carried out in a temperature range of 1173-1473 K and strain rate range of 0.01-100 s⁻¹. The microstructural response corresponding to each deformation condition is thoroughly investigated. The conventional power-law variation of deformation grain size (D) with peak stress (σp) during DRX is taken as a typical model and experimental data is tested against it. It is shown that σp-D relations exhibit an atypical two-slope linear behaviour rather than a continuous power law relation. Similarly, the reduction in σp with temperature (T) is found to consist of two discrete linear segments. In practical terms, the two linear segments denote two distinct microstructural responses to deformation. As a consequence of this distinction, the typical model breaks down and is unable to completely relate microstructural evolution to flow behaviour. The present work highlights the microstructural mechanisms responsible for this atypical behaviour and suggests strategies to incorporate the two-slope behaviour in the DRX model.
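The two-slope linear behaviour reported here is a segmented regression problem: find the breakpoint separating two linear regimes. A minimal sketch, on synthetic data standing in for the σp-D measurements (the values and the breakpoint are invented for illustration), scans candidate breakpoints and keeps the one with the lowest total least-squares error:

```python
import numpy as np

rng = np.random.default_rng(11)
x = np.linspace(0, 10, 60)
# Two linear regimes meeting at x = 4, plus small measurement noise
y = np.where(x < 4, 2.0 * x, 8.0 + 0.5 * (x - 4)) + rng.normal(0, 0.05, x.size)

def two_slope_fit(x, y):
    """Scan breakpoints; fit an independent line on each side; return the best break."""
    best_sse, best_bp = np.inf, None
    for i in range(5, len(x) - 5):                 # require a few points per segment
        sse = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            A = np.column_stack([xs, np.ones_like(xs)])
            r = ys - A @ np.linalg.lstsq(A, ys, rcond=None)[0]
            sse += r @ r
        if sse < best_sse:
            best_sse, best_bp = sse, x[i]
    return best_bp

bp = two_slope_fit(x, y)
print(bp)
```

Testing the same data against a single power law (a single line in log-log space) and comparing residuals would mirror the paper's check of the typical DRX model against the two-slope alternative.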
Study on sampling of continuous linear system based on generalized Fourier transform
NASA Astrophysics Data System (ADS)
Li, Huiguang
2003-09-01
In the research of signals and systems, the signal's spectrum and the system's frequency characteristic can be discussed through the Fourier Transform (FT) and the Laplace Transform (LT). However, some singular signals, such as the impulse function and the signum signal, satisfy neither Riemann integration nor Lebesgue integration; in mathematics they are called generalized functions. This paper introduces a new definition, the Generalized Fourier Transform (GFT), and discusses generalized functions, the Fourier Transform, and the Laplace Transform within a unified framework. For sampling of a continuous linear system, this paper proposes a new method to judge whether the spectrum will overlap after the generalized Fourier transform (GFT). Causal and non-causal systems are studied, and a sampling method that maintains the system's dynamic performance is presented. The results apply to both ordinary sampling and non-Nyquist sampling, and have practical relevance to research on "discretization of continuous linear systems" and "non-Nyquist sampling of signals and systems." In particular, the condition for ensuring controllability and observability of MIMO continuous systems in references 13 and 14 is an application of the results of this paper.
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in multiplication of a vector by a matrix were rearranged into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second best iteration on data took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
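The solver family named in this record can be sketched in a few lines. This is a generic Jacobi-preconditioned conjugate gradient on a small dense SPD system standing in for the mixed model equations; the real program iterates on data without ever forming the matrix, which this sketch does not attempt:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r                  # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        a = rz / (p @ Ap)
        x += a * p
        r -= a * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, it

rng = np.random.default_rng(7)
G = rng.standard_normal((200, 200))
A = G @ G.T + 200 * np.eye(200)         # SPD system, a stand-in for mixed model equations
b = rng.standard_normal(200)
x, its = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b), its)
```

The paper's three-step matrix-vector rearrangement would replace the `A @ p` product here with a pass over the data files, which is where its per-iteration savings come from.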
Kärkkäinen, Hanni P; Sillanpää, Mikko J
2013-09-04
Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with a censored Gaussian data, while with a binary or an ordinal data the superiority of the threshold model could not be confirmed.
Computing Linear Mathematical Models Of Aircraft
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.
1991-01-01
Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides user with powerful, and flexible, standard, documented, and verified software tool for linearization of mathematical models of aerodynamics of aircraft. Intended for use in software tool to drive linear analysis of stability and design of control laws for aircraft. Capable of both extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and including these effects in linear model of system. Designed to provide easy selection of state, control, and observation variables used in particular model. Also provides flexibility of allowing alternate formulations of both state and observation equations. Written in FORTRAN.
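The core linearization step that a tool like LINEAR performs can be sketched numerically: given nonlinear dynamics x' = f(x, u), extract A = ∂f/∂x and B = ∂f/∂u at an operating point by central differences. The toy longitudinal dynamics below are illustrative only, not LINEAR's aircraft model:

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference linearisation x' ≈ A(x - x0) + B(u - u0) about an operating point."""
    def jac(g, v0):
        m, n = len(g(v0)), len(v0)
        J = np.zeros((m, n))
        for i in range(n):
            d = np.zeros(n)
            d[i] = eps
            J[:, i] = (g(v0 + d) - g(v0 - d)) / (2 * eps)
        return J
    A = jac(lambda x: f(x, u0), x0)
    B = jac(lambda u: f(x0, u), u0)
    return A, B

# Hypothetical point-mass longitudinal dynamics: speed v and flight-path angle gamma
def f(x, u):
    v, gamma = x
    thrust = u[0]
    return np.array([thrust - 0.02 * v**2 - 9.81 * np.sin(gamma),
                     (9.81 / v) * (0.001 * v**2 - np.cos(gamma))])

A, B = linearize(f, np.array([99.0, 0.0]), np.array([196.02]))
print(A)
print(B)
```

A production tool adds exactly the conveniences the abstract lists on top of this step: selectable state, control, and observation variables, and alternate formulations of the state and observation equations.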
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barajas-Solano, David A.; Tartakovsky, A. M.
2016-10-13
We present a hybrid scheme for the coupling of macro and microscale continuum models for reactive contaminant transport in fractured and porous media. The transport model considered is the advection-dispersion equation, subject to linear heterogeneous reactive boundary conditions. The Multiscale Finite Volume method (MsFV) is employed to define an approximation to the microscale concentration field defined in terms of macroscopic or "global" degrees of freedom, together with local interpolator and corrector functions capturing microscopic spatial variability. The macroscopic mass balance relations for the MsFV global degrees of freedom are coupled with the macroscopic model, resulting in a global problem for the simultaneous time-stepping of all macroscopic degrees of freedom throughout the domain. In order to perform the hybrid coupling, the micro and macroscale models are applied over overlapping subdomains of the simulation domain, with the overlap denoted as the handshake subdomain Ω^hs, over which continuity of concentration and transport fluxes between models is enforced. Continuity of concentration is enforced by posing a restriction relation between models over Ω^hs. Continuity of fluxes is enforced by prolongating the macroscopic model fluxes across the boundary of Ω^hs to microscopic resolution. The microscopic interpolator and corrector functions are solutions to local microscopic advection-diffusion problems decoupled from the global degrees of freedom and from each other by virtue of the MsFV decoupling ansatz. The error introduced by the decoupling ansatz is reduced iteratively by the preconditioned GMRES algorithm, with the hybrid MsFV operator serving as the preconditioner.
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. 
Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. 
The programs con
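The weighted least-squares regression at the heart of the calibration described above can be sketched with one modified Gauss-Newton loop. The "process model" here is a hypothetical log drawdown curve, and the weights, damping, and parameter names are assumptions for illustration, not UCODE_2005's interface:

```python
import numpy as np

def gauss_newton(resid, jac, p0, w, iters=20, lam=0.0):
    """Weighted Gauss-Newton: minimise sum w_i * r_i(p)^2; lam adds Marquardt damping."""
    p = p0.astype(float)
    W = np.diag(w)
    for _ in range(iters):
        r, J = resid(p), jac(p)
        H = J.T @ W @ J + lam * np.eye(len(p))
        p = p - np.linalg.solve(H, J.T @ W @ r)   # normal-equations step
    return p

# Hypothetical process model: drawdown s(t) = a*log(t) + b, observed with noise
t = np.linspace(1, 10, 30)
obs = 2.5 * np.log(t) + 1.0 + np.random.default_rng(5).normal(0, 0.01, 30)
resid = lambda p: p[0] * np.log(t) + p[1] - obs
jac = lambda p: np.column_stack([np.log(t), np.ones_like(t)])   # sensitivities

p = gauss_newton(resid, jac, np.array([1.0, 0.0]), np.ones(30))
print(p)
```

In UCODE_2005 the sensitivities in `jac` would come either from the process model itself (e.g. MODFLOW-2000) or from forward/central perturbation, which is exactly the accuracy trade-off the report discusses.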
Coskun, Devrim; Britto, Dev T; Kochian, Leon V; Kronzucker, Herbert J
2016-02-01
Potassium (K⁺) acquisition in roots is generally described by a two-mechanism model, consisting of a saturable, high-affinity transport system (HATS) operating via H⁺/K⁺ symport at low (<1 mM) external [K⁺] ([K⁺]ext), and a linear, low-affinity system (LATS) operating via ion channels at high (>1 mM) [K⁺]ext. Radiotracer measurements in the LATS range indicate that the linear rise in influx continues well beyond nutritionally relevant concentrations (>10 mM), suggesting K⁺ transport may be pushed to extraordinary, and seemingly limitless, capacity. Here, we assess this rise, asking whether LATS measurements faithfully report transmembrane fluxes. Using ⁴²K⁺-isotope and electrophysiological methods in barley, we show that this flux is part of a K⁺-transport cycle through the apoplast, and masks a genuine plasma-membrane influx that displays Michaelis-Menten kinetics. Rapid apoplastic cycling of K⁺ is corroborated by an absence of transmembrane ⁴²K⁺ efflux above 1 mM, and by the efflux kinetics of PTS, an apoplastic tracer. A linear apoplastic influx, masking a saturating transmembrane influx, was also found in Arabidopsis mutants lacking the K⁺ transporters AtHAK5 and AtAKT1. Our work significantly revises the model of K⁺ transport by demonstrating a surprisingly modest upper limit for plasma-membrane influx, and offers insight into sodium transport under salt stress. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
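The flux decomposition argued for in this record (a saturating Michaelis-Menten transmembrane component plus a linear apoplastic component) can be sketched as a curve fit. The concentrations, flux values, and noise level below are simulated for illustration, not the paper's ⁴²K⁺ data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Apparent influx = saturating transmembrane component + linear apoplastic component
def influx(S, Vmax, Km, k):
    return Vmax * S / (Km + S) + k * S

S = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 20.0, 40.0])   # external [K+], mM
rng = np.random.default_rng(9)
v_obs = influx(S, 10.0, 0.2, 0.5) * (1 + rng.normal(0, 0.02, S.size))

(Vmax, Km, k), _ = curve_fit(influx, S, v_obs, p0=[5.0, 0.1, 0.1])
print(Vmax, Km, k)
```

Measured over only the LATS range, the linear term k·S would dominate and the modest Vmax ceiling on the transmembrane flux would be invisible, which is the masking effect the study describes.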
A novel, microscope-based, non-invasive laser Doppler flowmeter for choroidal blood flow assessment
Strohmaier, C; Werkmeister, RM; Bogner, B; Runge, C; Schroedl, F; Brandtner, H; Radner, W; Schmetterer, L; Kiel, JW; Grabner, G; Reitsamer, HA
2015-01-01
Impaired ocular blood flow is involved in the pathogenesis of numerous ocular diseases such as glaucoma and AMD. The purpose of the present study was to introduce and validate a novel, microscope-based, non-invasive laser Doppler flowmeter (NI-LDF) for measurement of blood flow in the choroid. The custom-made NI-LDF was compared with a commercial fibre-optic-based laser Doppler flowmeter (Perimed PF4000). Linearity and stability of the NI-LDF were assessed in a silastic tubing model (i.d. 0.3 mm) at different flow rates (range 0.4–3 ml/h). In a rabbit model, continuous choroidal blood flow measurements were performed with both instruments simultaneously. During blood flow measurements, ocular perfusion pressure was changed by manipulating intraocular pressure via intravitreal saline infusions. The NI-LDF measurement correlated linearly with intraluminal flow rates in the perfused tubing model (r = 0.99, p < 0.05) and remained stable during a 1-hour measurement at a constant flow rate. Rabbit choroidal blood flow measured by the PF4000 and the NI-LDF correlated linearly with each other over the entire measurement range (r = 0.99, y = x·1.01 − 12.35 P.U., p < 0.001). In conclusion, the NI-LDF provides valid, semi-quantitative measurements of capillary blood flow in comparison to an established LDF instrument and is suitable for measurements at the posterior pole of the eye. PMID:21443871
The climate response to five trillion tonnes of carbon
NASA Astrophysics Data System (ADS)
Tokarska, Katarzyna B.; Gillett, Nathan P.; Weaver, Andrew J.; Arora, Vivek K.; Eby, Michael
2016-09-01
Concrete actions to curtail greenhouse gas emissions have so far been limited on a global scale, and therefore the ultimate magnitude of climate change in the absence of further mitigation is an important consideration for climate policy. Estimates of fossil fuel reserves and resources are highly uncertain, and the amount used under a business-as-usual scenario would depend on prevailing economic and technological conditions. In the absence of global mitigation actions, five trillion tonnes of carbon (5 EgC), corresponding to the lower end of the range of estimates of the total fossil fuel resource, is often cited as an estimate of total cumulative emissions. An approximately linear relationship between global warming and cumulative CO2 emissions is known to hold up to 2 EgC emissions on decadal to centennial timescales; however, in some simple climate models the predicted warming at higher cumulative emissions is less than that predicted by such a linear relationship. Here, using simulations from four comprehensive Earth system models, we demonstrate that CO2-attributable warming continues to increase approximately linearly up to 5 EgC emissions. These models simulate, in response to 5 EgC of CO2 emissions, global mean warming of 6.4-9.5 °C, mean Arctic warming of 14.7-19.5 °C, and mean regional precipitation increases by more than a factor of four. These results indicate that the unregulated exploitation of the fossil fuel resource could ultimately result in considerably more profound climate changes than previously suggested.
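The linearity result implies a transient climate response to cumulative emissions (TCRE) that can be read off directly from the quoted ranges. This back-of-envelope check uses only numbers stated in the abstract; the function name is illustrative.

```python
def tcre(total_warming_c, cumulative_emissions_egc):
    """Warming per EgC (1 EgC = 1000 GtC = one trillion tonnes of
    carbon), assuming the approximately linear warming-vs-cumulative-
    emissions relationship holds all the way to 5 EgC."""
    return total_warming_c / cumulative_emissions_egc

# Global mean warming of 6.4-9.5 degC in response to 5 EgC implies a
# TCRE of roughly 1.3-1.9 degC per EgC.
low, high = tcre(6.4, 5.0), tcre(9.5, 5.0)
```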
Modelling female fertility traits in beef cattle using linear and non-linear models.
Naya, H; Peñagaricano, F; Urioste, J I
2017-06-01
Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of the herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we applied linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models to three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better adjustment than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h2 < 0.08 and r < 0.13 for linear models; h2 > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, the endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate for describing CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.
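The heritability and repeatability contrasts reported above are ratios of variance components once a model has estimated them. A minimal sketch of those ratios follows; the component values in the test are hypothetical.

```python
def h2_and_repeatability(var_additive, var_perm_env, var_residual):
    """Heritability h2 = sigma_a^2 / sigma_p^2 and repeatability
    r = (sigma_a^2 + sigma_pe^2) / sigma_p^2, where the phenotypic
    variance sigma_p^2 is the sum of the additive-genetic, permanent-
    environment and residual components."""
    var_p = var_additive + var_perm_env + var_residual
    return var_additive / var_p, (var_additive + var_perm_env) / var_p
```

Because the components sit on different scales under Gaussian, Poisson and probit models, the same data can yield quite different h2 and r values across model families, which is the pattern the study reports.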
Emergent properties of gene evolution: Species as attractors in phenotypic space
NASA Astrophysics Data System (ADS)
Reuveni, Eli; Giuliani, Alessandro
2012-02-01
The question of how the observed discrete character of the phenotype emerges from a continuous genetic distance metric is the core argument of two contrasting evolutionary theories: punctuated equilibrium (stable evolution scattered with saltations in the phenotype) and phyletic gradualism (smooth and linear evolution of the phenotype). Identifying phenotypic saltation at the molecular level is critical to support the first model of evolution. We have used DNA sequences of ∼1300 genes from 6 isolated populations of the budding yeast Saccharomyces cerevisiae. We demonstrate that while the equivalent measure of genetic distance shows a continuum of lineage distances with no evidence of discrete states, the phenotypic space exhibits only two (discrete) possible states that can be associated with a saltation of the species phenotype. The fact that such a saltation spans a large fraction of the genome while the genetic distance remains continuous is proof of the concept that the genotype-phenotype relation is not univocal, and may have severe implications when looking for disease-related genes and mutations. We interpret this finding by analogy with attractor-like dynamics and show that punctuated equilibrium can be explained in the framework of non-linear dynamical systems.
Long-term forecasting of internet backbone traffic.
Papagiannaki, Konstantina; Taft, Nina; Zhang, Zhi-Li; Diot, Christophe
2005-09-01
We introduce a methodology to predict when and where link additions/upgrades have to take place in an Internet protocol (IP) backbone network. Using simple network management protocol (SNMP) statistics, collected continuously since 1999, we compute aggregate demand between any two adjacent points of presence (PoPs) and look at its evolution at time scales larger than 1 h. We show that IP backbone traffic exhibits visible long term trends, strong periodicities, and variability at multiple time scales. Our methodology relies on the wavelet multiresolution analysis (MRA) and linear time series models. Using wavelet MRA, we smooth the collected measurements until we identify the overall long-term trend. The fluctuations around the obtained trend are further analyzed at multiple time scales. We show that the largest amount of variability in the original signal is due to its fluctuations at the 12-h time scale. We model inter-PoP aggregate demand as a multiple linear regression model, consisting of the two identified components. We show that this model accounts for 98% of the total energy in the original signal, while explaining 90% of its variance. Weekly approximations of those components can be accurately modeled with low-order autoregressive integrated moving average (ARIMA) models. We show that forecasting the long term trend and the fluctuations of the traffic at the 12-h time scale yields accurate estimates for at least 6 months in the future.
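The two-component idea above, a slowly varying trend plus a dominant 12-hour fluctuation, can be sketched with a moving-average trend (a crude stand-in for the wavelet MRA approximation) and a single-harmonic projection. Function names and the toy signal are illustrative, not the paper's code.

```python
import math

def moving_average_trend(x, window):
    """Smooth a series with a centred moving average, standing in for
    the coarsest-scale wavelet MRA approximation of the long-term trend."""
    half = window // 2
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def harmonic_coefficients(residual, period):
    """Project the detrended signal onto one sinusoid of the given
    period (e.g. 12 samples for the 12-h cycle on hourly data),
    returning the cosine and sine coefficients."""
    n = len(residual)
    a = 2.0 / n * sum(r * math.cos(2 * math.pi * i / period)
                      for i, r in enumerate(residual))
    b = 2.0 / n * sum(r * math.sin(2 * math.pi * i / period)
                      for i, r in enumerate(residual))
    return a, b
```

A regression of the original series on the extracted trend and the fitted 12-h harmonic then plays the role of the two-component multiple linear regression model described in the abstract.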
Chaotic sources of noise in machine acoustics
NASA Astrophysics Data System (ADS)
Moon, F. C., Prof.; Broschart, Dipl.-Ing. T.
1994-05-01
In this paper a model is posited for deterministic, random-like noise in machines with sliding rigid parts impacting linear continuous machine structures. Such problems occur in gear transmission systems. A mathematical model is proposed to explain the random-like structure-borne and air-borne noise from such systems when the input is a periodic deterministic excitation of the quasi-rigid impacting parts. An experimental study is presented which supports the model. A thin circular plate is impacted by a chaotically vibrating mass excited by a sinusoidal moving base. The results suggest that the plate vibrations might be predicted by replacing the chaotic vibrating mass with a probabilistic forcing function. Prechaotic vibrations of the impacting mass show classical period doubling phenomena.
Mechanical testing and modelling of carbon-carbon composites for aircraft disc brakes
NASA Astrophysics Data System (ADS)
Bradley, Luke R.
The objective of this study is to improve the understanding of the stress distributions and failure mechanisms experienced by carbon-carbon composite aircraft brake discs using finite element (FE) analyses. The project has been carried out in association with Dunlop Aerospace as an EPSRC CASE studentship. It therefore focuses on the carbon-carbon composite brake disc material produced by Dunlop Aerospace, although it is envisaged that the approach will have broader applications for modelling and mechanical testing of carbon-carbon composites in general. The disc brake material is a laminated carbon-carbon composite comprised of poly(acrylonitrile) (PAN) derived carbon fibres in a chemical vapour infiltration (CVI) deposited matrix, in which the reinforcement is present in both continuous fibre and chopped fibre forms. To pave the way for the finite element analysis, a comprehensive study of the mechanical properties of the carbon-carbon composite material was carried out. This focused largely, but not entirely, on model composite materials formulated using structural elements of the disc brake material. The strengths and moduli of these materials were measured in tension, compression and shear in several orientations. It was found that the stress-strain behaviour of the materials were linear in directions where there was some continuous fibre reinforcement, but non-linear when this was not the case. In all orientations, some degree of non-linearity was observed in the shear stress-strain response of the materials. However, this non-linearity was generally not large enough to pose a problem for the estimation of elastic moduli. Evidence was found for negative Poisson's ratio behaviour in some orientations of the material in tension. Additionally, the through-thickness properties of the composite, including interlaminar shear strength, were shown to be positively related to bulk density. 
The in-plane properties were mostly unrelated to bulk density over the range of densities of the tested specimens. Two types of FE model were developed using a commercially available program. The first type was designed to analyse the model composite materials for comparison with mechanical test data for the purpose of validation of the FE model. Elastic moduli predicted by this type of FE model showed good agreement with the experimentally measured elastic moduli of the model composite materials. This result suggested that the use of layered FE models, which rely upon an isostrain assumption between the layers, can be useful in predicting the elastic properties of different lay-ups of the disc brake material. The second type of FE model analysed disc brake segments, using the experimentally measured bulk mechanical properties of the disc brake material. This FE model approximated the material as a continuum with in-plane isotropy but with different properties in the through-thickness direction. In order to validate this modelling approach, the results of the FE analysis were compared with mechanical tests on disc brake segments, which were loaded by their drive tenons in a manner intended to simulate in-service loading. The FE model showed good agreement with in-plane strains measured on the disc tenon face close to the swept area of the disc, but predicted significantly higher strains than those experimentally measured on the tenon fillet curve. This discrepancy was attributed to the existence of a steep strain gradient on the fillet curve.
NASA Astrophysics Data System (ADS)
Peña, C.; Heidbach, O.; Moreno, M.; Li, S.; Bedford, J. R.; Oncken, O.
2017-12-01
The surface deformation associated with the 2010 Mw 8.8 Maule earthquake, Chile, was recorded in great detail before, during and after the event. The quality of the post-seismic continuous GPS time series has facilitated a number of studies that have modelled the horizontal signal with a combination of after-slip and viscoelastic relaxation using linear Newtonian rheology. Li et al. (2017, GRL), one of the first studies that also looked into the details of the vertical post-seismic signal, showed that a homogeneous viscosity structure cannot explain the vertical signal well, and that a heterogeneous viscosity distribution produces a better fit. It is, however, difficult to argue why viscous rock properties should change significantly with distance to the trench. Thus, here we investigate whether a non-linear, strain-rate-dependent power-law rheology can fit the post-seismic signal in all three components, in particular the vertical one. We use the first 6 years of post-seismic cGPS data and investigate, with a 2D geomechanical-numerical model along a profile at 36°S, whether non-linear creep can explain the deformation signal using reasonable rock properties and a temperature field derived for this region from Springer (1999). The 2D model geometry considers the slab as well as the Moho geometry. Our results show that with our model the post-seismic surface deformation signal can be reproduced as well as in the study of Li et al. (2017). These findings suggest that the largest deformations are produced by dislocation creep. Such a process would take place below the Andes (~40 km depth) at the interface between the deeper, colder crust and the olivine-rich upper mantle, where the lowest effective viscosity results from the relaxation of tensional stresses imposed by the co-seismic displacement. 
Additionally, we present preliminary results from a 3D geomechanical-numerical model with the same rheology that provides more details of the post-seismic deformation especially along strike the subduction zone.
Adjoint Method and Predictive Control for 1-D Flow in NASA Ames 11-Foot Transonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Nguyen, Nhan; Ardema, Mark
2006-01-01
This paper describes a modeling method and a new optimal control approach to investigate a Mach number control problem for the NASA Ames 11-Foot Transonic Wind Tunnel. The flow in the wind tunnel is modeled by the 1-D unsteady Euler equations whose boundary conditions prescribe a controlling action by a compressor. The boundary control inputs to the compressor are in turn controlled by a drive motor system and an inlet guide vane system whose dynamics are modeled by ordinary differential equations. The resulting Euler equations are thus coupled to the ordinary differential equations via the boundary conditions. Optimality conditions are established by an adjoint method and are used to develop a model predictive linear-quadratic optimal control for regulating the Mach number due to a test model disturbance during a continuous pitch
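The linear-quadratic regulation step in the approach above can be illustrated in its simplest scalar form. This Riccati fixed-point recursion is a generic textbook sketch, not the paper's adjoint-based predictive controller.

```python
def riccati_scalar(a, b, q, r, iters=200):
    """Iterate the scalar discrete-time Riccati recursion to a fixed
    point p, for dynamics x+ = a*x + b*u and stage cost q*x^2 + r*u^2."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return p

def lq_gain(a, b, q, r):
    """Optimal state-feedback gain k in the control law u = -k*x."""
    p = riccati_scalar(a, b, q, r)
    return a * b * p / (r + b * b * p)
```

For a = b = q = r = 1 the fixed point is the golden ratio (p ≈ 1.618) and the resulting gain is about 0.618, a standard sanity check for LQ implementations.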
Slow walking model for children with multiple disabilities via an application of humanoid robot
NASA Astrophysics Data System (ADS)
Wang, ZeFeng; Peyrodie, Laurent; Cao, Hua; Agnani, Olivier; Watelain, Eric; Wang, HaoPing
2016-02-01
Research on walk training for children with multiple disabilities is presented. Orthosis-aided walking for children with multiple disabilities such as cerebral palsy continues to be a clinical and technological challenge. In order to reduce pain and improve treatment strategies, an intermediate structure, the humanoid robot NAO, is proposed as a test platform for studying walking training models, to be transferred to future special exoskeletons for children. A suitable and stable walking model is proposed for walk training, and is simulated and tested on NAO. A comparative study of zero-moment-point (ZMP) support polygons and energy consumption validates the model as more stable than the conventional NAO gait. According to the direction variation of the centre of mass and the slopes of linear-regression knee/ankle angles, the Slow Walk model faithfully emulates the gait pattern of children.
User's manual for interactive LINEAR: A FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Patterson, Brian P.
1988-01-01
An interactive FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models is documented in this report. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
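Numerical linearization of this kind can be sketched with central differences about an operating (trim) point. The generic Jacobian routine below illustrates the idea; it is not LINEAR's actual FORTRAN implementation, and the test system is a made-up example.

```python
def linearize(f, x0, u0, eps=1e-6):
    """Build state-space matrices A = df/dx and B = df/du about the
    operating point (x0, u0) by central differences, where f(x, u)
    returns the state-derivative vector of the nonlinear model."""
    n = len(x0)

    def column(g, vec, j):
        # perturb component j up and down, difference the outputs
        up, dn = list(vec), list(vec)
        up[j] += eps
        dn[j] -= eps
        gu, gd = g(up), g(dn)
        return [(gu[i] - gd[i]) / (2 * eps) for i in range(n)]

    a_cols = [column(lambda x: f(x, u0), x0, j) for j in range(n)]
    b_cols = [column(lambda u: f(x0, u), u0, j) for j in range(len(u0))]
    # transpose the column lists into row-major matrices
    A = [[a_cols[j][i] for j in range(n)] for i in range(n)]
    B = [[b_cols[j][i] for j in range(len(u0))] for i in range(n)]
    return A, B
```

For the already-linear toy system f(x, u) = [x1, -2·x0 + 0.5·u0] the routine recovers A = [[0, 1], [-2, 0]] and B = [[0], [0.5]] to within the differencing error.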
Linear mixed-effects modeling approach to FMRI group analysis
Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.
2013-01-01
Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. 
The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. PMID:23376789
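The ICC mentioned above is, in the simplest balanced case, just a ratio of variance components once the LME has estimated them. A minimal sketch follows; the component values in the test are hypothetical.

```python
def intraclass_correlation(var_random, var_residual):
    """ICC = between-unit variance over total variance, computed from
    the random-effect and residual variance components of an LME fit.
    Values near 1 mean observations within a unit are highly similar."""
    return var_random / (var_random + var_residual)
```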
Linear mixed-effects modeling approach to FMRI group analysis.
Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W
2013-06-01
Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. 
The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. Published by Elsevier Inc.
Constitutive modelling of creep in a long fiber random glass mat thermoplastic composite
NASA Astrophysics Data System (ADS)
Dasappa, Prasad
The primary objective of this research is to characterize and model the creep behaviour of Glass Mat Thermoplastic (GMT) composites under thermo-mechanical loads. In addition, tensile testing has been performed to study the variability in mechanical properties. The thermo-physical properties of the polypropylene matrix, including crystallinity level, transitions and the variation of the stiffness with temperature, have also been determined. In this work, the creep of a long fibre GMT composite has been investigated for a relatively wide range of stresses from 5 to 80 MPa and temperatures from 25 to 90°C. The higher stress limit is approximately 90% of the nominal tensile strength of the material. A Design of Experiments (ANOVA) statistical method was applied to determine the effects of stress and temperature in the random mat material, which is known for large experimental scatter. Two sets of creep tests were conducted. First, preliminary short-term creep tests consisting of 30 minutes of creep followed by recovery were carried out over a wide range of stresses and temperatures. These tests were carried out to determine the linear viscoelastic region of the material. From these tests, the material was found to be linear viscoelastic up to 20 MPa at room temperature, and considerable non-linearities were observed with both stress and temperature. Using time-temperature superposition (TTS), a long-term master curve for creep compliance for up to 185 years at room temperature has been obtained. Further, viscoplastic strains developed in these tests, indicating the need for a non-linear viscoelastic-viscoplastic constitutive model. The second set of creep tests was performed to develop a general non-linear viscoelastic-viscoplastic constitutive model. Long-term creep-recovery tests consisting of 1 day of creep followed by recovery have been conducted over the stress range between 20 and 70 MPa at four temperatures: 25°C, 40°C, 60°C and 80°C. 
Findley's model, which is the reduced form of the Schapery non-linear viscoelastic model, was found to be sufficient to model the viscoelastic behaviour. The viscoplastic strains were modeled using the Zapas and Crissman viscoplastic model. A parameter estimation method which isolates the viscoelastic component from the viscoplastic part of the non-linear model has been developed. The non-linear parameters in Findley's non-linear viscoelastic model have been found to be dependent on both stress and temperature and have been modeled as a product of functions of stress and temperature. The viscoplastic behaviour for temperatures up to 40°C was similar, indicating similar damage mechanisms. Moreover, the development of viscoplastic strains at 20 and 30 MPa was similar over the entire temperature range considered, implying similar damage mechanisms. It is further recommended that the material should not be used at temperatures greater than 60°C at stresses over 50 MPa. To further study the viscoplastic behaviour of the continuous fibre glass mat thermoplastic composite at room temperature, multiple creep-recovery experiments of increasing durations between 1 and 24 hours have been conducted on a single specimen. The purpose of these tests was to experimentally and numerically decouple the viscoplastic strains from the total creep response. This enabled the characterization of the evolution of viscoplastic strains as a function of time, stress and loading cycles, and also made it possible to correlate the development of viscoplastic strains with the progression of failure mechanisms such as interfacial debonding and matrix cracking, which were captured in situ. A viscoplastic model developed from partial data analysis, as proposed by Nordin, had excellent agreement with experimental results for all stresses and times considered. Furthermore, the viscoplastic strain development is accelerated with increasing number of cycles at higher stress levels. 
These tests further validate the technique proposed for numerical separation of viscoplastic strains employed in obtaining the non-linear viscoelastic viscoplastic model parameters. These tests also indicate that the viscoelastic strains during creep are affected by the previous viscoplastic strain history. (Abstract shortened by UMI.)
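The viscoelastic-viscoplastic split described above can be sketched with Findley's power law plus a Zapas-Crissman-style power-law viscoplastic term. The parameter values used in the test are hypothetical placeholders, not the fitted values from this work.

```python
def findley_strain(t, eps0, amp, n):
    """Findley power-law viscoelastic creep strain: eps0 + amp * t**n,
    with eps0 the instantaneous strain and n the time exponent."""
    return eps0 + amp * t ** n

def total_creep_strain(t, eps0, amp, n, c, m):
    """Total strain = viscoelastic (Findley) part + a power-law
    viscoplastic part c * t**m in the spirit of Zapas and Crissman;
    only the viscoplastic part persists after full recovery."""
    return findley_strain(t, eps0, amp, n) + c * t ** m
```

Fitting the recovery portion of a creep-recovery test isolates the persistent viscoplastic term, which is the numerical separation technique the abstract describes.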
Bistable energy harvesting enhancement with an auxiliary linear oscillator
NASA Astrophysics Data System (ADS)
Harne, R. L.; Thota, M.; Wang, K. W.
2013-12-01
Recent work has indicated that linear vibrational energy harvesters with an appended degree-of-freedom (DOF) may be advantageous for introducing new dynamic forms to extend the operational bandwidth. Given the additional interest in bistable harvester designs, which exhibit a propitious snap-through effect from one stable state to the other, it is a logical extension to explore the influence of an added DOF on a bistable system. However, bistable snap-through is not a resonant phenomenon, which tempers the presumption that the dynamics induced by an additional DOF on bistable designs would inherently be beneficial, as they are for linear systems. This paper presents two analytical formulations to assess the fundamental and superharmonic steady-state dynamics of an excited bistable energy harvester to which is attached an auxiliary linear oscillator. From an energy harvesting perspective, the model predicts that the additional linear DOF uniformly amplifies the bistable harvester response magnitude and generated power for excitation frequencies less than the attachment's resonance, while improved power density spans a bandwidth below this frequency. Analyses predict bandwidths having co-existent responses composed of a unique proportion of fundamental and superharmonic dynamics. Experiments validate key analytical predictions and observe the ability of the coupled system to develop an advantageous multi-harmonic interwell response when the initial conditions are insufficient for continuous high-energy orbit at the excitation frequency. Overall, the addition of an auxiliary linear oscillator to a bistable harvester is found to be an effective means of enhancing the energy harvesting performance and robustness.
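The snap-through behaviour referenced above arises from a double-well potential. This sketch shows the canonical quartic form; the coefficients are illustrative, not taken from the paper.

```python
def double_well_potential(x, k=1.0, a=1.0):
    """Bistable potential U(x) = -k*x**2/2 + a*x**4/4, with stable
    states at x = +/-sqrt(k/a) and an unstable equilibrium at x = 0;
    snap-through is the jump of the mass from one well to the other."""
    return -0.5 * k * x ** 2 + 0.25 * a * x ** 4
```

For k = a = 1 the wells sit symmetrically at x = ±1, each 0.25 energy units below the barrier at x = 0, which is why sufficiently energetic excitation is needed to sustain interwell (high-energy-orbit) motion.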
Development and validation of a general purpose linearization program for rigid aircraft models
NASA Technical Reports Server (NTRS)
Duke, E. L.; Antoniewicz, R. F.
1985-01-01
A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high performance aircraft.
Gawthrop, Peter J.; Lakie, Martin; Loram, Ian D.
2017-01-01
Key points:
- A human controlling an external system is described most easily and conventionally as linearly and continuously translating sensory input to motor output, with the inevitable output remnant, non-linearly related to the input, attributed to sensorimotor noise.
- Recent experiments show sustained manual tracking involves repeated refractoriness (insensitivity to sensory information for a certain duration), with the temporary 200-500 ms periods of irresponsiveness to sensory input making the control process intrinsically non-linear.
- This evidence calls for re-examination of the extent to which random sensorimotor noise is required to explain the non-linear remnant.
- This investigation of manual tracking shows how the full motor output (linear component and remnant) can be explained mechanistically by aperiodic sampling triggered by prediction error thresholds.
- Whereas broadband physiological noise is general to all processes, aperiodic sampling is associated with sensorimotor decision making within specific frontal, striatal and parietal networks; we conclude that manual tracking utilises such slow serial decision making pathways up to several times per second.

Abstract: The human operator is described adequately by linear translation of sensory input to motor output. Motor output also always includes a non-linear remnant resulting from random sensorimotor noise from multiple sources, and from non-linear input transformations, for example thresholds or refractory periods. Recent evidence showed that manual tracking incurs substantial, serial refractoriness (insensitivity to sensory information of 350 and 550 ms for 1st and 2nd order systems respectively). Our two questions are: (i) What are the comparative merits of explaining the non-linear remnant using noise or non-linear transformations? (ii) Can non-linear transformations represent serial motor decision making within the sensorimotor feedback loop intrinsic to tracking? 
Twelve participants (instructed to act in three prescribed ways) manually controlled two systems (1st and 2nd order) subject to a periodic multi-sine disturbance. Joystick power was analysed using three models: continuous linear control (CC), continuous linear control with a calculated noise spectrum (CCN), and intermittent control with aperiodic sampling triggered by prediction error thresholds (IC). Unlike the linear mechanism, the intermittent control mechanism explained the majority of total power (linear and remnant) (77–87% vs. 8–48%, IC vs. CC). Between conditions, IC used thresholds and distributions of open-loop intervals consistent with, respectively, the instructions and previously measured, model-independent values, whereas CCN required changes in the noise spectrum deviating from broadband, signal-dependent noise. We conclude that manual tracking uses open-loop predictive control with aperiodic sampling. Because aperiodic sampling is inherent to serial decision making within previously identified, specific frontal, striatal and parietal networks, we suggest that these structures are intimately involved in visuo-manual tracking. PMID:28833126
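The trigger rule at the heart of the IC model above (hold an open-loop command and resample only when the prediction error crosses a threshold) can be sketched on a toy first-order plant. The plant, gains and threshold below are hypothetical illustrations, not the paper's fitted model:

```python
def track(disturbance, threshold=0.3, dt=0.01, gain=0.8):
    """First-order plant x' = -x + u + d under event-triggered control:
    a new sensory sample is taken, and the open-loop command recomputed,
    only when the internal prediction error exceeds `threshold`."""
    x = 0.0       # plant state
    x_pred = 0.0  # controller's internal prediction of the state
    u = 0.0       # held open-loop command
    samples = 0   # number of aperiodic sensory samples taken
    for d in disturbance:
        if abs(x - x_pred) > threshold:  # prediction-error trigger
            x_pred = x                   # re-anchor the prediction
            u = -gain * x                # recompute the command
            samples += 1
        x += dt * (-x + u + d)           # true plant sees the disturbance
        x_pred += dt * (-x_pred + u)     # internal prediction does not
    return x, samples
```

Run against a constant disturbance, the controller samples only a handful of times rather than at every step, which is the sense in which the control process is aperiodic and intrinsically non-linear.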
Li, Peng; Ji, Haoran; Wang, Chengshan; ...
2017-03-22
The increasing penetration of distributed generators (DGs) exacerbates the risk of voltage violations in active distribution networks (ADNs). Conventional voltage regulation devices, limited by their physical constraints, are hard-pressed to meet the requirement of real-time voltage and VAR control (VVC) with high precision when DGs fluctuate frequently. However, the soft open point (SOP), a flexible power electronic device, can be used as a continuous reactive power source to realize fast voltage regulation. Considering the cooperation of the SOP and multiple regulation devices, this paper proposes a coordinated VVC method based on SOPs for ADNs. First, a time-series model of coordinated VVC is developed to minimize operation costs and eliminate voltage violations in ADNs. Then, by applying linearization and conic relaxation, the original nonconvex mixed-integer non-linear optimization model is converted into a mixed-integer second-order cone programming (MISOCP) model that can be solved efficiently enough to meet the rapidity requirement of voltage regulation. Case studies on the IEEE 33-node and IEEE 123-node systems illustrate the effectiveness of the proposed method.
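The conic relaxation step can be illustrated in isolation. In branch-flow formulations of distribution networks, the nonconvex equality linking squared current, squared voltage and branch power flows is relaxed to a rotated second-order cone inequality, which an SOCP solver can handle. A minimal sketch with hypothetical per-unit values (not the paper's full MISOCP model):

```python
def soc_residual(P, Q, v, ell):
    """Residual ell*v - (P^2 + Q^2) of the rotated second-order cone
    constraint that relaxes the nonconvex branch-flow equality
    ell * v == P^2 + Q^2 (P, Q: branch power flows; v: squared voltage;
    ell: squared current, all per-unit).  A nonnegative residual is
    feasible for the SOCP; zero residual means the relaxation is exact
    on that branch."""
    return ell * v - (P ** 2 + Q ** 2)

# hypothetical per-unit branch values
P, Q, v = 0.8, 0.3, 1.02
ell_exact = (P ** 2 + Q ** 2) / v  # current satisfying the original equality
```

Checking this residual on the solved SOCP is the standard way to verify, after the fact, that the relaxation recovered a physically meaningful operating point.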
NASA Astrophysics Data System (ADS)
Seti, Julia; Tkach, Mykola; Voitsekhivska, Oxana
2018-03-01
The exact solutions of the Schrödinger equation for a double-barrier open plane semiconductor nanostructure are obtained using two different approaches: the model of a rectangular potential profile and a continuous position-dependent effective mass of the electron. The transmission coefficient and scattering matrix are calculated for the double-barrier nanostructure. The resonance energies and resonance widths of the electron quasi-stationary states are analyzed as functions of the size of the near-interface region between wells and barriers, where the effective mass depends linearly on the coordinate. It is established that, in both methods, an increasing size affects the spectral characteristics of the states in a qualitatively similar way, shifting the resonance energies into the low- or high-energy region and increasing the resonance widths. It is shown that the relative difference between the resonance energies and widths of a given state, obtained in the position-dependent effective mass model and in the widespread abrupt model, does not exceed 0.5% and 5%, respectively, over the physically correct range of near-interface sizes, independently of the other geometrical characteristics of the structure.
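For the abrupt limit discussed above (constant effective mass, rectangular profile), the transmission coefficient of a double-barrier structure can be computed with a standard plane-wave transfer-matrix sketch. The barrier heights and widths below are hypothetical GaAs-like values, and the position-dependent-mass correction is not included:

```python
import cmath

HBAR2_2M = 3.81  # hbar^2/(2*m0) in eV*Angstrom^2 (free-electron mass assumed)

def transmission(E, layers):
    """Transmission of an electron at energy E (eV) through a
    piecewise-constant potential: `layers` is a list of (V_eV, width_A)
    slabs between two semi-infinite regions at V = 0."""
    def wavevector(V):
        return cmath.sqrt((E - V) / HBAR2_2M)  # imaginary inside a barrier

    ks = [wavevector(0.0)] + [wavevector(V) for V, _ in layers] + [wavevector(0.0)]
    xs = [0.0]
    for _, w in layers:
        xs.append(xs[-1] + w)          # interface positions

    M = [[1.0, 0.0], [0.0, 1.0]]       # total transfer matrix (identity)
    for j in range(len(ks) - 1):
        k1, k2, x0 = ks[j], ks[j + 1], xs[j]
        r = k2 / k1                    # wavevector ratio at the interface
        m = [[0.5 * (1 + r) * cmath.exp(1j * (k2 - k1) * x0),
              0.5 * (1 - r) * cmath.exp(-1j * (k2 + k1) * x0)],
             [0.5 * (1 - r) * cmath.exp(1j * (k2 + k1) * x0),
              0.5 * (1 + r) * cmath.exp(-1j * (k2 - k1) * x0)]]
        M = [[M[0][0] * m[0][0] + M[0][1] * m[1][0],
              M[0][0] * m[0][1] + M[0][1] * m[1][1]],
             [M[1][0] * m[0][0] + M[1][1] * m[1][0],
              M[1][0] * m[0][1] + M[1][1] * m[1][1]]]
    return 1.0 / abs(M[0][0]) ** 2     # outer wavevectors are equal

# hypothetical double barrier: two 0.3 eV, 30 A barriers around a 60 A well
T = transmission(0.05, [(0.3, 30.0), (0.0, 60.0), (0.3, 30.0)])
```

Scanning `transmission` over energy reveals the resonance peaks whose positions and widths the paper compares between the two mass models.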
A Flight Control System for Small Unmanned Aerial Vehicle
NASA Astrophysics Data System (ADS)
Tunik, A. A.; Nadsadnaya, O. I.
2018-03-01
The program adaptation of the controller for the flight control system (FCS) of an unmanned aerial vehicle (UAV) is considered. Linearized flight dynamic models depend mainly on the true airspeed of the UAV, which is measured by the onboard air data system. This enables its use for program adaptation of the FCS over the full range of altitudes and velocities that defines the flight operating range. An FCS with program adaptation based on static feedback (SF) is selected. The SF parameters for every sub-range of true airspeed are determined using the linear matrix inequality approach for discrete systems to synthesize a suboptimal robust H∞ controller. Lagrange interpolation between true-airspeed sub-ranges provides continuous adaptation. The efficiency of the proposed approach is demonstrated on the example of a heading stabilization system.
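The scheduling step can be sketched as Lagrange interpolation of precomputed static-feedback gains over true airspeed. The node airspeeds and gain values below are hypothetical placeholders, not the gains synthesized in the paper:

```python
def lagrange_gain(v, nodes, gains):
    """Lagrange interpolation of a scheduled feedback gain over true
    airspeed v: `nodes` are the sub-range design airspeeds (m/s) and
    `gains` the corresponding precomputed static-feedback gains."""
    total = 0.0
    for i, (vi, ki) in enumerate(zip(nodes, gains)):
        basis = 1.0
        for j, vj in enumerate(nodes):
            if j != i:
                basis *= (v - vj) / (vi - vj)  # Lagrange basis polynomial
        total += ki * basis
    return total

# hypothetical design points: gains synthesized at 15, 25 and 35 m/s
nodes, gains = [15.0, 25.0, 35.0], [-2.1, -1.4, -0.9]
```

At a design airspeed the interpolant returns exactly the synthesized gain; between nodes it varies smoothly, which is what makes the adaptation continuous.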
Synchronization control in multiplex networks of nonlinear multi-agent systems
NASA Astrophysics Data System (ADS)
He, Wangli; Xu, Zhiwei; Du, Wenli; Chen, Guanrong; Kubota, Naoyuki; Qian, Feng
2017-12-01
This paper is concerned with synchronization control of a multiplex network, in which two different kinds of relationships among agents coexist. Hybrid coupling, including continuous linear coupling and impulsive coupling, is proposed to model the coexisting distinguishable interactions. First, by adding impulsive controllers on a small portion of agents, local synchronization is analyzed by linearizing the error system at the desired trajectory. Then, global synchronization is studied based on the Lyapunov stability theory, where a time-varying coupling strength is involved. To further deal with the time-varying coupling strength, an adaptive updating law is introduced and a corresponding sufficient condition is obtained to ensure synchronization of the multiplex network towards the desired trajectory. Networks of Chua's circuits and other chaotic systems with double layers of interactions are simulated to verify the proposed method.
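The role of the impulsive layer can be illustrated with a scalar toy model of the synchronization error: between impulses the error drifts along unstable continuous dynamics, and each impulse contracts it, so synchronization requires the contraction to outpace the drift. This is a sketch of the mechanism only, with hypothetical parameters, not the paper's Lyapunov analysis:

```python
import math

def final_error(a=0.4, mu=0.3, tau=0.5, n_impulses=20):
    """Impulsive synchronization sketch: between impulses the sync error
    grows as e' = a*e (unstable continuous part); each impulse scales it
    by mu.  The error contracts overall iff mu * exp(a*tau) < 1."""
    e = 1.0
    for _ in range(n_impulses):
        e *= math.exp(a * tau)  # continuous drift over one interval
        e *= mu                 # impulsive correction
    return e
```

With mu = 0.3 the condition mu*exp(a*tau) < 1 holds and the error vanishes; a weaker impulse (mu = 0.9) violates it and the error diverges, mirroring the sufficient conditions derived in the paper.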
NASA Technical Reports Server (NTRS)
1980-01-01
A simple procedure to evaluate actual evaporation was derived by linearizing the surface energy balance equation, using Taylor's expansion. The original multidimensional hypersurface could be reduced to a linear relationship between evaporation and surface temperature or to a surface relationship involving evaporation, surface temperature and albedo. This procedure permits a rapid sensitivity analysis of the surface energy balance equation as well as a speedy mapping of evaporation from remotely sensed surface temperatures and albedo. Comparison with experimental data yielded promising results. The validity of evapotranspiration and soil moisture models in semiarid conditions was tested. Wheat was the crop chosen for a continuous measurement campaign made in the south of Italy. Radiometric, micrometeorologic, agronomic and soil data were collected for processing and interpretation.
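The linearization idea (a first-order Taylor expansion turning the balance into a linear relation between evaporation and surface temperature) can be sketched generically. The toy balance and its coefficients below are hypothetical stand-ins for the actual surface energy balance terms:

```python
def linearize(f, t0, dt=1e-4):
    """First-order Taylor expansion of evaporation f(Ts) about Ts = t0,
    returning (intercept, slope) so that f(Ts) ~ intercept + slope*Ts.
    A central difference stands in for the analytic derivative."""
    slope = (f(t0 + dt) - f(t0 - dt)) / (2 * dt)
    return f(t0) - slope * t0, slope

# hypothetical toy balance: evaporation as net radiation minus a
# sensible-heat term growing with surface temperature (W/m^2, K)
def evap(ts, rn=500.0, rho_cp_over_ra=20.0, ta=293.0):
    return rn - rho_cp_over_ra * (ts - ta)
```

Once the intercept and slope are fixed for a reference state, evaporation can be mapped directly from remotely sensed surface temperature without re-solving the full balance at every pixel.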
NASA Astrophysics Data System (ADS)
Ham, Yoo-Geun; Song, Hyo-Jong; Jung, Jaehee; Lim, Gyu-Ho
2017-04-01
This study introduces an altered version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU), to enhance the assimilation accuracy of the IAU while retaining the continuity of the analysis. Like the IAU, the NIAU is designed to add analysis increments at every model time step to improve continuity in intermittent data assimilation. Unlike the IAU, however, the NIAU applies time-evolved forcing, computed with the forward operator, as corrections to the model. The NIAU solution is more accurate than that of the IAU, whose analysis is performed at the start of the time window over which the IAU forcing is added. This is because, in linear systems, the NIAU solution equals that of an intermittent data assimilation method at the end of the assimilation interval. To give the NIAU a filtering property, the forward operator used to propagate the increment is reconstructed from only the dominant singular vectors. These advantages of the NIAU are illustrated using the simple 40-variable Lorenz model.
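The equivalence claimed for linear systems can be checked on a scalar model x -> m*x: propagating each per-step increment with the forward operator makes the end-of-window state match an intermittent update applied at the start of the window, whereas the standard IAU does not. A minimal sketch with a scalar stand-in for the forward operator:

```python
def iau_end(x, delta, m=0.95, n=10):
    """Standard IAU: add delta/n at each of n steps of the linear model
    x -> m*x.  The increment itself is not propagated by the dynamics."""
    for _ in range(n):
        x = m * x + delta / n
    return x

def niau_end(x, delta, m=0.95, n=10):
    """NIAU sketch: the per-step forcing is the analysis increment
    propagated by the (scalar) forward operator, so the end-of-window
    state matches an intermittent update applied at the window start."""
    for k in range(1, n + 1):
        x = m * x + (m ** k) * delta / n
    return x
```

For x0 = 1 and delta = 0.5 the NIAU end state equals m**n * (x0 + delta) exactly, while the IAU end state differs, illustrating why the NIAU analysis is the more accurate of the two.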
TACD: a transportable ant colony discrimination model for corporate bankruptcy prediction
NASA Astrophysics Data System (ADS)
Lalbakhsh, Pooia; Chen, Yi-Ping Phoebe
2017-05-01
This paper presents a transportable ant colony discrimination strategy (TACD) to predict corporate bankruptcy, a topic of vital importance that is attracting increasing interest in the field of economics. The proposed algorithm uses financial ratios to build a binary prediction model for companies with the two statuses of bankrupt and non-bankrupt. At its core, the algorithm takes advantage of an improved version of continuous ant colony optimisation (CACO), which is used to create an accurate, simple and understandable linear model for discrimination. This also enables the algorithm to work with continuous values, leading to more efficient learning and adaptation by avoiding data discretisation. We conduct a comprehensive performance evaluation on three real-world data sets under a stratified cross-validation strategy. In three different scenarios, TACD is compared with 11 other bankruptcy prediction strategies. We also discuss the efficiency of the attribute selection methods used in the experiments. In addition to its simplicity and understandability, statistical significance tests confirm the efficiency of TACD against the other prediction algorithms on both measures, AUC and accuracy.
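The CACO core (an archive of good solutions, with new candidates sampled from Gaussians centred on archive members) can be sketched in the style of ACO_R on a toy objective. This simplified loop is a stand-in for the paper's improved variant; the archive size, ant count and width factor are hypothetical:

```python
import random

def caco_minimize(f, dim=2, ants=20, archive=10, iters=200, xi=0.85, seed=1):
    """ACO_R-style continuous ant colony optimisation sketch: keep an
    archive of the best solutions found and sample new candidates from
    Gaussians centred on archive members, with widths taken from the
    archive's mean absolute deviation (simplified: uniform guide choice)."""
    rng = random.Random(seed)
    sols = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(archive)]
    sols.sort(key=f)
    for _ in range(iters):
        new = []
        for _ in range(ants):
            guide = rng.choice(sols)          # archive member to sample around
            x = []
            for d in range(dim):
                width = xi * sum(abs(s[d] - guide[d]) for s in sols) / (len(sols) - 1)
                x.append(rng.gauss(guide[d], width + 1e-12))
            new.append(x)
        sols = sorted(sols + new, key=f)[:archive]  # keep the best
    return sols[0]

best = caco_minimize(lambda x: sum(v * v for v in x))
```

In the bankruptcy setting the decision variables would be the weights of the linear discriminant and the objective a classification loss over the financial-ratio data; the sampling loop itself is unchanged.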
Continuous movement decoding using a target-dependent model with EMG inputs.
Sachs, Nicholas A; Corbett, Elaine A; Miller, Lee E; Perreault, Eric J
2011-01-01
Trajectory-based models that incorporate target position information have been shown to accurately decode reaching movements from bio-control signals, such as muscle (EMG) and cortical activity (neural spikes). One major hurdle in implementing such models for neuroprosthetic control is that they are inherently designed to decode single reaches from a position of origin to a specific target. Gaze direction can be used to identify appropriate targets; however, information regarding movement intent is needed to determine when a reach is meant to begin and when it has been completed. We used linear discriminant analysis to classify limb states into movement classes based on EMG recorded from a sparse set of shoulder muscles. We then used the detected state transitions to update target information in a mixture of Kalman filters that incorporated target position explicitly in the state, and used EMG activity to decode arm movements. Updating the target position initiated movement along new trajectories, allowing a sequence of appropriately timed single reaches to be decoded in series and enabling highly accurate continuous control.
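The target-in-state idea can be sketched with a scalar filter in which the motion model pulls the decoded position toward the currently selected target, so that switching the target (as the EMG-classified state transitions do above) redirects the trajectory. This sketch simplifies by treating the target component as a known input rather than estimating it, and all parameters are hypothetical:

```python
def decode(observations, targets, alpha=0.1, q=0.01, r=0.25):
    """Scalar sketch of a target-dependent Kalman filter: the motion
    model pulls the position estimate toward the current target, and
    noisy position-related observations (standing in for EMG-derived
    signals) correct it.  Changing `targets` mid-sequence redirects
    the decoded trajectory."""
    x, p = 0.0, 1.0                      # position estimate and variance
    out = []
    for z, tgt in zip(observations, targets):
        # predict: attract the state toward the selected target
        x = x + alpha * (tgt - x)
        p = (1 - alpha) ** 2 * p + q
        # update with a noisy observation of position
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        out.append(x)
    return out
```

Feeding a target switch halfway through makes the decoded position converge to the first target and then move to the second, which is the mechanism that lets a series of single reaches be chained into continuous control.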